We recorded this episode of Mind Your Business six months to the day after Hurricane Helene made her way through the High Country. We start the program with a message of hope to sow into our daily conversations, along with some updates on where progress is emerging. Earlier this week, Cheap Joe's Art Stuff announced it is closing the doors on a 40-year run as one of the nation's most popular art supply depots. Joseph Miller joins us to answer the question of why the time to say goodbye is now. We also recap the 9th annual 4 Under 40 awards and give the details on the next big Chamber event. Fore! Mind Your Business is produced weekly by the Boone Area Chamber of Commerce. The program is made possible thanks to the sponsorship support of Appalachian Commercial Real Estate. Support the show
In this episode, Nate Gilmore has a conversation with Michael Thompson, the Church of the Nazarene's General Counsel, and Joseph Miller, the associate General Counsel. Mike and Joe discuss what the office of the General Counsel does and how their previous legal experience helps serve the Church of the Nazarene's mission today. Lifelong Learning Code: 80890 Click here to learn about Lifelong Learning
We present gradient routing, a way of controlling where learning happens in neural networks. Gradient routing applies masks to limit the flow of gradients during backpropagation. By supplying different masks for different data points, the user can induce specialized subcomponents within a model. We think gradient routing has the potential to train safer AI systems, for example, by making them more transparent, or by enabling the removal or monitoring of sensitive capabilities. In this post, we: Show how to implement gradient routing. Briefly state the main results from our paper, on... Controlling the latent space learned by an MNIST autoencoder so that different subspaces specialize to different digits; Localizing computation in language models: (a) inducing axis-aligned features and (b) demonstrating that information can be localized then removed by ablation, even when data is imperfectly labeled; and Scaling oversight to efficiently train a reinforcement learning policy even with [...] --- Outline: (01:48) Gradient routing (03:02) MNIST latent space splitting (04:31) Localizing capabilities in language models (04:36) Steering scalar (05:46) Robust unlearning (09:06) Unlearning virology (10:38) Scalable oversight via localization (15:28) Key takeaways (15:32) Absorption (17:04) Localization avoids Goodharting (18:02) Key limitations (19:47) Alignment implications (19:51) Robust removal of harmful capabilities (20:19) Scalable oversight (21:36) Specialized AI (22:52) Conclusion The original text contained 1 footnote which was omitted from this narration. --- First published: December 6th, 2024 Source: https://www.lesswrong.com/posts/nLRKKCTtwQgvozLTN/gradient-routing-masking-gradients-to-localize-computation --- Narrated by TYPE III AUDIO.
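A minimal PyTorch sketch of the masking idea described above (this is not the authors' code; the module, the detach-based trick, and the digit-based routing rule are illustrative assumptions): the forward value is unchanged, but gradient flows only through the output coordinates selected by each example's mask, so different data points train different parts of the layer.

```python
import torch
import torch.nn as nn

class GradientRoutedLinear(nn.Module):
    """Linear layer whose gradient flow can be masked per example.

    The forward value is unchanged; during backprop, gradient reaches only
    the output coordinates where grad_mask == 1, because the complementary
    part of the activation is detached from the graph.
    """
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)

    def forward(self, x: torch.Tensor, grad_mask: torch.Tensor) -> torch.Tensor:
        h = self.linear(x)
        return grad_mask * h + (1.0 - grad_mask) * h.detach()

# Toy usage: route gradients from digit-0 examples into the first half of the
# hidden units, and gradients from all other digits into the second half.
layer = GradientRoutedLinear(784, 64)
x = torch.randn(8, 784)
labels = torch.randint(0, 10, (8,))
mask = torch.zeros(8, 64)
mask[labels == 0, :32] = 1.0
mask[labels != 0, 32:] = 1.0
layer(x, mask).sum().backward()  # each example only updates its routed subspace
```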
So... Did Dr. Joseph Miller answer the question? Do gods still exist? Listen here for my response to the episode! Support this show!! Monthly support: https://podcasters.spotify.com/pod/show/biblically-speaking-cb/support One-time donation: venmo.com/cassian-bellino Follow Biblically Speaking on Instagram! https://www.instagram.com/thisisbiblicallyspeaking/ #demon #bible #podcast #polytheism --- Support this podcast: https://podcasters.spotify.com/pod/show/biblically-speaking-cb/support
Do gods (not God) still exist today? Do we currently live in a polytheistic culture? What are the key differences between the God of the Israelites and the gods of Mesopotamian cultures, especially with regard to creation stories? This week we have Dr. Joseph Miller on the show. Dr. Joseph R. Miller's love for learning is reflected in his diverse educational background. Guided by a rich knowledge-base in the fields of engineering, theology, philosophy, and ethics, Dr. Miller possesses a unique interdisciplinary perspective that enriches his approach to teaching, speaking, and training. Dr. Miller is currently Associate Professor of Christian Worldview at Grand Canyon University. When Dr. Miller is not speaking or teaching, he enjoys doing life with his wife Suzanne and their three sons. You can read more of his work, listen to his podcast ‘RaZe the Roof,' and watch his videos at MoreThanCake.org. Support this show!! Monthly support: https://podcasters.spotify.com/pod/show/biblically-speaking-cb/support One-time donation: venmo.com/cassian-bellino Additional Links: https://morethancake.org/ Joseph's Substack: https://jrmiller777.substack.com/p/media-information Follow More Than Cake on Instagram: https://www.instagram.com/morethancake777/ Find Joseph on X: https://x.com/i/flow/login?redirect_after_login=%2Fjrmiller777 Schaeffer Dialogues: https://podcasts.apple.com/us/podcast/the-schaeffer-dialogues/id1773486296 Raze the Roof Podcast: https://podcasts.apple.com/us/podcast/raze-the-roof/id1742835271 --- Support this podcast: https://podcasters.spotify.com/pod/show/biblically-speaking-cb/support
In a bleak-sounding future, an A.I. soldier has determined that the only way to end war is to end humanity. Call us and leave a voicemail at 1 (305) 563-6334 Music provided by: Atlas Sound Arts This is SciFi Voice: Dear Nikky Mentions: James Allocca, Gary Bloom, Sleepy Captain Kyoshoo, Madison on the Air, Dr. Dilip Kumar Gupta, Jesse Einstein, temesgen getachew, Joseph Miller, 2Nin Ent Ads: Seldon Crisis, Following Films Sci-Fi fan or creator? Follow the hashtag #ThisisSciFi for more sci-fi goodness! #ScienceFiction And join our Discord Server! #WeNeedRobertToWatchBabylon5 Discord A "Spotify for Podcasters" podcast --- Support this podcast: https://podcasters.spotify.com/pod/show/scifiremnant/support
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transformer Circuit Faithfulness Metrics Are Not Robust, published by Joseph Miller on July 12, 2024 on LessWrong. When you think you've found a circuit in a language model, how do you know if it does what you think it does? Typically, you ablate / resample the activations of the model in order to isolate the circuit. Then you measure if the model can still perform the task you're investigating. We identify six ways in which ablation experiments often vary.[1][2] How do these variations change the results of experiments that measure circuit faithfulness? TL;DR We study three different circuits from the literature and find that measurements of their faithfulness are highly dependent on details of the experimental methodology. The IOI and Docstring circuits in particular are much less faithful than reported when tested with a more precise methodology. The correct circuit for a set of prompts is undefined. The type of ablation you use to isolate the circuit determines the task that you are asking the circuit to perform - and therefore also the optimal circuit. This is especially important because previous work in automatic circuit discovery has tested algorithms by their ability to recover these "ground-truth" circuits from the literature - without considering these potential pitfalls and nuances. Case Studies We look at three circuits from the mech interp literature to demonstrate that faithfulness metrics are highly sensitive to the details of experimental setup. Indirect Object Identification Circuit The IOI circuit is the most well known circuit in a language model. It computes completions to prompts of the form: "When Mary and John went to the store, John gave a bottle of milk to ____" The circuit is specified as a graph of important attention heads (nodes) and the interactions between them (edges) as applied to a specific sequence of tokens. The authors report that the circuit explains 87% of the logit difference between the two name tokens. They find this number by passing some inputs to the model and ablating all activations outside of the circuit. Then they measure how much of the logit difference between the correct and incorrect name logits remains. However, an important detail is that they arrived at this number by ablating the nodes (heads) outside of the circuit, not by ablating the edges (interactions between heads) outside of the circuit. So they don't ablate, for example, the edges from the previous token heads to the name mover heads, even though these are not part of the circuit (effectively including more edges in the circuit). We calculate the logit difference recovered (defined below) when we ablate the edges outside of the circuit instead. They ablate the heads by replacing their activations with the mean value calculated over the "ABC distribution", in which the names in the prompts are replaced by random names.[3] In our experiments, we also try resampling the activations from different prompts (taking individual prompt activations instead of averaging). The first thing that jumps out from the box plots above is the very large range of results from different prompts. The charts here are cut off and some points are over 10,000%. This means that although the average logit difference recovered is reasonable, few prompts actually have a logit difference recovered close to 100%. 
And we see that ablating the edges instead of the nodes gives a much higher average logit difference recovered - close to 150% (which means that the isolated circuit has a greater logit difference between the correct and incorrect names than the un-ablated model). So the edge-based circuit they specified is much less faithful than the node-based circuit they tested. The authors calculate the 87% result as the ratio of the expected difference (over a set of prompts) in the ...
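For readers who want to recompute this kind of number themselves, here is a small sketch of the "logit difference recovered" ratio discussed above (the narration truncates the post's exact definition; this assumes the common form in which the ablated model's logit difference is compared to clean and fully corrupted baselines, and the example numbers are made up):

```python
def logit_diff_recovered(clean_ld: float, corrupted_ld: float, ablated_ld: float) -> float:
    """Ratio of the clean-vs-corrupted logit difference that survives when
    everything outside the circuit is ablated. Multiply by 100 for a percentage.
    Values near 1.0 suggest a faithful circuit; values far above 1.0, like the
    ~150% edge-ablation result above, are also a sign of unfaithfulness."""
    return (ablated_ld - corrupted_ld) / (clean_ld - corrupted_ld)

# Hypothetical numbers: clean logit diff 3.0, fully corrupted baseline 0.2,
# circuit-only (ablated) model 2.6 -> about 0.86 recovered.
print(logit_diff_recovered(clean_ld=3.0, corrupted_ld=0.2, ablated_ld=2.6))
```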
Greg Kelley is on location with special guests for another field edition of the Unknown Nations Podcast! Join the conversation with Joseph Miller and John Vandenoever from In Touch Ministries as they share their firsthand experiences after traveling through the Philippines and Indonesia. The guys discuss trip highlights, witnessing the impact of the Messenger audio Bibles, and being deeply moved by the dedication of local leaders on the field. Tune in for real-life testimonies, fresh ministry insights, and learn more about the unique challenges and triumphs of sharing the gospel in these regions. Explore more about Unknown Nations at www.UnknownNations.com or connect with us at UnknownNations.com/contact.
Former Dauphin County Chief Detective Thomas Brennan's phone rings between 3 am and 4 am with a responding officer requesting assistance at a crime scene. When the detective arrives, he notices a car pulled close to a ditch. Brennan, who received training at the FBI Behavioral Science Unit during his tenure with the state police, knew immediately what he was dealing with when he arrived at the Conrail crime scene. After Brennan asks the officer for details, the officer explains that a Conrail train security guard had stumbled onto a man attacking a woman. The man was startled and ran off. Thankfully the victim was injured but alive, so the guard called for police and waited with her until help arrived. Detective Brennan noted that the vehicle's trunk was open and inside was what many would call a "kit": rope, duct tape, and other things to help someone kill or at least bind another person. Outside the car on the ground were plastic mats, seemingly to cover the ground inside the ditch. It was now clear to Brennan that this man had planned to kill his victim, put her in that ditch, and drive away. What was also clear to the detective was that this was not the man's first time killing someone; he had done it before and he knew what he was doing. Join Jen and Cam of Our True Crime Podcast as we discuss 'Steelton Serial Killer: Joe Miller.' As always, the listener discretion is by our bestie Edward @octoberpodVHS. Just like our music is by our bestie Nico @theinkypawprint. This episode is sponsored by the amazing sleepcreme.com. If you don't experience better sleep, they will refund your money. So what do you have to lose except another night of amazing rest? Order your bottle at sleepcreme.com. That is sleepcreme.com. You will love it. Sources:
https://www.cor.pa.gov/About%20Us/1989-SCI-Camp-Hill-Riot/Documents/01-oral%20histories%20-%20transcripts/ORAL%20HISTORY%20--%201989%20Camp%20Hill%20Riot%20-%20Joe%20Miller.pdf
https://www.pennlive.com/news/2016/04/the_grisly_history_of_joseph_m.html
http://inmatelocator.cor.pa.gov/#/
https://criminaldiscoursepodcast.com/joseph-miller/
https://www.fox43.com/article/news/local/contests/convicted-serial-killer-joseph-miller-charged-in-1997-cold-case-murder/521-c6f1
https://en.wikipedia.org/wiki/Joseph_Daniel_Miller
https://local21news.com/news/local/new-murder-charge-filed-against-most-prolific-killer-in-dauphin-co-history
https://www.inquirer.com/philly/news/local/20080725_Pa__court_backs_life_in_prison_for_killer.html
"The compulsion to rape and kill: Inside Steelton serial killer Joseph Miller's mind," story by John Luciew | jluciew@pennlive.com, https://www.pennlive.com/news/2016/04/the_grisly_history_of_joseph_m.html
Joseph Miller, Co-Chair, Antitrust Practice, Mintz Levin, and Holden Brooks, Partner, McGuireWoods, discuss recent developments in the health care antitrust space. They cover the DOJ's health care monopolies task force, private equity in health care, Hart-Scott-Rodino changes, the FTC's final rule on non-competes, state statutes, Indiana's pre-transaction review law, and entrenchment in the health care market. Joseph and Holden spoke about this topic at AHLA's 2024 Advising Providers: Legal Strategies for AMCs, Physicians, and Hospitals, in New Orleans, LA. To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How To Do Patching Fast, published by Joseph Miller on May 14, 2024 on LessWrong. This post outlines an efficient implementation of Edge Patching that massively outperforms common hook-based implementations. This implementation is available to use in my new library, AutoCircuit, and was first introduced by Li et al. (2023). What is activation patching? I introduce new terminology to clarify the distinction between different types of activation patching. Node Patching Node Patching (aka. "normal" activation patching) is when some activation in a neural network is altered from the value computed by the network to some other value. For example we could run two different prompts through a language model and replace the output of Attn 1 when the model is given some input 1 with the output of the head when the model is given some other input 2. We will use the running example of a tiny, 1-layer transformer, but this approach generalizes to any transformer and any residual network. All the nodes downstream of Attn 1 will be affected by the patch. Edge Patching If we want to make a more precise intervention, we can think about the transformer differently, to isolate the interactions between components. Now we can patch the edge Attn 1 -> MLP and only nodes downstream of MLP will be affected (eg. Attn 1->Output is unchanged). Edge Patching has not been explicitly named in any prior work. Path Patching Path Patching refers to the intervention where an input to a path is replaced in the 'treeified' view of the model. The treeified view is a third way of thinking about the model where we separate each path from input to output. We can implement an equivalent intervention to the previous diagram as follows: In the IOI paper, 'Path Patching' the edge Component 1 -> Component 2 means Path Patching all paths of the form where all components between Component 1 and Component 2 are MLPs[1]. However, it can be easy to confuse Edge Patching and Path Patching because if we instead patch all paths of the form this is equivalent to Edge Patching the edge Component 1->Component 2. Edge Patching all of the edges which have some node as source is equivalent to Node Patching that node. AutoCircuit does not implement Path Patching, which is much more expensive in general. However, as explained in the appendix, Path Patching is sometimes equivalent to Edge Patching. Fast Edge Patching We perform two steps. First we gather the activations that we want to patch into the model. There's many ways to do this, depending on what type of patching you want to do. If we just want to do zero ablation, then we don't need to even run the model. But let's assume we want to patch in activations from a different, corrupt input. We create a tensor, Patch Activations, to store the outputs of the source of each edge and we write to the tensor during the forward pass. Each source component has a row in the tensor, so the shape is [n_sources, batch, seq, d_model].[2] Now we run the forward pass in which we actually do the patching. We write the outputs of each edge source to a different tensor, Current Activations, of the same shape as Patch Activations. When we get to the input of the destination component of the edge we want to patch, we add the difference between the rows of Patch Activations and Current Activations corresponding to the edge's source component output. 
This works because the difference in input to the edge destination is equal to the difference in output of the source component.[3] Now it's straightforward to extend this to patching multiple edges at once by subtracting the entire Current Activations tensor from the entire Patch Activations tensor and multiplying by a Mask tensor of shape [n_sources] that has a single value for each input edge. By creating a Mask tensor for each destination node w...
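A minimal sketch of the masked update described above (the tensor names follow the post, but the shapes, the broadcasting, and the helper function are assumptions rather than AutoCircuit's actual API): for one destination node, every patched incoming edge contributes the difference between its source's patch and current activations.

```python
import torch

def patch_destination_input(
    dest_input: torch.Tensor,     # [batch, seq, d_model] original input to the destination node
    patch_acts: torch.Tensor,     # [n_sources, batch, seq, d_model] source outputs on the corrupt prompt
    current_acts: torch.Tensor,   # [n_sources, batch, seq, d_model] source outputs on the current prompt
    mask: torch.Tensor,           # [n_sources] 1.0 for each incoming edge being patched, else 0.0
) -> torch.Tensor:
    """Apply fast edge patching for one destination node: add the difference
    between patch and current activations for every masked source edge."""
    diff = patch_acts - current_acts                        # [n_sources, batch, seq, d_model]
    delta = (mask[:, None, None, None] * diff).sum(dim=0)   # [batch, seq, d_model]
    return dest_input + delta

# Toy shapes: 3 upstream sources feeding one destination node.
n_sources, batch, seq, d_model = 3, 2, 5, 16
dest_input = torch.randn(batch, seq, d_model)
patch_acts = torch.randn(n_sources, batch, seq, d_model)
current_acts = torch.randn(n_sources, batch, seq, d_model)
mask = torch.tensor([1.0, 0.0, 1.0])  # patch the edges from sources 0 and 2 only
patched = patch_destination_input(dest_input, patch_acts, current_acts, mask)
```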
GPT-5 training is probably starting around now. It seems very unlikely that GPT-5 will cause the end of the world. But it's hard to be sure. I would guess that GPT-5 is more likely to kill me than an asteroid, a supervolcano, a plane crash or a brain tumor. We can predict fairly well what the cross-entropy loss will be, but pretty much nothing else. Maybe we will suddenly discover that the difference between GPT-4 and superhuman level is actually quite small. Maybe GPT-5 will be extremely good at interpretability, such that it can recursively self-improve by rewriting its own weights. Hopefully model evaluations can catch catastrophic risks before wide deployment, but again, it's hard to be sure. GPT-5 could plausibly be devious enough to circumvent all of our black-box testing. Or it may be that it's too late as soon as the model has been trained. These [...] --- Outline: (01:10) How do we do better for GPT-6? (02:02) Plan B: Mass protests against AI (03:06) No innovation required (04:36) The discomfort of doing something weird (05:53) Preparing for the moment --- First published: April 30th, 2024 Source: https://forum.effectivealtruism.org/posts/J8sw7o5mWbGFaBW4o/why-i-m-doing-pauseai --- Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I'm doing PauseAI, published by Joseph Miller on April 30, 2024 on LessWrong. GPT-5 training is probably starting around now. It seems very unlikely that GPT-5 will cause the end of the world. But it's hard to be sure. I would guess that GPT-5 is more likely to kill me than an asteroid, a supervolcano, a plane crash or a brain tumor. We can predict fairly well what the cross-entropy loss will be, but pretty much nothing else. Maybe we will suddenly discover that the difference between GPT-4 and superhuman level is actually quite small. Maybe GPT-5 will be extremely good at interpretability, such that it can recursively self-improve by rewriting its own weights. Hopefully model evaluations can catch catastrophic risks before wide deployment, but again, it's hard to be sure. GPT-5 could plausibly be devious enough to circumvent all of our black-box testing. Or it may be that it's too late as soon as the model has been trained. These are small, but real possibilities and it's a significant milestone of failure that we are now taking these kinds of gambles. How do we do better for GPT-6? Governance efforts are mostly focussed on relatively modest goals. Few people are directly aiming at the question: how do we stop GPT-6 from being created at all? It's difficult to imagine a world where governments actually prevent Microsoft from building a $100 billion AI training data center by 2028. In fact, OpenAI apparently fears governance so little that they just went and told the UK government that they won't give it access to GPT-5 for pre-deployment testing. And the number of safety-focussed researchers employed by OpenAI is dropping rapidly. Hopefully there will be more robust technical solutions for alignment available by the time GPT-6 training begins. But few alignment researchers actually expect this, so we need a backup plan. Plan B: Mass protests against AI In many ways AI is an easy thing to protest against. Climate protesters are asking to completely reform the energy system, even if it decimates the economy. Israel / Palestine protesters are trying to sway foreign policies on an issue where everyone already holds deeply entrenched views. Social justice protesters want to change people's attitudes and upend the social system. AI protesters are just asking to ban a technology that doesn't exist yet. About 0% of the population deeply cares that future AI systems are built. Most people support pausing AI development. It doesn't feel like we're asking normal people to sacrifice anything. They may in fact be paying a large opportunity cost on the potential benefits of AI, but that's not something many people will get worked up about. Policy-makers, CEOs and other key decision makers that governance solutions have to persuade are some of the only groups that are highly motivated to let AI development continue. No innovation required Protests are the most unoriginal way to prevent an AI catastrophe - we don't have to do anything new. Previous successful protesters have made detailed instructions for how to build a protest movement. This is the biggest advantage of protests compared to other solutions - it requires no new ideas (unlike technical alignment) and no one's permission (unlike governance solutions). A sufficiently large number of people taking to the streets forces politicians to act.
A sufficiently large and well organized special interest group can control an issue: I walked into my office while this was going on and found a sugar lobbyist hanging around, trying to stay close to the action. I felt like being a smart-ass so I made some wise-crack about the sugar industry raping the taxpayers. Without another word, I walked into my private office and shut the door. I had no real plan to go after the sugar people. I was just screwing with the guy. My phone did no...
Lucas and Ashley welcome Joseph Miller.
Last month I was at the National Cyber Summit in Huntsville and I interviewed a bunch of folks about their particular cyber area of interest, their business, or how they got into cybersecurity. My first interview for the podcast is with Joseph Miller of Aeyon and his unique entry into the cybersecurity community. More interviews coming sooner rather than later. Feel free to email thoughts, comments, or suggestions to darren@thecyburguy.com. You can also follow me on a variety of Social Media sites: Linkedin: linkedin.com/in/darrenmott Substack: thecyburguy.substack.com Facebook: https://www.facebook.com/profile.php?id=100050113752067 X (Twitter): @thecyburguy Instagram: @thecyburguy Tik Tok: Just Kidding: DELETE TIKTOK NOW!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Visible loss landscape basins don't correspond to distinct algorithms, published by Mikhail Samin on July 28, 2023 on LessWrong. Thanks to Justis, Arthur Conmy, Neel Nanda, Joseph Miller, and Tilman Räuker for their feedback on a draft. I feel like many people haven't noticed an important result of mechanistic interpretability analysis of grokking, and so haven't updated how they think about loss landscapes and algorithms that neural networks end up implementing. I think this has implications for alignment research. When thinking about grokking, people often imagine something like this: the neural network implements Algorithm 1 (e.g., memorizes the training data), achieves ~ the lowest loss available via memorization, then moves around the bottom of the Algorithm 1 basin and after a while, stumbles across a path to Algorithm 2 (e.g., the general algorithm for modular addition). But the mechanistic interpretability of grokking analysis has shown that this is not true! Approximately from the start of the training, Algorithm 1 is most of what the circuits are doing and what almost entirely determines the neural network's output; but at the same time, the entire time the neural network's parameters visibly move down the wider basin, they don't just become better at memorization; they increasingly implement the circuits for Algorithm 1 and the circuits for Algorithm 2, in superposition. (Neel Nanda et al. have shown that the circuits that at the end implement the general algorithm for modular addition start forming approximately at the start of the training: the gradient was mostly an arrow towards memorization, but also, immediately from the initialization of the weights, a bit of an arrow pointing towards the general algorithm. The circuits were gradually tuned throughout the training. The noticeable change in the test loss starts occurring when the circuits are already almost right.) A path through the loss landscape visible in 3D doesn't correspond to how and what the neural network is actually learning. Almost all of the changes to the loss are due to the increasingly good implementation of Algorithm 1; but apparently, the entire time, the gradient also points towards some faraway implementation of Algorithm 2. Somehow, the direction in which Algorithm 2 lies is also visible to the derivative, and moving the parameters in the direction the gradient points means mostly increasingly implementing Algorithm 1, and also increasingly implementing the faraway Algorithm 2. "Grokking", visible in the test loss, is due to the change that happens when the parameters already implement Algorithm 2 accurately enough for the switch from mostly outputting the results of an implementation of Algorithm 1 to the results of an improving implementation of Algorithm 2 not to hurt the performance. Once it's the case, the neural network puts more weight into Algorithm 2 and at the same time quickly tunes it to be even more accurate (which is increasingly easy as the output is increasingly determined by the implementation of Algorithm 2). This is something many people seem to have missed. I did not expect it to be the case, was surprised, and updated how I think about loss landscapes. Does this generalize? Maybe. 
I'm not sure whether it's correct to generalize from the mechanistic interpretability analysis of grokking to neural networks in general (real LLMs are under-parameterised, while the grokking model is very over-parameterised), but I guess it might be reasonable to expect that this is how deep learning generally works. People seem to think that multi-dimensional loss landscapes of neural networks have basins for specific algorithms, and neural networks get into these depending on how relatively large these basins are, which might be caused by how simple the algorithms are, how path-depe...
Buy Some Merch to help support... Click Here!! Join the Cellar Dwellers Support Team... Click Here!! Join us in the Ol' Dirty Basement for Part 2 of serial killer Joey Miller. This week we were lucky enough to sit down with Chief Darrell Reider of the Swatara Police Department. The Chief was involved in the apprehension and interrogation of Joey Miller and was able to shed some light on the whole case. A great interview you don't want to miss. Tune in and enjoy!!! Special Thanks to Chief Darrell Reider for hanging in the basement!! Sounds: https://freesound.org/people/Sami_Hiltunen/sounds/527187/ (eerie intro music) Sources: "Compulsion to rape and kill - Inside Steelton serial killer Joseph Miller's mind," Patriot News, June 15, 2022, https://www.pennlive.com/news/2022/06/compulsion-to-rape-and-kill.html; "'Most prolific killer' in Dauphin Co. confesses to 1986 and 1990 murders," CBS 21, June 24, 2016, https://local21news.com/news/local/new-murder-charge-filed-against-most-prolific-killer-in-dauphin-co-history; "Lives Cut Short: The Victims of Serial Killer Joseph D. Miller," Patriot News, April 14, 2016, https://www.pennlive.com/news/2016/04/the_victims_of_serial_killer_j.html; "Profile of Serial Killer Joseph Daniel Miller," Patriot News, April 13, 2016, https://www.pennlive.com/news/2016/04/16_years_ago_profile_of_a_seri.html. Support the show. Thanks to The Tsunami Experiment for the theme music!! Check them out here. SUPPORT US AT https://www.buzzsprout.com/1984311/supporters/new MERCH STORE https://ol-dirty-basement.creator-spring.com Find us at the following: https://oldirtybasement.buzzsprout.com WEBSITE https://www.facebook.com/odbasement/ FACEBOOK https://www.instagram.com/oldirtybasement/ INSTAGRAM https://mobile.twitter.com/odbasement TWITTER https://www.tiktok.com/@oldirtybasementpodcast TIKTOK oldirtybasement@outlook.com EMAIL
Buy Some Merch to help support... Click Here!! Join the Cellar Dwellers Support Team... Click Here!! Join us in the Ol' Dirty Basement for part 1 of 2 on local serial killer Joey Miller. In this first episode we go over Joey's early life and the eventual crimes that led to his arrest in the early '90s. We also had a call with Matt's dad, Steelton George, to talk a little bit about the town, and even had a little trivia on Steelton fun facts. Next week's part 2 episode will have special guest Chief Darrell, who was involved in the arrest and interrogation of Joey. Be sure to tune back in for that. Enjoy!! Sounds: https://freesound.org/people/Sami_Hiltunen/sounds/527187/ (eerie intro music) Sources: "Compulsion to rape and kill - Inside Steelton serial killer Joseph Miller's mind," Patriot News, June 15, 2022, https://www.pennlive.com/news/2022/06/compulsion-to-rape-and-kill.html; "'Most prolific killer' in Dauphin Co. confesses to 1986 and 1990 murders," CBS 21, June 24, 2016, https://local21news.com/news/local/new-murder-charge-filed-against-most-prolific-killer-in-dauphin-co-history; "Lives Cut Short: The Victims of Serial Killer Joseph D. Miller," Patriot News, April 14, 2016, https://www.pennlive.com/news/2016/04/the_victims_of_serial_killer_j.html; "Profile of Serial Killer Joseph Daniel Miller," Patriot News, April 13, 2016, https://www.pennlive.com/news/2016/04/16_years_ago_profile_of_a_seri.html; "Joseph Miller – Dauphin County Serial Killer," Criminal Discourse Podcast, December 9, 2019, https://criminaldiscoursepodcast.com/joseph-miller/. Thanks to The Tsunami Experiment for the theme music!! Check them out here. SUPPORT US AT https://www.buzzsprout.com/1984311/supporters/new MERCH STORE https://ol-dirty-basement.creator-spring.com Find us at the following: https://oldirtybasement.buzzsprout.com WEBSITE https://www.facebook.com/odbasement/ FACEBOOK https://www.instagram.com/oldirtybasement/ INSTAGRAM https://mobile.twitter.com/odbasement TWITTER https://www.tiktok.com/@oldirtybasementpodcast TIKTOK oldirtybasement@outlook.com EMAIL
ABC #049 - Part 3 Joseph Miller Huston was an up-and-coming architect who got the plum job of designing Pennsylvania's State Capitol; instead of leading him to even bigger jobs, it became his professional downfall.
J. Edward "Gas" Addicks made his fortune in the gas industry, but decided he wanted to be a United States Senator; he spent much of his wealth in a fruitless attempt at achieving his goal. Samuel "Stars and Stripes" Ashbridge would give a patriotic speech at the drop of a hat and was elected Philadelphia's mayor in 1899; he left office four years later a rich man. Fellow tour guide and Philadelphia author and historian Tom Keels will tell you his story. Joseph Miller Huston was an up-and-coming architect who got the plum job of designing Pennsylvania's State Capitol; instead of leading him to even bigger jobs, it became his professional downfall. These three men interred at Laurel Hill are remembered today for their graft and dishonesty in a city that muckraking journalist Lincoln Steffens called "corrupt but content." Learn about their crimes and punishments.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We Found An Neuron in GPT-2, published by Joseph Miller on February 11, 2023 on LessWrong. We started out with the question: How does GPT-2 know when to use the word an over a? The choice depends on whether the word that comes after starts with a vowel or not, but GPT-2 can only output one word at a time. We still don't have a full answer, but we did find a single MLP neuron in GPT-2 Large that is crucial for predicting the token " an". And we also found that the weights of this neuron correspond with the embedding of the " an" token, which led us to find other neurons that predict a specific token. Discovering the Neuron Choosing the prompt It was surprisingly hard to think of a prompt where GPT-2 would output " an" (the leading space is part of the token) as the top prediction. Eventually we gave up with GPT-2_small and switched to GPT-2_large. As we'll see later, even GPT-2_large systematically under-predicts the token " an". This may be because smaller language models lean on the higher frequency of " a" to make a best guess. The prompt we finally found that gave a high (64%) probability for " an" was: The first sentence was necessary to push the model towards an indefinite article — without it the model would make other predictions such as "[picked] up". Before we proceed, here's a quick overview of the transformer architecture. Each attention block and MLP takes input and adds output to the residual stream. Logit Lens Using the logit lens technique, we took the logits from the residual stream between each layer and plotted the difference between logit(' an') and logit(' a'). We found a big spike after Layer 31's MLP. Activation Patching by the Layer Activation patching is a technique introduced by Meng et al. (2022) to analyze the significance of a single layer in a transformer. First, we saved the activation of each layer when running the original prompt through the model — the "clean activation". We then ran a corrupted prompt through the model: By replacing the word "apple" with "lemon", we induce the model to predict the token " a" instead of " an". With the model predicting " a" over " an", we can replace a layer's corrupted activation with its clean activation to see how much the model shifts towards the " an" token, which indicates that layer's significance to predicting " an". We repeat this process over all the layers of the model. We're mostly going to ignore attention for the rest of this post, but these results indicate that Layer 26 is where " picked" starts thinking a lot about " apple", which is obviously required to predict " an". Note: the scale on these patching graphs is the relative logit difference recovery: (PatchedLogitDiff − CorruptedLogitDiff) / (CleanLogitDiff − CorruptedLogitDiff) (i.e. "what proportion of logit(' an') − logit(' a') in the clean prompt did this patch recover?"). The two MLP layers that stand out are Layer 0 and Layer 31. We already know that Layer 0's MLP is generally important for GPT-2 to function (although we're not sure why attention in Layer 0 is important). The effect of Layer 31 is more interesting. Our results suggest that Layer 31's MLP plays a significant role in predicting the " an" token. (See this comment if you're confused how this result fits with the logit lens above.)
Finding 1: We can discover predictive neurons by activation patching individual neurons Activation patching has been used to investigate transformers by the layer, but can we push this technique further and apply it to individual neurons? Since each MLP in a transformer only has one hidden layer, each neuron's activation does not affect any other neuron in the MLP. So we should be able to patch individual neurons, because they are independent from each other in the same sense that the attention heads in a single layer are independent from each othe...
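A hedged sketch of the neuron-level patching idea described above (this uses a generic PyTorch forward hook rather than the post's actual code; the module path and neuron index in the usage comments are illustrative assumptions, not the real location of the " an" neuron):

```python
import torch

def make_neuron_patch_hook(clean_mlp_act: torch.Tensor, neuron_idx: int):
    """Forward hook that overwrites a single MLP neuron's activation in the
    corrupted run with its value from the clean run, leaving every other
    neuron untouched."""
    def hook(module, inputs, output):
        # output: [batch, seq, d_mlp] hidden activations of this MLP layer
        patched = output.clone()
        patched[:, :, neuron_idx] = clean_mlp_act[:, :, neuron_idx]
        return patched
    return hook

# Hypothetical usage (names are assumptions about the model wrapper):
# clean_act = ...  # saved MLP hidden activations from the clean ("apple") prompt
# handle = model.layers[31].mlp_hidden.register_forward_hook(
#     make_neuron_patch_hook(clean_act, neuron_idx=892))
# corrupted_logits = model(corrupted_tokens)  # run the corrupted ("lemon") prompt
# handle.remove()
```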
Sales mercenary with several interests. CSAT Solutions: My primary focus is gaining and retaining Clients for this top-shelf computer repair provider. CSAT is the behemoth of computer whole-unit depot repair, serving the largest computer manufacturers on the planet. With such expertise in printed circuit board repair, it's a natural step to add medical devices (my focus) to their portfolio, bringing the all-in-one repair model to the medical device industry. With the engineering resources that CSAT possesses, it is easy to forget about the enormous logistics back end that can benefit our future Clients, shipping directly to their customers seamlessly. LinkedIn doesn't have enough space to write about all of the capabilities of this company; best to go to www.CSAT.com. Grant Cardone Licensee: Grant Cardone has built the #1 sales training platform. In his programs, he distills forty years of sales and marketing advice.
Host Dale Cooper and Teri Rugeley, Associate Broker at Old Colony Realtors, discuss tips and strategies when tackling the fast-paced Charleston area real estate market. Teri invites home inspector Sam Wood from Sam Wood Inspections to discuss the inspection process. Teri Rugeley has over three decades of experience. Find her at Old Colony Realtors, R. Joseph Miller, Broker of Record. Teri Rugeley 304-389-3654 Old Colony Realtors 1205 Virginia St E 304-344-2581
Host Dale Cooper and Teri Rugeley, Associate Broker at Old Colony Realtors, discuss tips and strategies when tackling the fast-paced Charleston area real estate market. Teri Rugeley has over three decades of experience. Find her at Old Colony Realtors, R. Joseph Miller, Broker of Record. Teri Rugeley 304-389-3654 Old Colony Realtors 1205 Virginia St E 304-344-2581
AI technology is here to help us. It can improve research, decision-making, and efficiency in your business. AI is not here to take your job, it's here to simplify it. Join your host, Chad Burmeister, and his guest Joseph Miller to talk about the usefulness of AI in the PreSales space. Joseph is the co-founder and Chief Data Scientist of Vivun, an enterprise software company that is focused solely on presales. Learn what the AI's job is in Vivun and how it enhances the job of a sales engineer.
06/14/21 - Dan Interviews W/ Ambassador, Israel's 17th Permanent Rep. To The UN & Chairman of The World Likud, Danny Danon, OAN Chief White House Correspondent, Chanel Rion, Congressman for Florida's 19th District, Rep. Byron Donalds, Young Patriot, Jenna Miller, Concerned Parent, Joseph Miller & New York Gubernatorial Candidate, Andrew Giuliani
Host Jeff Jenkins and Teri Rugeley, Associate Broker at Old Colony Realtors, discuss tips and strategies when tackling the fast-paced Charleston area real estate market. Teri Rugeley has over three decades of experience. Find her at Old Colony Realtors, R. Joseph Miller, Broker of Record. Teri Rugeley 304-389-3654 Old Colony Realtors 1205 Virginia St E 304-344-2581
Host Dale Cooper and Teri Rugeley, Associate Broker at Old Colony Realtors, discuss tips and strategies when tackling the fast-paced Charleston area real estate market. Teri Rugeley has over three decades of experience. Find her at Old Colony Realtors, R. Joseph Miller, Broker of Record. Teri Rugeley 304-389-3654 Old Colony Realtors 1205 Virginia St E 304-344-2581
Tommy caught up with Glaswegian songwriter Joseph Miller to chat about his new single, previous releases, songwriting and of course that dream gig. Tracks from his Live at Glasgow E.P. (recorded at The Hug & Pint) were Didn't I and Vodka + Boys. His new single is Fade Away. His dream gig will happen at the SSE Hydro, where he will be supported by Aaliyah, Bobby Deans & Cookie Olafonte.
Host Dale Cooper and Teri Rugeley, Associate Broker at Old Colony Realtors, discuss tips and strategies when tackling the fast-paced Charleston area real estate market. Teri Rugeley has over three decades of experience. Find her at Old Colony Realtors, R. Joseph Miller, Broker of Record. Teri Rugeley 304-389-3654 Old Colony Realtors 1205 Virginia St E 304-344-2581
Donna starts us off with the story of Amy & Eric Mertz. In the 1990s they move into their new home in Kutztown, Pennsylvania. But strange things start to happen and there's an uneasy feeling in the air. A team of investigators comes in to try and find out what's going on. Kerri's story starts at 30:50 – it's also set in Pennsylvania in the 1990s. When Clara Johnson needs a ride home from the bar, Joseph Miller stops to pick her up. But then things quickly go from bad to worse. For images from these stories and full show notes, go to aparanormalchicks.com If you have any local true crime, local urban legend/lore, or ghost stories... we want them all!! We want to hear from YOU. Especially if you have any funny Ambien stories! Email us at aparanormalchicks@gmail.com Join The Creepinati @ www.patreon.com/theAPCpodcast Please rate and review us on Apple Podcasts and Stitcher! Thanks so much. STALK US ON SOCIAL, Y'ALL! Facebook Page Facebook Group Instagram Twitter TikTok A Paranormal Chicks is produced with assistance from Aurality. Contact will@auralitysounds.com and quote APC.
Tap into this episode to hear Joseph Miller's amazing testimony!! --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app --- Send in a voice message: https://anchor.fm/adifferentpodcast/message
Today's interview is the first of two separate interviews with environmental law subject matter experts. We have the pleasure of speaking with Mr. Joseph Miller to discuss Environmental Law and Air Force Operations. We plan to discuss some of the biggest environmental law challenges we face both within the Air Force and DoD and the interplay between environmental law and operational law – an area often overlooked but very important from a national security perspective. Mr. Miller is the Chief of the Air Force Environmental Law Field Support Center, located in San Antonio, Texas, which is now part of the Operations & International Law Directorate, Office of The Judge Advocate General. He leads a team of 32 attorneys and paralegals at 9 locations in advising the headquarters staff, major commands, and subordinate legal offices of the US Air Force on all environmental and land use statutes, regulations, and policies.
In this week's episode, we discuss how to build a food brand and the ins and outs of the food industry. We are joined by Jason Ramjit, vice president of sales at Ava Manufacturing and president of Valued Manufacturing Resources. Some of the topics discussed: How to private label a food brand, how to find trends in the food market, what the space is for vegan, health, and allergen-free food, and how to market and get your brand out there. This is a Pure Conversation Podcast. To find out more, go to pureconversation.com. Pure Conversation is an opinion-based podcast. We are not financial consultants. All financial decisions should be made with consultation from an expert. Pure Conversation Interviews are hosted by Joseph Miller.
In this week's episode, we discuss how to become a marketing powerhouse. We are joined by Matt Erickson, a marketing professional at National Positions. Some of the topics discussed: How to compete with big companies marketing-wise, SEO, automating marketing, and how to build up your company. This is a Pure Conversation Podcast. To find out more, go to pureconversation.com. Pure Conversation is an opinion-based podcast. We are not financial consultants. All financial decisions should be made with consultation from an expert. Pure Conversation is hosted by Joseph Miller and Darren May.
In this week's episode, we discuss landing a deal on Shark Tank, how to start a company and raise capital, and how to use calculated risks to succeed. We are joined by Neal Hoffman, the creator of Mensch on a Bench, a wildly successful holiday toy for the better part of a decade. Some of the topics discussed: Starting a business, raising capital from investors, how to sell yourself, thinking differently, being willing to put it all on the line, how to pitch investors even if they aren't in your space, and how to accomplish your dreams. For more information on this topic check out our website: https://www.pureconversation.com/pure-conversation-podcast/how-to-catch-a-shark-shark-tank-then-and-now-mensch-on-a-bench-with-neal-hoffman/ This is a Pure Conversation Podcast. To find out more, go to pureconversation.com. Pure Conversation is an opinion-based podcast. We are not financial consultants. All financial decisions should be made with consultation from an expert. Pure Conversation is hosted by Joseph Miller and Darren May.
In this week's episode, we discuss entrepreneurialism, how to start a company or not-for-profit, and what it takes to succeed. We are joined by Rabbi David Samson, also known as the “Educational Entrepreneur”. Some of the topics discussed: Starting a business, starting a school, how to break the mold, education, thinking differently, how to deal with obstacles, and how to fail or succeed. For more information on this topic check out our website: https://www.pureconversation.com/leadership/entrepreneurialism-with-rabbi-david-samson/ This is a Pure Conversation Podcast. To find out more, go to pureconversation.com. Pure Conversation is an opinion-based podcast. We are not financial consultants. All financial decisions should be made with consultation from an expert. Pure Conversation is hosted by Joseph Miller and Darren May.
Revolution and freedom hold different meanings to different people, and the history of the colonial world is no exception. This episode of American Capital explores the impact of the Transatlantic Slave Trade through data: how large it truly was, and what it looked like to those involved first-hand. Along the way, we tie together the Haitian Revolution—arguably the only successful revolt of enslaved people in the New World—and the American Revolution, ultimately asking: was there an "Age of Revolutions" occurring around the world, and were all of these fought under a common definition of "freedom?" — EPISODE MENTIONS Who: Aaron Lopez, Charles Leclerc, Jean-Jacques Dessalines, John Hancock, Joseph Miller, Napoleon Bonaparte, Robert Paul Thomas, Thomas Jefferson, Toussaint Louverture What: American Declaration of Independence, American Revolution, Dunmore's Proclamation, Haitian Declaration of Independence, Haitian Revolution, Louisiana Purchase, Navigation Acts, Stamp Act (1765), Slave Ship Brookes, Slave Ship Creole, Slave Ship Sally, Slave Ship Zong, Transatlantic Slave Trade Where: British West Indies, Colonial Brazil, Haiti (Saint-Domingue), Indian Ocean, Middle Passage, The Thirteen Colonies, West Africa Documents: "A Quantitative Approach to the Study of the Effects of British Imperial Policy Upon Colonial Welfare", SlaveVoyages.org, Voyage of the Slave Ship Sally --- Support this podcast: https://anchor.fm/american-capital/support
In this week's interview, we discuss personal finance and how moving to Israel can impact one's finances. We discuss what issues can come up and how they can be solved. For more information on this topic check out our website: https://www.pureconversation.com/pure-conversation-podcast/pure-conversation-personal-finance-and-aliyah-with-yaakov-ehrenkranz/ This is a Pure Conversation Podcast. To find out more, go to pureconversation.com. Pure Conversation is an opinion-based podcast. We are not financial consultants. All financial decisions should be made with consultation from an expert. Pure Conversation is hosted by Joseph Miller and Darren May.
"The great commission, this mission of the Church, is still in its infancy. Jesus leaves the Church with one thing: the great commission, to go and make disciples." There are 7 billion people in the world, and only 2 billion of them are Christian. Like it or not, Jesus has given each of us a mission to share his good news with others and make disciples. Joseph Miller comes from a family who has dedicated their lives to fulfilling their part of the great commission. He shares some stories about mission work around the world, including places where evangelization and conversion are illegal, but also gives some pointers about how we can evangelize in our own neighborhoods and workplaces. Will you go all in for Jesus? Check out www.FamilyMissionsCompany.com for more of their great work, and stay tuned for Into the Deep, coming in September.
Our guest Joseph Miller of the RAC Teen Center discusses what they have done differently to continue supporting students during social distancing mandates and COVID regulations. Hear about "Dancing with Dad" and his Ale-8 identifying skills. Check out their YouTube page by visiting this link: https://www.youtube.com/channel/UCXUgrbqD86EHwnRgUJeyoSg
It is reported that about 7 billion people live on Earth today, and no two of them are alike. Global citizens can be small and large and come in many colours. We wear different clothes, and our differing ideas of beauty are also part of culture. Support the show (http://www.buzzsprout.com/429292)
We roll out our transition to Phase 4 of the Coronavirus Outbreak response as a business. In doing so, we're launching a campaign with local restaurants and the support of the Boone Chamber of Commerce. And we have a special guest, Joseph Miller of COBO Sushi, to help us kick it off! --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
This week on the podcast Greg Kelley interviews two people, Joseph Miller and John VandenOever from InTouch Ministries. Founded by Dr. Charles Stanley, InTouch Ministries has actually been a forerunner in using technology to reach unreached people groups. This ministry is known for their devotionals and Christian resources, but you might not know that they also send audio Bibles—many in minority languages—to countries all over the world. They’re truly living out Matt 28:19 where God tells us to go and make disciples. Make sure to listen! Links: Download the Great Commission Action Guide Watch the International Day for the Unreached Webcast Follow us on Facebook Learn more about InTouch Ministries
Seven statuettes would represent his seven victims. Joseph Miller has been called the most prolific serial killer in Dauphin County's history. Listen to the story of his crimes and how one brave woman fought to live and helped bring down his reign of terror.
Joseph Miller is the General Manager at C&I Studios and an avid gamer. He is starring in an upcoming lifestyle sketch docuseries, titled Heart Piece Plus, as his alter ego, Master Joe. Heart Piece Plus creates a dialogue around social responsibility, using video games as the framework for a grand, interconnected (and shared) coming of age story. You can learn more about his upcoming series on our portfolio. Joseph Miller handles all of the hiring (and firing) here at the studio. He is the first line of defense for the 30-some applications we receive daily and the last line of offense (if you know what we mean). So, who better to swap horror stories of life at the studio with? Tune in to minute 11:00 for the workhouse gossip and to hear about that one guy who quit because “all of the cocaine gave him the jitters.” From there, we get into what makes people succeed in the workplace. Is it a defining character trait — something you can pinpoint? Or is it passion, pure and simple? Find out at minute 20:00! If you have dreams of working with our crew, then this is the segment for you. If you’re in need of a happy dose of brotherly love, keep watching! Joshua and Joseph are going down memory lane to discover how they ended up here in the first place. Catharsis? You’ll have to decide that for yourself.
We discuss dictionaries, up and down on maps, and excellence in seminar conversation. Joseph Miller, Suggestions for Law School Seminars (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3425608) Seminar Skills – Learning Collaboratively (https://sjcadmissionsblog.com/2019/07/22/seminar-skills-learning-collaboratively/)
Wednesday Bible Study - August 14th, 2019 Rev Joseph Miller - Even Now John 11
Pentecost Sunday | June 9th, 2019 Bro Joseph Miller
A conversation with classicist Joseph Miller
A conversation with classicist Joseph Miller about leaving Mormonism, finding the classics, and living with stoic dignity amidst the crumbling ruins of American Academia.
Today on the show the guys start by talking about Mattress Firm's new Snoozeternship (the internship you can sleep through). Then Travis unveils his idea for Hemp Hotel and discusses whether or not he’s ready for New York City (he’s not). Then to close out the hour Joseph Miller from the Lift For Life Gym comes in and talks about the upcoming St. Louis Microfest. Follow @weareliveradio on Facebook/Instagram/Twitter for the latest WAL updates Check out www.midcoast.media for studio, production, apparel, and media capabilities. WAL@weareliveradio.com for inquiries --- Send in a voice message: https://anchor.fm/we-are-live-with-chris-denman/message
Joe becomes the guest guest and Mike Madison the guest host, as we talk about Joe's new research into the web of law and what citations tell us about what law means. As one might expect for a show which is ostensibly about legal theory but actually, as all good argunauts know, an extended meditation on Being Joe, this is a very special episode of Oral Argument. This show’s links: Joe Miller's faculty profile (http://www.law.uga.edu/profile/joseph-s-miller) and writing (https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=104702) Mike Madison’s website (http://madisonian.net/home/), writing (http://madisonian.net/home/?page_id=85), and blog (http://madisonian.net) Joseph Miller, Law's Semantic Self-Portrait: Discerning Doctrine with Co-Citation Networks and Keywords (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3212131) Joseph Miller, Charting Supreme Court Patent Law, Near and Far (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3125510); Joseph Miller, Which Supreme Court Cases Influenced Recent Supreme Court IP Decisions? A Citation Study (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3012262) Charles Barzun, Three Forms of Legal Pragmatism (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3178155) Andrew Green and Albert Yoon, Triaging the Law: Developing the Common Law on the Supreme Court of India (https://onlinelibrary.wiley.com/doi/full/10.1111/jels.12161) Frank Pasquale, A Rule of Persons, Not Machines: The Limits of Legal Automation (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3135549) James Scott, Seeing Like a State (https://books.google.com/books?id=PqcPCgsr2u0C&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false) Special Guest: Mike Madison.
Deacon Joseph Miller | 2 Corinthians 5.1-10
Deacon Joseph Miller | Ephesians 4.1-7, 11-16
Back after a long hiatus, we talk about Joe’s latest work on patent law and Supreme Court citation networks. Opening a banana, the opening of corpse flowers, the eclipse, news non-roundup, DACA and naming, and, finally, Joe’s paper, examining the steep increase in patent cases before the Supreme Court over the last two decades by mapping citation networks among intellectual property cases, at 20:02. This show’s links: Joseph Miller, Which Supreme Court Cases Influenced Recent Supreme Court IP Decisions? A Citation Study (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3012262) How to peel a banana (http://www.slate.com/articles/arts/food/2014/02/banana_peeling_trick_video_shows_bottom_first_best_method.html) Three Corpse Flowers Bloomed at USBG in 2017 (https://www.usbg.gov/three-corpse-flowers-bloomed-usbg-2017) Fred Espenak, Periodicity of Solar Eclipses (http://www.eclipsewise.com/solar/SEhelp/SEperiodicity.html) About Predestination (https://en.wikipedia.org/wiki/Predestination_(film)) (which is based on All You Zombies by Robert Heinlein) (Warning: Do Not Read the Plot Summary, just see the movie) About Saturday morning cartoon preview specials (https://en.wikipedia.org/wiki/Saturday_morning_preview_specials) About the Court of Appeals for the Federal Circuit (https://en.wikipedia.org/wiki/United_States_Court_of_Appeals_for_the_Federal_Circuit) Court Listener’s Supreme Court Citation Networks tool (https://www.courtlistener.com/visualizations/scotus-mapper/) Scott Dodson and Colin Starger, Mapping Supreme Court Doctrine: Civil Pleading (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2422160) (Starger’s other papers on SSRN trace NFIB and Windsor)
With Zahr Said, we discuss what makes creative works similar and the role of the “reader” in constructing a work’s meaning. Christian derails with a James Bond commercial. But we get back on track and talk about paintings, poems, Star Wars, textualism, and the Big Sick. This show’s links: Zahr Said’s faculty profile (https://www.law.washington.edu/directory/profile.aspx?ID=602) and writing (https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=1030166) Zahr Said, A Transactional Theory of the Reader in Copyright Law (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2902765) Joseph Miller, Hoisting Originality (http://digitalcommons.law.uga.edu/cgi/viewcontent.cgi?article=1778&context=fac_artchop) About Louise Rosenblatt (https://en.wikipedia.org/wiki/Louise_Rosenblatt) Oral Argument 132: The Soul of Music (http://oralargument.org/132) (guest Joe Fishman) Joseph Fishman, Music as a Matter of Law (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2931091) Mark A. Lemley, Our Bizarre System for Proving Copyright Infringement (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1661434) Shyamkrishna Balganesh, The Normativity of Copying in Copyright Law (http://dlj.law.duke.edu/article/the-normativity-of-copying-in-copyright-law/) Laura Heymann, Reading Together and Apart: Juries, Courts, and Substantial Similarity in Copyright Law (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2958263) Jacob Lawrence, The Studio (https://www.wikiart.org/en/jacob-lawrence/the-studio-1977) Rebecca Tushnet, Worth a Thousand Words: The Images of Copyright Law (http://scholarship.law.georgetown.edu/fwps_papers/148/) Adrienne Rich, Turbulence (http://wwnorton.tumblr.com/post/11358653379/turbulence-adrienne-rich) The Big Sick (https://en.wikipedia.org/wiki/The_Big_Sick) Special Guest: Zahr Said.
What is music? With IP scholar Joe Fishman, we talk about music to work by, whether being unable to imagine doing anything else is a sign you’re doing the right thing, and, mostly, what in music should be protected by copyright. Is the essence of music just melody? And should copyright aim at any such essence? How does our choice about legal protection affect the kind of music people make - and should we worry about that? Why am I asking so many questions? Will these show notes ever end? This show’s links: Joseph Fishman’s faculty profile (https://law.vanderbilt.edu/bio/joseph-fishman) and writing (https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=1347567) Joseph Fishman, Music as a Matter of Law (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2931091) About Alban Berg (https://en.wikipedia.org/wiki/Alban_Berg) (unsurprising that Joe takes a strident and wrong position against both Christian and guest Joe (writing the show notes has fringe benefits)) Lisa Bernstein, Opting out of the Legal System: Extralegal Contractual Relations in the Diamond Industry (http://www.jstor.org/stable/pdf/724403.pdf?seq=1#page_scan_tab_contents) Justice Nelson’s opinions: Hotchkiss v. Greenwood (https://scholar.google.com/scholar_case?case=16500126654571728643) and Jollie v. Jacques (http://mcir.usc.edu/cases/Before1900/Pages/jolliejaques.html) About “rhythm changes” in jazz (https://en.wikipedia.org/wiki/Rhythm_changes) Joseph Miller, Hoisting Originality (http://digitalcommons.law.uga.edu/cgi/viewcontent.cgi?article=1778&context=fac_artchop) Andres Guadamuz, Is It Time to Examine the Concept of Originality in Musical Works? (http://ip.jotwell.com/is-it-time-to-examine-the-concept-of-originality-in-musical-works/) (discussing cases like the “Blurred Lines” case, Williams v. Gaye (now before the Ninth Circuit), as illuminated by Emma Steel, Original Sin: Reconciling Originality in Copyright with Music as an Evolutionary Art Form (unavailable thanks to copyright law)) Song Exploder (http://songexploder.net) (and the episode on the Moonlight soundtrack (http://songexploder.net/moonlight)) Special Guest: Joseph Fishman.
SPOILERS FOR LUKE CAGE (entire series) Claire chats and geeks out with Warner Joseph Miller aka “Tone” from Luke Cage to ask him how it feels to play a Marvel villain, whether he is on Team Iron Man or Team Captain America, and what they are both hoping to see in the upcoming Defenders series. […]
The merits of going live-to-tape, RSS woes, podcasts, mailbag, judges and voting, decisionmaking machines, breaking the law by not facilitating others’ breaking the law, shipping Perceiving Law, cutting one’s favorite scene, a mysterious phone call. This show’s links: Info about the Technology Law Institute seminar at which we will record an episode in front of a live studio audience Michael Clemente, A Reassessment of Common Law Protections for “Idiots”; Michael Clemente, Executing Idiots; Adam Liptak, Supreme Court to Consider Legal Standard Drawn from “Of Mice and Men” Christian Turner, Perceiving Law (ssrn or socarxiv) Joseph Miller, A Modest Proposal for Expediting Manuscript Selection at Less Prestigious Law Reviews (ssrn or digital commons)
A show about, among other things, the morality of the law journal system. We start with Joe’s ailments and our scheduling issues. (You’re welcome; we know this is why people tune in.) Then a little about online review sessions, Slack, online classes, and video conferencing (2:32). Radiohead, Trump, and Ted Cruz (9:02). Next we open the mail and Twitter bags: Carl Malamud, the re-christened Indigo Book, and the possibility of a transcript of one of our episodes, all followed by Chris Walker’s posts on Prawfsblawg about student law journal podcasts (13:19). Next, listener Justin on laptops in classrooms and unconstitutional and re-constitutional statutes (17:38), Bunny on Oral ArgCon cosplay (25:27). And then this week’s main topic: The weird world of law review publishing and the moral aspects of our participation in it (28:23), including Joe’s description of the process, Christian’s calling Expresso “Espresso” (35:03), the transition to electronic submission and the rise of “expedites” (47:00). “Just tell me what your thesis is.” “Why don’t you tell me what it is?” and morality (52:54). Joe’s world (1:08:19). Christian’s world (1:13:53). This show’s links: Joseph Miller, The Immorality of Requesting Expedited Review Slack Oral Argument 94: Bonus Zoom About Burn the Witch Andrew Sullivan, Democracies End When They Are Too Democratic; Jedediah Purdy, What Trump’s Rise Means for Democracy Richard Cytowic, Why Ted Cruz’s Facial Expression Make Me Uneasy Carl Malamud on Twitter Oral Argument 91: Baby Blue (guest Chris Sprigman) The Indigo Book (also available as a PDF) Chris Walker, Complete Junior Law Prawfs FAQs Series (and, particularly, What About Podcasts? and Rethinking Law Review Podcasts) Nathan H. Saunders, Student-Edited Law Reviews: Reflections and Responses of an Inmate Mark Twain, The War Prayer; a beautiful animated version
Broadband can effectively resolve some of the most pressing and chronic healthcare problems facing African Americans, as well as other minority communities, if used effectively by people with vision. In April this year, the Joint Center for Political and Economic Studies released "Minorities, Mobile Broadband, and the Management of Chronic Diseases." This report highlights several benefits mobile broadband offers these communities, as well as presents recommendations for new or improved broadband policies. Our guest, Joseph Miller, is a co-author of the report and an Acting Director and Senior Policy Counsel for the Joint Center. He discusses several healthcare challenges that broadband of all types can address, such as capturing digital images of home-bound patients, self-administered preventative care and monitoring medical equipment.
For this Snippet, we read Your CMS Sucks, Here's Why That Matters by Joseph Miller. (http://www.pagebreakpodcast.com/snippets/your-cms-sucks)
Service members from the 1st TSC and down trace units share their thoughts on the tenth anniversary of 9/11. Soundbites from Sgt. 1st Class Robert Croxdale, Staff Sgt. Lashonda Pringle, Sgt. 1st Class Bruno Resto, Sgt. Joseph Miller, Capt. Kandi King.