Manifesto on workplace diversity
POPULARITY
Commentary (and Explainer) This podcast is largely an op-ed about the "feminisation" of Western militaries and the effect of so-called "diversity, equity and inclusion" ideologies on modern defence forces. I look at what the purpose of a military is and the tension between its aims and the ways some arms of some defence forces are "marketing" themselves to their own people. Of chief concern to some has been the way the US army has chosen to attempt to recruit people into its ranks. Timestamps/Chapters and references below. 00:00 Introduction secs: concerns of late about military decline/changes in standards 00:25 Concerns about politicising the military 1:23 - Should physical standards be lowered to accommodate “diversity” in the military? 2:18 The Military as a deterrent (as nuclear weapons are a special case of) First Pass 3:08: The “Mutually Assured Destruction” trope is a lie 5:01 - Chinese/Russian tech (military & other) is a stolen, poor imitation of Western innovation 6:20 - A comparison of military forces in terms of numbers (expenditure, land/air/manpower, materiel, capabilities) 10:30 The difference in Military Cultures (Jocko Willink on decentralised command). 12:14 - Gandalf's cameo 12:18 - raw soldier numbers vs army cultures and capabilities 12:31 Case Study: North Korea “the world's 4th largest army”. 13:54 - Case Study: The Russia Ukraine war 14:24 Case Study: The First Gulf War - The Tank Battle 16:18 Case study: the 6 day Israeli war 17:46 The Military as a deterrent (Second Pass) 19:08 Gratuitous Holiday Snap 19:56 The Mother Military Thesis 21:56 North Korea/Communism and Cancel Culture 22:51 Contemporary Western Nations and Cancel Culture 23:27 Woke Culture and Toxic Masculinity 26:23 Is there sexism in custody proceedings? 27:54 - Toxic “Father” Masculinity vs A Feminist “Mother” Military 28:39 The US Army recruitment advertisement controversy 30:53 Alan Watts on “Prickles and Goo” 31:21 Masculine and Feminine traits: some comparisons 33:50 James Damore and “the Google Memo” - commentary 34:19 The function and public face of the military 35:46 Pathological Goo: Jordan Peterson and The Devouring Mother 37:14 Early Signs of military culture rot? 39:04 Exclusive Clubs, Bouncers and Gay Culture 39:57 The Military as a Deterrent (Third Pass) 40:34 - Comparison of Recruitment Tactics - US Army vs US Marines or US Army vs Russian/Chinese Army. 41:24 Conclusions 42:11 Credits, How to support the channel, podcast and me References: 1. https://youtu.be/MIYGFSONKbk?si=C8mFqnObEburxXqz 2. A recent article from Australian media about coercion of "LGBTQIA+" ideologies on cadets: https://www.abc.net.au/news/2023-08-27/adf-academy-cadets-claim-they-were-pressured-to-remove-uniforms/102780562 3. US Marine corps recruitment ad for comparison: https://www.youtube.com/watch?v=O9gTAjbiQEM 4. Humorous Aussie analysis of US vs Russian & Chinese army recruitment commercials: https://www.youtube.com/watch?v=FXmyWdZfdgk 5. Ex-Marine comments on US Military Matters (Jameson's Travels) - https://www.youtube.com/@UC-N44TadAniwC7v8Zj858nQ #woke #feminism #military #transition #army #philosophy "Like" my video and "subscribe" to my channel :)
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Yann LeCun is VP & Chief AI Scientist at Meta and Silver Professor at NYU affiliated with the Courant Institute of Mathematical Sciences & the Center for Data Science. He was the founding Director of FAIR and of the NYU Center for Data Science. After a postdoc in Toronto he joined AT&T Bell Labs in 1988, and AT&T Labs in 1996 as Head of Image Processing Research. He joined NYU as a professor in 2003 and Meta/Facebook in 2013. He is the recipient of the 2018 ACM Turing Award for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing". Huge thanks to David Marcus for helping to make this happen. In Today's Episode with Yann LeCun: 1.) The Road to AI OG: How did Yann first hear about machine learning and make his foray into the world of AI? For 10 years plus, machine learning was in the shadows, how did Yan not get discouraged when the world did not appreciate the power of AI and ML? What does Yann know now that he wishes he had known when he started his career in machine learning? 2.) The Next Five Years of AI: Hope or Horror: Why does Yann believe it is nonsense that AI is dangerous? Why does Yann think it is crazy to assume that AI will even want to dominate humans? Why does Yann believe digital assistants will rule the world? If digital assistants do rule the world, what interface wins? Search? Chat? What happens to Google when digital assistants rule the world? 3.) Will Anyone Have Jobs in a World of AI: From speaking to many economists, why does Yann state "no economist thinks AI will replace jobs"? What jobs does Yann expect to be created in the next generation of the AI economy? What jobs does Yann believe are under more immediate threat/impact? Why does Yann expect the speed of transition to be much slower than people anticipate? Why does Yann believe Elon Musk is wrong to ask for the pausing of AI developments? 4.) Open or Closed: Who Wins: Why does Yann know that the open model will beat the closed model? Why is it superior for knowledge gathering and idea generation? What are some core historical precedents that have proved this to be true? What did Yann make of the leaked Google Memo last week? 5.) Startup vs Incumbent: Who Wins: Who does Yann believe will win the next 5 years of AI; startups or incumbents? How important are large models to winning in the next 12 months? In what ways does regulation and legal stop incumbents? How has he seen this at Meta? Has his role at Meta ever stopped him from being impartial? How does Yan deal with that?
Our 122nd episode with a summary and discussion of last week's big AI news! Read out our text newsletter and comment on the podcast at https://lastweekin.ai/ Email us your questions and feedback at contact@lastweekin.ai Check out the No Priors podcast: https://link.chtbl.com/lastweekinainopriors Check out Jeremie's new book Quantum Physics Made Me Do It Outline: (00:00) Intro (05:20) Response to listener comments / corrections (07:20) News Preview Tools & Apps (07:50) Microsoft 365's AI-powered Copilot is getting more features and paid access (11:26) Informatica goes all in on generative AI with Claire GPT (15:11) LinkedIn's new AI will write messages to hiring managers (17:30) Waymo One doubles service area in Phoenix and continues growing in San Francisco Applications & Business (20:58) "We Have No Moat, And Neither Does OpenAI" (27:30) AI will create ‘a serious number of losers', DeepMind co-founder warns (31:37) IBM takes another shot at Watson as A.I. boom picks up steam (34:11) IBM to Pause Hiring for Jobs That AI Could Do (36:30) Peter Thiel's Palantir is seeing ‘unprecedented' demand for its military A.I. that its CEO calls ‘a weapon that will allow you to win' (38:42) Palantir Demos AI to Fight Wars But Says It Will Be Totally Ethical Don't Worry About It (40:42) Chegg CEO calls 48% stock plunge over ChatGPT fears 'extraordinarily overblown' (43:03) Microsoft Working With AMD on Expansion Into AI Processors (45:35) Generative AI startup Runway just raised $100 million at a $1.5 billion valuation from a cloud service provider (47:50) Top ex-Google AI researchers raise funding from Thrive Capital Projects & Open Source (51:22) Meta open-sources multisensory AI model that combines six types of data (54:45) No Cloud Required: Chatbot Runs Locally on iPhones, Old PCs (57:15) Hugging Face and ServiceNow release a free code-generating model Research & Advancements (59:27) Meet LLaVA: A Large Language Multimodal Model and Vision Assistant that Connects a Vision Encoder and Vicuna for General-Purpose Visual and Language Understanding (01:04:29) Language models can explain neurons in language models (01:11:55) A.I. Is Getting Better at Mind-Reading (01:14:27) AI could run a million microbial experiments per year (01:15:54) Scurrying centipedes inspire many-legged robots that can traverse difficult landscapes (01:17:38) Little Robots Learn to Drive Fast in the Real World (01:20:03) Latest pitch for AI: DeepMind-trained soccer robots Policy & Safety (01:21:47) China's AI industry barely slowed by US chip export rules (01:26:10) Anthropic thinks ‘constitutional AI' is the best way to train models + Claude's Constitution (01:32:45) An AI Scraping Tool Is Overwhelming Websites With Traffic (01:36:16) ‘Mom, these bad men have me': She believes scammers cloned her daughter's voice in a fake kidnapping (01:39:18) Bill would require disclosure of AI-generated content in political ads Art & Fun Stuff (01:40:46) Unions Representing Hollywood Writers and Actors Seek Limits on A.I. and Chatbots (01:44:56) Inside the Discord Where Thousands of Rogue Producers Are Making AI Music (01:46:45) Spotify removes thousands of AI-generated songs (01:49:19) Amnesty International Uses AI-Generated Images of Colombian Human Rights Abuses (01:53:20) Midjourney 5.1 Arrives - And It's Another Leap Forward For AI Art Indiana Jones 5' will feature a de-aged Harrison Ford for the first 25 minutes (01:55:20) Listener Question - AI as a career + what to do in college
What Meta is doing on the A.I. front is going to impact all marketers and businesses. Kipp and Kieran dive into what the leaked Google memo means and why Meta is secretly winning the A.I. wars. Learn why open source is going to win the next generation of the internet, the power of open source, and why A.I. will be democratized. Mentions Google's leaked memo https://www.semianalysis.com/p/google-we-have-no-moat-and-neither Vicuna https://lmsys.org/blog/2023-03-30-vicuna/ We're on Social Media! Follow us for everyday marketing wisdom straight to your feed YouTube: https://www.youtube.com/channel/UCGtXqPiNV8YC0GMUzY-EUFg Twitter: https://twitter.com/matgpod TikTok: https://www.tiktok.com/@matgpod Thank you for tuning into Marketing Against The Grain! Don't forget to hit subscribe and follow us on Apple Podcasts (so you never miss an episode)! https://podcasts.apple.com/us/podcast/marketing-against-the-grain/id1616700934 If you love this show, please leave us a 5-Star Review https://link.chtbl.com/h9_sjBKH and share your favorite episodes with friends. We really appreciate your support. Host Links: Kipp Bodnar, https://twitter.com/kippbodnar Kieran Flanagan, https://twitter.com/searchbrat ‘Marketing Against The Grain' is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Produced by Darren Clarke.
It's now almost 6 months since Google declared Code Red, and the results — Jeff Dean's recap of 2022 achievements and a mass exodus of the top research talent that contributed to it in January, Bard's rushed launch in Feb, a slick video showing Google Workspace AI features and confusing doubly linked blogposts about PaLM API in March, and merging Google Brain and DeepMind in April — have not been inspiring. Google's internal panic is in full display now with the surfacing of a well written memo, written by software engineer Luke Sernau written in early April, revealing internal distress not seen since Steve Yegge's infamous Google Platforms Rant. Similar to 2011, the company's response to an external challenge has been to mobilize the entire company to go all-in on a (from the outside) vague vision.Google's misfortunes are well understood by now, but the last paragraph of the memo: “We have no moat, and neither does OpenAI”, was a banger of a mic drop.Combine this with news this morning that OpenAI lost $540m last year and will need as much as $100b more funding (after the complex $10b Microsoft deal in Jan), and the memo's assertion that both Google and OpenAI have “no moat” against the mighty open source horde have gained some credibility in the past 24 hours.Many are criticising this memo privately:* A CEO commented to me yesterday that Luke Sernau does not seem to work in AI related parts of Google and “software engineers don't understand moats”. * Emad Mostaque, himself a perma-champion of open source and open models, has repeatedly stated that “Closed models will always outperform open models” because closed models can just wrap open ones.* Emad has also commented on the moats he does see: “Unique usage data, Unique content, Unique talent, Unique product, Unique business model”, most of which Google does have, and OpenAI less so (though it is winning on the talent front)* Sam Altman famously said that “very few to no one is Silicon Valley has a moat - not even Facebook” (implying that moats don't actually matter, and you should spend your time thinking about more important things)* It is not actually clear what race the memo thinks Google and OpenAI are in vs Open Source. Neither are particularly concerned about running models locally on phones, and they are perfectly happy to let “a crazy European alpha male” run the last mile for them while they build actually monetizable cloud infrastructure.However moats are of intense interest by everybody keen on productized AI, cropping up in every Harvey, Jasper, and general AI startup vs incumbent debate. It is also interesting to take the memo at face value and discuss the searing hot pace of AI progress in open source. We hosted this discussion yesterday with Simon Willison, who apart from being an incredible communicator also wrote a great recap of the No Moat memo. 2,800 have now tuned in on Twitter Spaces, but we have taken the audio and cleaned it up here. Enjoy!Timestamps* [00:00:00] Introducing the Google Memo* [00:02:48] Open Source > Closed?* [00:05:51] Running Models On Device* [00:07:52] LoRA part 1* [00:08:42] On Moats - Size, Data* [00:11:34] Open Source Models are Comparable on Data* [00:13:04] Stackable LoRA* [00:19:44] The Need for Special Purpose Optimized Models* [00:21:12] Modular - Mojo from Chris Lattner* [00:23:33] The Promise of Language Supersets* [00:28:44] Google AI Strategy* [00:29:58] Zuck Releasing LLaMA* [00:30:42] Google Origin Confirmed* [00:30:57] Google's existential threat* [00:32:24] Non-Fiction AI Safety ("y-risk")* [00:35:17] Prompt Injection* [00:36:00] Google vs OpenAI* [00:41:04] Personal plugs: Simon and TravisTranscripts[00:00:00] Introducing the Google Memo[00:00:00] Simon Willison: So, yeah, this is a document, which Kate, which I first saw at three o'clock this morning, I think. It claims to be leaked from Google. There's good reasons to believe it is leaked from Google, and to be honest, if it's not, it doesn't actually matter because the quality of the analysis, I think stands alone.[00:00:15] If this was just a document by some anonymous person, I'd still think it was interesting and worth discussing. And the title of the document is We Have No Moat and neither does Open ai. And the argument it makes is that while Google and OpenAI have been competing on training bigger and bigger language models, the open source community is already starting to outrun them, given only a couple of months of really like really, really serious activity.[00:00:41] You know, Facebook lama was the thing that really kicked us off. There were open source language models like Bloom before that some G P T J, and they weren't very impressive. Like nobody was really thinking that they were. Chat. G P T equivalent Facebook Lama came out in March, I think March 15th. And was the first one that really sort of showed signs of being as capable maybe as chat G P T.[00:01:04] My, I don't, I think all of these models, they've been, the analysis of them has tend to be a bit hyped. Like I don't think any of them are even quite up to GT 3.5 standards yet, but they're within spitting distance in some respects. So anyway, Lama came out and then, Two weeks later Stanford Alpaca came out, which was fine tuned on top of Lama and was a massive leap forward in terms of quality.[00:01:27] And then a week after that Vicuna came out, which is to this date, the the best model I've been able to run on my own hardware. I, on my mobile phone now, like, it's astonishing how little resources you need to run these things. But anyway, the the argument that this paper made, which I found very convincing is it only took open source two months to get this far.[00:01:47] It's now every researcher in the world is kicking it on new, new things, but it feels like they're being there. There are problems that Google has been trying to solve that the open source models are already addressing, and really how do you compete with that, like with your, it's closed ecosystem, how are you going to beat these open models with all of this innovation going on?[00:02:04] But then the most interesting argument in there is it talks about the size of models and says that maybe large isn't a competitive advantage, maybe actually a smaller model. With lots of like different people fine tuning it and having these sort of, these LoRA l o r a stackable fine tuning innovations on top of it, maybe those can move faster.[00:02:23] And actually having to retrain your giant model every few months from scratch is, is way less useful than having small models that you can tr you can fine tune in a couple of hours on laptop. So it's, it's fascinating. I basically, if you haven't read this thing, you should read every word of it. It's not very long.[00:02:40] It's beautifully written. Like it's, it's, I mean, If you try and find the quotable lines in it, almost every line of it's quotable. Yeah. So, yeah, that's that, that, that's the status of this[00:02:48] Open Source > Closed?[00:02:48] swyx: thing. That's a wonderful summary, Simon. Yeah, there, there's so many angles we can take to this. I, I'll just observe one, one thing which if you think about the open versus closed narrative, Ima Mok, who is the CEO of Stability, has always been that open will trail behind closed, because the closed alternatives can always take.[00:03:08] Learnings and lessons from open source. And this is the first highly credible statement that is basically saying the exact opposite, that open source is moving than, than, than closed source. And they are scared. They seem to be scared. Which is interesting,[00:03:22] Travis Fischer: Travis. Yeah, the, the, the, a few things that, that I'll, I'll, I'll say the only thing which can keep up with the pace of AI these days is open source.[00:03:32] I think we're, we're seeing that unfold in real time before our eyes. And. You know, I, I think the other interesting angle of this is to some degree LLMs are they, they don't really have switching costs. They are going to be, become commoditized. At least that's, that's what a lot of, a lot of people kind of think to, to what extent is it Is it a, a rate in terms of, of pricing of these things?[00:03:55] , and they all kind of become roughly the, the, the same in, in terms of their, their underlying abilities. And, and open source is gonna, gonna be actively pushing, pushing that forward. And, and then this is kind of coming from, if it is to be believed the kind of Google or an insider type type mentality around you know, where is the actual competitive advantage?[00:04:14] What should they be focusing on? How can they get back in into the game? When you know, when, when, when, when currently the, the, the external view of, of Google is that they're kind of spinning their wheels and they have this code red,, and it's like they're, they're playing catch up already.[00:04:28] Like how could they use the open source community and work with them, which is gonna be really, really hard you know, from a structural perspective given Google's place in the ecosystem. But a, a lot, lot, a lot of jumping off points there.[00:04:42] Alessio Fanelli: I was gonna say, I think the Post is really focused on how do we get the best model, but it's not focused on like, how do we build the best product around it.[00:04:50] A lot of these models are limited by how many GPUs you can get to run them and we've seen on traditional open source, like everybody can use some of these projects like Kafka and like Alaska for free. But the reality is that not everybody can afford to run the infrastructure needed for it.[00:05:05] So I, I think like the main takeaway that I have from this is like, A lot of the moats are probably around just getting the, the sand, so to speak, and having the GPUs to actually serve these models. Because even if the best model is open source, like running it at large scale for an end is not easy and like, it's not super convenient to get a lot, a lot of the infrastructure.[00:05:27] And we've seen that model work in open source where you have. The opensource project, and then you have a enterprise cloud hosted version for it. I think that's gonna look really different in opensource models because just hosting a model doesn't have a lot of value. So I'm curious to hear how people end up getting rewarded to do opensource.[00:05:46] You know, it's, we figured that out in infrastructure, but we haven't figured it out in in Alans[00:05:51] Running Models On Device[00:05:51] Simon Willison: yet. I mean, one thing I'll say is that the the models that you can run on your own devices are so far ahead of what I ever dreamed they would be at this point. Like Vicuna 13 b i i, I, I think is the current best available open mo model that I've played with.[00:06:08] It's derived from Facebook Lama, so you can't use it for commercial purposes yet. But the point about MCK 13 B is it runs in the browser directly on web gpu. There's this amazing web l l M project where you literally, your browser downloaded a two gigabyte file. And it fires up a chat g D style interface and it's quite good.[00:06:27] It can do rap battles between different animals and all of the kind of fun stuff that you'd expect to be able to do the language model running entirely in Chrome canary. It's shocking to me that that's even possible, but that kind of shows that once, once you get to inference, if you can shrink the model down and the techniques for shrinking these models, the, the first one was the the quantization.[00:06:48] Which the Lama CPP project really sort of popularized Matt can by using four bits instead of 16 bit floating point numbers, you can shrink it down quite a lot. And then there was a paper that came out days ago suggesting that you can prune the models and ditch half the model and maintain the same level of quality.[00:07:05] So with, with things like that, with all of these tricks coming together, it's really astonishing how much you can get done on hardware that people actually have in their pockets even.[00:07:15] swyx: Just for completion I've been following all of your posts. Oh, sorry. Yes. I just wanna follow up, Simon. You're, you said you're running a model on your phone. Which model is it? And I don't think you've written it up.[00:07:27] Simon Willison: Yeah, that one's vina. I did, did I write it up? I did. I've got a blog post about how it it, it, it knows who I am, sort of, but it said that I invented a, a, a pattern for living called bear or bunny pattern, which I definitely didn't, but I loved that my phone decided that I did.[00:07:44] swyx: I will hunt for that because I'm not yet running Vic on my phone and I feel like I should and, and as like a very base thing, but I'll, okay.[00:07:52] Stackable LoRA Modules[00:07:52] swyx: Also, I'll follow up two things, right? Like one I'm very interesting and let's, let's talk about that a little bit more because this concept of stackable improvements to models I think is extremely interesting.[00:08:00] Like, I would love to MPM install abilities onto my models, right? Which is really awesome. But the, the first thing thing is under-discussed is I don't get the panic. Like, honestly, like Google has the most moats. I I, I was arguing maybe like three months ago on my blog. Like Google has the most mote out of a lot of people because, hey, we have your calendar.[00:08:21] Hey, we have your email. Hey, we have your you know, Google Docs. Like, isn't that a, a sufficient mode? Like, why are these guys panicking so much? I don't, I still don't get it. Like, Sure open source is running ahead and like, it's, it's on device and whatev, what have you, but they have so much more mode.[00:08:36] Like, what are we talking about here? There's many dimensions to compete on.[00:08:42] On Moats - Size, Data[00:08:42] Travis Fischer: Yeah, there's like one of, one of the, the things that, that the author you know, mentions in, in here is when, when you start to, to, to have the feeling of what we're trailing behind, then you're, you're, you're, you're brightest researchers jump ship and go to OpenAI or go to work at, at, at academia or, or whatever.[00:09:00] And like the talent drain. At the, the level of the, the senior AI researchers that are pushing these things ahead within Google, I think is a serious, serious concern. And my, my take on it's a good point, right? Like, like, like, like what Google has modes. They, they, they're not running outta money anytime soon.[00:09:16] You know, I think they, they do see the level of the, the defensibility and, and the fact that they want to be, I'll chime in the, the leader around pretty much anything. Tech first. There's definitely ha ha have lost that, that, that feeling. Right? , and to what degree they can, they can with the, the open source community to, to get that back and, and help drive that.[00:09:38] You know all of the llama subset of models with, with alpaca and Vicuna, et cetera, that all came from, from meta. Right. Like that. Yeah. Like it's not licensed in an open way where you can build a company on top of it, but is now kind of driving this family of, of models, like there's a tree of models that, that they're, they're leading.[00:09:54] And where is Google in that, in that playbook? Like for a long time they were the one releasing those models being super open and, and now it's just they, they've seem to be trailing and there's, there's people jumping ship and to what degree can they, can they, can they. Close off those wounds and, and focus on, on where, where they, they have unique ability to, to gain momentum.[00:10:15] I think is a core part of my takeaway from this. Yeah.[00:10:19] Alessio Fanelli: And think another big thing in the post is, oh, as long as you have high quality data, like you don't need that much data, you can just use that. The first party data loops are probably gonna be the most important going forward if we do believe that this is true.[00:10:32] So, Databricks. We have Mike Conover from Databricks on the podcast, and they talked about how they came up with the training set for Dolly, which they basically had Databricks employees write down very good questions and very good answers for it. Not every company as the scale to do that. And I think products like Google, they have millions of people writing Google Docs.[00:10:54] They have millions of people using Google Sheets, then millions of people writing stuff, creating content on YouTube. The question is, if you wanna compete against these companies, maybe the model is not what you're gonna do it with because the open source kind of commoditizes it. But how do you build even better data?[00:11:12] First party loops. And that's kind of the hardest thing for startups, right? Like even if we open up the, the models to everybody and everybody can just go on GitHub and. Or hugging face and get the waste to the best model, but get enough people to generate data for me so that I can still make it good. That's, that's what I would be worried about if I was a, a new company.[00:11:31] How do I make that happen[00:11:32] Simon Willison: really quickly?[00:11:34] Open Source Models are Comparable on Data[00:11:34] Simon Willison: I'm not convinced that the data is that big a challenge. So there's this PO project. So the problem with Facebook LAMA is that it's not available for, for commercial use. So people are now trying to train a alternative to LAMA that's entirely on openly licensed data.[00:11:48] And that the biggest project around that is this red pajama project, which They released their training data a few weeks ago and it was 2.7 terabytes. Right? So actually tiny, right? You can buy a laptop that you can fit 2.7 terabytes on. Got it. But it was the same exact data that Facebook, the same thing that Facebook Lamb had been trained on.[00:12:06] Cuz for your base model. You're not really trying to teach it fact about the world. You're just trying to teach it how English and other languages work, how they fit together. And then the real magic is when you fine tune on top of that. That's what Alpaca did on top of Lama and so on. And the fine tuning sets, it looks like, like tens of thousands of examples to kick one of these role models into shape.[00:12:26] And tens of thousands of examples like Databricks spent a month and got the 2000 employees of their company to help kick in and it worked. You've got the open assistant project of crowdsourcing this stuff now as well. So it's achievable[00:12:40] swyx: sore throat. I agree. I think it's a fa fascinating point. Actually, so I've heard through the grapevine then red pajamas model.[00:12:47] Trained on the, the data that they release is gonna be releasing tomorrow. And it's, it's this very exciting time because the, the, there, there's a, there's a couple more models that are coming down the pike, which independently we produced. And so yeah, that we, everyone is challenging all these assumptions from, from first principles, which is fascinating.[00:13:04] Stackable LoRA[00:13:04] swyx: I, I did, I did wanted to, to like try to get a little bit more technical in terms of like the, the, the, the specific points race. Cuz this doc, this doc was just amazing. Can we talk about LoRA. I, I, I'll open up to Simon again if he's back.[00:13:16] Simon Willison: I'd rather someone else take on. LoRA, I've, I, I know as much as I've read in that paper, but not much more than that.[00:13:21] swyx: So I thought it was this kind of like an optimization technique. So LoRA stands for lower rank adaptation. But this is the first mention of LoRA as a form of stackable improvements. Where he I forget what, let, just, let me just kind of Google this. But obviously anyone's more knowledgeable please.[00:13:39] So come on in.[00:13:40] Alessio Fanelli: I, all of Lauren is through GTS Man, about 20 minutes on GT four, trying to figure out word. It was I study computer science, but this is not this is not my area of expertise. What I got from it is that basically instead of having to retrain the whole model you can just pick one of the ranks and you take.[00:13:58] One of like the, the weight matrix tests and like make two smaller matrixes from it and then just two to be retrained and training the whole model. So[00:14:08] swyx: it save a lot of Yeah. You freeze part of the thing and then you just train the smaller part like that. Exactly. That seems to be a area of a lot of fruitful research.[00:14:15] Yeah. I think Mini GT four recently did something similar as well. And then there's, there's, there's a, there's a Spark Model people out today that also did the same thing.[00:14:23] Simon Willison: So I've seen a lot of LoRA stable, the stable diffusion community has been using LoRA a lot. So they, in that case, they had a, I, the thing I've seen is people releasing LoRA's that are like you, you train a concept like a, a a particular person's face or something you release.[00:14:38] And the, the LoRA version of this end up being megabytes of data, like, which is, it's. You know, it's small enough that you can just trade those around and you can effectively load multiple of those into the model. But what I haven't realized is that you can use the same trick on, on language models. That was one of the big new things for me in reading the the leaks Google paper today.[00:14:56] Alessio Fanelli: Yeah, and I think the point to make around on the infrastructure, so what tragedy has told me is that when you're figuring out what rank you actually wanna do this fine tuning at you can have either go too low and like the model doesn't actually learn it. Or you can go too high and the model overfit those learnings.[00:15:14] So if you have a base model that everybody agrees on, then all the subsequent like LoRA work is done around the same rank, which gives you an advantage. And the point they made in the, that, since Lama has been the base for a lot of this LoRA work like they own. The, the mind share of the community.[00:15:32] So everything that they're building is compatible with their architecture. But if Google Opensources their own model the rank that they chose For LoRA on Lama might not work on the Google model. So all of the existing work is not portable. So[00:15:46] Simon Willison: the impression I got is that one of the challenges with LoRA is that you train all these LoRAs on top of your model, but then if you retrain that base model as LoRA's becoming invalid, right?[00:15:55] They're essentially, they're, they're, they're built for an exact model version. So this means that being the big company with all of the GPUs that can afford to retrain a model every three months. That's suddenly not nearly as valuable as it used to be because now maybe there's an open source model that's five years old at this point and has like multiple, multiple stacks of LoRA's trained all over the world on top of it, which can outperform your brand new model just because there's been so much more iteration on that base.[00:16:20] swyx: I, I think it's, I think it's fascinating. It's I think Jim Fan from Envidia was recently making this argument for transformers. Like even if we do come up with a better. Architecture, then transformers, they're the sheer hundreds and millions of dollars that have been invested on top of transformers.[00:16:34] Make it actually there is some switching costs and it's not exactly obvious that better architecture. Equals equals we should all switch immediately tomorrow. It's, it's, it's[00:16:44] Simon Willison: kinda like the, the difficulty of launching a new programming language today Yes. Is that pipeline and JavaScript have a million packages.[00:16:51] So no matter how good your new language is, if it can't tap into those existing package libraries, it's, it's not gonna be useful for, which is why Moji is so clever, because they did build on top of Pips. They get all of that existing infrastructure, all of that existing code working already.[00:17:05] swyx: I mean, what, what thought you, since you co-create JAO and all that do, do we wanna take a diversion into mojo?[00:17:10] No, no. I[00:17:11] Travis Fischer: would, I, I'd be happy to, to, to jump in, and get Simon's take on, on Mojo. 1, 1, 1 small, small point on LoRA is I, I, I just think. If you think about at a high level, what the, the major down downsides are of these, these large language models. It's the fact that they well they're, they're, they're difficult to, to train, right?[00:17:32] They, they tend to hallucinate and they are, have, have a static, like, like they were trained at a certain date, right? And with, with LoRA, I think it makes it a lot more amenable to Training new, new updates on top of that, that like base model on the fly where you can incorporate new, new data and in a way that is, is, is an interesting and potentially more optimal alternative than Doing the kind of in context generation cuz, cuz most of like who at perplexity AI or, or any of these, these approaches currently, it's like all based off of doing real-time searches and then injecting as much into the, the, the local context window as possible so that you, you try to ground your, your, your, your language model.[00:18:16] Both in terms of the, the information it has access to that, that, that helps to reduce hallucinations. It can't reduce it, but helps to reduce it and then also gives it access to up-to-date information that wasn't around for that, that massive like, like pre-training step. And I think LoRA in, in, in mine really makes it more, more amenable to having.[00:18:36] Having constantly shifting lightweight pre-training on top of it that scales better than than normal. Pre I'm sorry. Fine tune, fine tuning. Yeah, that, that was just kinda my one takeaway[00:18:45] Simon Willison: there. I mean, for me, I've never been, I want to run models on my own hard, I don't actually care about their factual content.[00:18:52] Like I don't need a model that's been, that's trained on the most upstate things. What I need is a model that can do the bing and bar trick, right? That can tell when it needs to run a search. And then go and run a search to get extra information and, and bring that context in. And similarly, I wanted to be able to operate tools where it can access my email or look at my notes or all of those kinds of things.[00:19:11] And I don't think you need a very powerful model for that. Like that's one of the things where I feel like, yeah, vicuna running on my, on my laptop is probably powerful enough to drive a sort of personal research assistant, which can look things up for me and it can summarize things for my notes and it can do all of that and I don't care.[00:19:26] But it doesn't know about the Ukraine war because the Ukraine war training cutoff, that doesn't matter. If it's got those additional capabilities, which are quite easy to build the reason everyone's going crazy building agents and tools right now is that it's a few lines of Python code, and a sort of couple of paragraphs to get it to.[00:19:44] The Need for Special Purpose Optimized Models[00:19:44] Simon Willison: Well, let's, let's,[00:19:45] Travis Fischer: let's maybe dig in on that a little bit. And this, this also is, is very related to mojo. Cuz I, I do think there are use cases and domains where having the, the hyper optimized, like a version of these models running on device is, is very relevant where you can't necessarily make API calls out on the fly.[00:20:03] and Aug do context, augmented generation. And I was, I was talking with, with a a researcher. At Lockheed Martin yesterday, literally about like, like the, the version of this that's running of, of language models running on, on fighter jets. Right? And you, you talk about like the, the, the amount of engineering, precision and optimization that has to go into, to those type of models.[00:20:25] And the fact that, that you spend so much money, like, like training a super distilled ver version where milliseconds matter it's a life or death situation there. You know, and you couldn't even, even remotely ha ha have a use case there where you could like call out and, and have, have API calls or something.[00:20:40] So I, I do think there's like keeping in mind the, the use cases where, where. There, there'll be use cases that I'm more excited about at, at the application level where, where, yeah, I want to to just have it be super flexible and be able to call out to APIs and have this agentic type type thing.[00:20:56] And then there's also industries and, and use cases where, where you really need everything baked into the model.[00:21:01] swyx: Yep. Agreed. My, my favorite piece take on this is I think DPC four as a reasoning engine, which I think came from the from Nathan at every two. Which I think, yeah, I see the hundred score over there.[00:21:12] Modular - Mojo from Chris Lattner[00:21:12] swyx: Simon, do you do you have a, a few seconds on[00:21:14] Simon Willison: mojo. Sure. So Mojo is a brand new program language you just announced a few days ago. It's not actually available yet. I think there's an online demo, but to zooming it becomes an open source language we can use. It's got really some very interesting characteristics.[00:21:29] It's a super set of Python, so anything written in Python, Python will just work, but it adds additional features on top that let you basically do very highly optimized code with written. In Python syntax, it compiles down the the main thing that's exciting about it is the pedigree that it comes from.[00:21:47] It's a team led by Chris Latner, built L L V M and Clang, and then he designed Swift at Apple. So he's got like three, three for three on, on extraordinarily impactful high performance computing products. And he put together this team and they've basically, they're trying to go after the problem of how do you build.[00:22:06] A language which you can do really high performance optimized work in, but where you don't have to do everything again from scratch. And that's where building on top of Python is so clever. So I wasn't like, if this thing came along, I, I didn't really pay attention to it until j Jeremy Howard, who built Fast ai put up a very detailed blog post about why he was excited about Mojo, which included a, there's a video demo in there, which everyone should watch because in that video he takes Matrix multiplication implemented in Python.[00:22:34] And then he uses the mojo extras to 2000 x. The performance of that matrix multiplication, like he adds a few static types functions sort of struck instead of the class. And he gets 2000 times the performance out of it, which is phenomenal. Like absolutely extraordinary. So yeah, that, that got me really excited.[00:22:52] Like the idea that we can still use Python and all of this stuff we've got in Python, but we can. Just very slightly tweak some things and get literally like thousands times upwards performance out of the things that matter. That's really exciting.[00:23:07] swyx: Yeah, I, I, I'm curious, like, how come this wasn't thought of before?[00:23:11] It's not like the, the, the concept of a language super set hasn't hasn't, has, has isn't, is completely new. But all, as far as I know, all the previous Python interpreter approaches, like the alternate runtime approaches are like they, they, they're more, they're more sort of, Fit conforming to standard Python, but never really tried this additional approach of augmenting the language.[00:23:33] The Promise of Language Supersets[00:23:33] swyx: I, I'm wondering if you have many insights there on, like, why, like why is this a, a, a breakthrough?[00:23:38] Simon Willison: Yeah, that's a really interesting question. So, Jeremy Howard's piece talks about this thing called M L I R, which I hadn't heard of before, but this was another Chris Latner project. You know, he built L L VM as a low level virtual machine.[00:23:53] That you could build compilers on top of. And then M L I R was this one that he initially kicked off at Google, and I think it's part of TensorFlow and things like that. But it was very much optimized for multiple cores and GPU access and all of that kind of thing. And so my reading of Jeremy Howard's article is that they've basically built Mojo on top of M L I R.[00:24:13] So they had a huge, huge like a starting point where they'd, they, they knew this technology better than anyone else. And because they had this very, very robust high performance basis that they could build things on. I think maybe they're just the first people to try and build a high, try and combine a high level language with M L A R, with some extra things.[00:24:34] So it feels like they're basically taking a whole bunch of ideas people have been sort of experimenting with over the last decade and bundled them all together with exactly the right team, the right level of expertise. And it looks like they've got the thing to work. But yeah, I mean, I've, I've, I'm. Very intrigued to see, especially once this is actually available and we can start using it.[00:24:52] It, Jeremy Howard is someone I respect very deeply and he's, he's hyping this thing like crazy, right? His headline, his, and he's not the kind of person who hypes things if they're not worth hyping. He said Mojo may be the biggest programming language advanced in decades. And from anyone else, I'd kind of ignore that headline.[00:25:09] But from him it really means something.[00:25:11] swyx: Yes, because he doesn't hype things up randomly. Yeah, and, and, and he's a noted skeptic of Julia which is, which is also another data science hot topic. But from the TypeScript and web, web development worlds there has been a dialect of TypeScript that was specifically optimized to compile, to web assembly which I thought was like promising and then, and, and eventually never really took off.[00:25:33] But I, I like this approach because I think more. Frameworks should, should essentially be languages and recognize that they're language superset and maybe working compilers that that work on them. And then that is the, by the way, that's the direction that React is going right now. So fun times[00:25:50] Simon Willison: type scripts An interesting comparison actually, cuz type script is effectively a superset of Java script, right?[00:25:54] swyx: It's, but there's no, it's purely[00:25:57] Simon Willison: types, right? Gotcha. Right. So, so I guess mojo is the soup set python, but the emphasis is absolutely on tapping into the performance stuff. Right.[00:26:05] swyx: Well, the just things people actually care about.[00:26:08] Travis Fischer: Yeah. The, the one thing I've found is, is very similar to the early days of type script.[00:26:12] There was the, the, the, the most important thing was that it's incrementally adoptable. You know, cuz people had a script code basis and, and they wanted to incrementally like add. The, the, the main value prop for TypeScript was reliability and the, the, the, the static typing. And with Mojo, Lucia being basically anyone who's a target a large enterprise user of, of Mojo or even researchers, like they're all going to be coming from a, a hardcore.[00:26:36] Background in, in Python and, and have large existing libraries. And the the question will be for what use cases will mojo be like a, a, a really good fit for that incremental adoption where you can still tap into your, your, your massive, like python exi existing infrastructure workflows, data tooling, et cetera.[00:26:55] And, and what does, what does that path to adoption look like?[00:26:59] swyx: Yeah, we, we, we don't know cuz it's a wait listed language which people were complaining about. They, they, the, the mojo creators were like saying something about they had to scale up their servers. And I'm like, what language requires essential server?[00:27:10] So it's a little bit suss, a little bit, like there's a, there's a cloud product already in place and they're waiting for it. But we'll see. We'll see. I mean, emojis should be promising in it. I, I actually want more. Programming language innovation this way. You know, I was complaining years ago that programming language innovation is all about stronger types, all fun, all about like more functional, more strong types everywhere.[00:27:29] And, and this is, the first one is actually much more practical which I, which I really enjoy. This is why I wrote about self provisioning run types.[00:27:36] Simon Willison: And[00:27:37] Alessio Fanelli: I mean, this is kind of related to the post, right? Like if you stop all of a sudden we're like, the models are all the same and we can improve them.[00:27:45] Like, where can we get the improvements? You know, it's like, Better run times, better languages, better tooling, better data collection. Yeah. So if I were a founder today, I wouldn't worry as much about the model, maybe, but I would say, okay, what can I build into my product and like, or what can I do at the engineering level that maybe it's not model optimization because everybody's working on it, but like you said, it's like, why haven't people thought of this before?[00:28:09] It's like, it's, it's definitely super hard, but I'm sure that if you're like Google or you're like open AI or you're like, Databricks, we got smart enough people that can think about these problems, so hopefully we see more of this.[00:28:21] swyx: You need, Alan? Okay. I promise to keep this relatively tight. I know Simon on a beautiful day.[00:28:27] It is a very nice day in California. I wanted to go through a few more points that you have pulled out Simon and, and just give you the opportunity to, to rant and riff and, and what have you. I, I, are there any other points from going back to the sort of Google OpenAI mode documents that, that you felt like we, we should dive in on?[00:28:44] Google AI Strategy[00:28:44] Simon Willison: I mean, the really interesting stuff there is the strategy component, right? The this idea that that Facebook accidentally stumbled into leading this because they put out this model that everyone else is innovating on top of. And there's a very open question for me as to would Facebook relic Lama to allow for commercial usage?[00:29:03] swyx: Is there some rumor? Is that, is that today?[00:29:06] Simon Willison: Is there a rumor about that?[00:29:07] swyx: That would be interesting? Yeah, I saw, I saw something about Zuck saying that he would release the, the Lama weights officially.[00:29:13] Simon Willison: Oh my goodness. No, that I missed. That is, that's huge.[00:29:17] swyx: Let me confirm the tweet. Let me find the tweet and then, yeah.[00:29:19] Okay.[00:29:20] Simon Willison: Because actually I met somebody from Facebook machine learning research a couple of weeks ago, and I, I pressed 'em on this and they said, basically they don't think it'll ever happen because if it happens, and then somebody does horrible fascist stuff with this model, all of the headlines will be Meg releases a monster into the world.[00:29:36] So, so hi. His, the, the, the, a couple of weeks ago, his feeling was that it's just too risky for them to, to allow it to be used like that. But a couple of weeks is, is, is a couple of months in AI world. So yeah, it wouldn't be, it feels to me like strategically Facebook should be jumping right on this because this puts them at the very.[00:29:54] The very lead of, of open source innovation around this stuff.[00:29:58] Zuck Releasing LLaMA[00:29:58] swyx: So I've pinned the tweet talking about Zuck and Zuck saying that meta will open up Lama. It's from the founder of Obsidian, which gives it a slight bit more credibility, but it is the only. Tweet that I can find about it. So completely unsourced,[00:30:13] we shall see. I, I, I mean I have friends within meta, I should just go ask them. But yeah, I, I mean one interesting angle on, on the memo actually is is that and, and they were linking to this in, in, in a doc, which is apparently like. Facebook got a bunch of people to do because they, they never released it for commercial use, but a lot of people went ahead anyway and, and optimized and, and built extensions and stuff.[00:30:34] They, they got a bunch of free work out of opensource, which is an interesting strategy.[00:30:39] There's okay. I don't know if I.[00:30:42] Google Origin Confirmed[00:30:42] Simon Willison: I've got exciting piece of news. I've just heard from somebody with contacts at Google that they've heard people in Google confirm the leak. That that document wasn't even legit Google document, which I don't find surprising at all, but I'm now up to 10, outta 10 on, on whether that's, that's, that's real.[00:30:57] Google's existential threat[00:30:57] swyx: Excellent. Excellent. Yeah, it is fascinating. Yeah, I mean the, the strategy is, is, is really interesting. I think Google has been. Definitely sleeping on monetizing. You know, I, I, I heard someone call when Google Brain and Devrel I merged that they would, it was like goodbye to the Xerox Park of our era and it definitely feels like Google X and Google Brain would definitely Xerox parks of our, of our era, and I guess we all benefit from that.[00:31:21] Simon Willison: So, one thing I'll say about the, the Google side of things, like the there was a question earlier, why are Google so worried about this stuff? And I think it's, it's just all about the money. You know, the, the, the engine of money at Google is Google searching Google search ads, and who uses Chachi PT on a daily basis, like me, will have noticed that their usage of Google has dropped like a stone.[00:31:41] Because there are many, many questions that, that chat, e p t, which shows you no ads at all. Is, is, is a better source of information for than Google now. And so, yeah, I'm not, it doesn't surprise me that Google would see this as an existential threat because whether or not they can be Bard, it's actually, it's not great, but it, it exists, but it hasn't it yet either.[00:32:00] And if I've got a Chatbook chatbot that's not showing me ads and chatbot that is showing me ads, I'm gonna pick the one that's not showing[00:32:06] swyx: me ads. Yeah. Yeah. I, I agree. I did see a prototype of Bing with ads. Bing chat with ads. I haven't[00:32:13] Simon Willison: seen the prototype yet. No.[00:32:15] swyx: Yeah, yeah. Anyway, I I, it, it will come obviously, and then we will choose, we'll, we'll go out of our ways to avoid ads just like we always do.[00:32:22] We'll need ad blockers and chat.[00:32:23] Excellent.[00:32:24] Non-Fiction AI Safety ("y-risk")[00:32:24] Simon Willison: So I feel like on the safety side, the, the safety side, there are basically two areas of safety that I, I, I sort of split it into. There's the science fiction scenarios, the AI breaking out and killing all humans and creating viruses and all of that kind of thing. The sort of the terminated stuff. And then there's the the.[00:32:40] People doing bad things with ai and that's latter one is the one that I think is much more interesting and that cuz you could u like things like romance scams, right? Romance scams already take billions of dollars from, from vulner people every year. Those are very easy to automate using existing tools.[00:32:56] I'm pretty sure for QNA 13 b running on my laptop could spin up a pretty decent romance scam if I was evil and wanted to use it for them. So that's the kind of thing where, I get really nervous about it, like the fact that these models are out there and bad people can use these bad, do bad things.[00:33:13] Most importantly at scale, like romance scamming, you don't need a language model to pull off one romance scam, but if you wanna pull off a thousand at once, the language model might be the, the thing that that helps you scale to that point. And yeah, in terms of the science fiction stuff and also like a model on my laptop that can.[00:33:28] Guess what comes next in a sentence. I'm not worried that that's going to break out of my laptop and destroy the world. There. There's, I'm get slightly nervous about the huge number of people who are trying to build agis on top of this models, the baby AGI stuff and so forth, but I don't think they're gonna get anywhere.[00:33:43] I feel like if you actually wanted a model that was, was a threat to human, a language model would be a tiny corner of what that thing. Was actually built on top of, you'd need goal setting and all sorts of other bits and pieces. So yeah, for the moment, the science fiction stuff doesn't really interest me, although it is a little bit alarming seeing more and more of the very senior figures in this industry sort of tip the hat, say we're getting a little bit nervous about this stuff now.[00:34:08] Yeah.[00:34:09] swyx: So that would be Jeff Iton and and I, I saw this me this morning that Jan Lacoon was like happily saying, this is fine. Being the third cheer award winner.[00:34:20] Simon Willison: But you'll see a lot of the AI safe, the people who've been talking about AI safety for the longest are getting really angry about science fiction scenarios cuz they're like, no, the, the thing that we need to be talking about is the harm that you can cause with these models right now today, which is actually happening and the science fiction stuff kind of ends up distracting from that.[00:34:36] swyx: I love it. You, you. Okay. So, so Uher, I don't know how to pronounce his name. Elier has a list of ways that AI will kill us post, and I think, Simon, you could write a list of ways that AI will harm us, but not kill us, right? Like the, the, the non-science fiction actual harm ways, I think, right? I haven't seen a, a actual list of like, hey, romance scams spam.[00:34:57] I, I don't, I don't know what else, but. That could be very interesting as a Hmm. Okay. Practical. Practical like, here are the situations we need to guard against because they are more real today than that we need to. Think about Warren, about obviously you've been a big advocate of prompt injection awareness even though you can't really solve them, and I, I worked through a scenario with you, but Yeah,[00:35:17] Prompt Injection[00:35:17] Simon Willison: yeah.[00:35:17] Prompt injection is a whole other side of this, which is, I mean, that if you want a risk from ai, the risk right now is everyone who's building puts a building systems that attackers can trivially subvert into stealing all of their private data, unlocking their house, all of that kind of thing. So that's another very real risk that we have today.[00:35:35] swyx: I think in all our personal bios we should edit in prompt injections already, like in on my website, I wanna edit in a personal prompt injections so that if I get scraped, like I all know if someone's like reading from a script, right? That that is generated by any iBot. I've[00:35:49] Simon Willison: seen people do that on LinkedIn already and they get, they get recruiter emails saying, Hey, I didn't read your bio properly and I'm just an AI script, but would you like a job?[00:35:57] Yeah. It's fascinating.[00:36:00] Google vs OpenAI[00:36:00] swyx: Okay. Alright, so topic. I, I, I think, I think this this, this mote is is a peak under the curtain of the, the internal panic within Google. I think it is very val, very validated. I'm not so sure they should care so much about small models or, or like on device models.[00:36:17] But the other stuff is interesting. There is a comment at the end that you had by about as for opening open is themselves, open air, doesn't matter. So this is a Google document talking about Google's position in the market and what Google should be doing. But they had a comment here about open eye.[00:36:31] They also say open eye had no mode, which is a interesting and brave comment given that open eye is the leader in, in a lot of these[00:36:38] Simon Willison: innovations. Well, one thing I will say is that I think we might have identified who within Google wrote this document. Now there's a version of it floating around with a name.[00:36:48] And I look them up on LinkedIn. They're heavily involved in the AI corner of Google. So my guess is that at Google done this one, I've worked for companies. I'll put out a memo, I'll write up a Google doc and I'll email, email it around, and it's nowhere near the official position of the company or of the executive team.[00:37:04] It's somebody's opinion. And so I think it's more likely that this particular document is somebody who works for Google and has an opinion and distributed it internally and then it, and then it got leaked. I dunno if it's necessarily. Represents Google's sort of institutional thinking about this? I think it probably should.[00:37:19] Again, this is such a well-written document. It's so well argued that if I was an executive at Google and I read that, I would, I would be thinking pretty hard about it. But yeah, I don't think we should see it as, as sort of the official secret internal position of the company. Yeah. First[00:37:34] swyx: of all, I might promote that person.[00:37:35] Cuz he's clearly more,[00:37:36] Simon Willison: oh, definitely. He's, he's, he's really, this is a, it's, I, I would hire this person about the strength of that document.[00:37:42] swyx: But second of all, this is more about open eye. Like I'm not interested in Google's official statements about open, but I was interested like his assertion, open eye.[00:37:50] Doesn't have a mote. That's a bold statement. I don't know. It's got the best people.[00:37:55] Travis Fischer: Well, I, I would, I would say two things here. One, it's really interesting just at a meta, meta point that, that they even approached it this way of having this public leak. It, it, it kind of, Talks a little bit to the fact that they, they, they felt that that doing do internally, like wasn't going to get anywhere or, or maybe this speaks to, to some of the like, middle management type stuff or, or within Google.[00:38:18] And then to the, the, the, the point about like opening and not having a moat. I think for, for large language models, it, it, it will be over, over time kind of a race to the bottom just because the switching costs are, are, are so low compared with traditional cloud and sas. And yeah, there will be differences in, in, in quality, but, but like over time, if you, you look at the limit of these things like the, I I think Sam Altman has been quoted a few times saying that the, the, the price of marginal price of intelligence will go to zero.[00:38:47] Time and the marginal price of energy powering that intelligence will, will also hit over time. And in that world, if you're, you're providing large language models, they become commoditized. Like, yeah. What, what is, what is your mode at that point? I don't know. I think they're e extremely well positioned as a team and as a company for leading this space.[00:39:03] I'm not that, that worried about that, but it is something from a strategic point of view to keep in mind about large language models becoming a commodity. So[00:39:11] Simon Willison: it's quite short, so I think it's worth just reading the, in fact, that entire section, it says epilogue. What about open ai? All of this talk of open source can feel unfair given open AI's current closed policy.[00:39:21] Why do we have to share if they won't? That's talking about Google sharing, but the fact of the matter is we are already sharing everything with them. In the form of the steady flow of poached senior researchers until we spent that tide. Secrecy is a moot point. I love that. That's so salty. And, and in the end, open eye doesn't matter.[00:39:38] They are making the same mistakes that we are in their posture relative to open source. And their ability to maintain an edge is necessarily in question. Open source alternatives. Canned will eventually eclipse them. Unless they change their stance in this respect, at least we can make the first move. So the argument this, this paper is making is that Google should go, go like meta and, and just lean right into open sourcing it and engaging with the wider open source community much more deeply, which OpenAI have very much signaled they are not willing to do.[00:40:06] But yeah, it's it's, it's read the whole thing. The whole thing is full of little snippets like that. It's just super fun. Yes,[00:40:12] swyx: yes. Read the whole thing. I, I, I also appreciate that the timeline, because it set a lot of really great context for people who are out of the loop. So Yeah.[00:40:20] Alessio Fanelli: Yeah. And the final conspiracy theory is that right before Sundar and Satya and Sam went to the White House this morning, so.[00:40:29] swyx: Yeah. Did it happen? I haven't caught up the White House statements.[00:40:34] Alessio Fanelli: No. That I, I just saw, I just saw the photos of them going into the, the White House. I've been, I haven't seen any post-meeting updates.[00:40:41] swyx: I think it's a big win for philanthropic to be at that table.[00:40:44] Alessio Fanelli: Oh yeah, for sure. And co here it's not there.[00:40:46] I was like, hmm. Interesting. Well, anyway,[00:40:50] swyx: yeah. They need, they need some help. Okay. Well, I, I promise to keep this relatively tight. Spaces do tend to have a, have a tendency of dragging on. But before we go, anything that you all want to plug, anything that you're working on currently maybe go around Simon are you still working on dataset?[00:41:04] Personal plugs: Simon and Travis[00:41:04] Simon Willison: I am, I am, I'm having a bit of a, so datasets my open source project that I've been working on. It's about helping people analyze and publish data. I'm having an existential crisis of it at the moment because I've got access to the chat g p T code, interpreter mode, and you can upload the sequel light database to that and it will do all of the things that I, on my roadmap for the next 12 months.[00:41:24] Oh my God. So that's frustrating. So I'm basically, I'm leaning data. My interest in data and AI are, are rapidly crossing over a lot harder about the AI features that I need to build on top of dataset. Make sure it stays relevant in a chat. G p t can do most of the stuff that it does already. But yeah the thing, I'll plug my blog simon willis.net.[00:41:43] I'm now updating it daily with stuff because AI move moved so quickly and I have a sub newsletter, which is effectively my blog, but in email form sent out a couple of times a week, which Please subscribe to that or RSS feed on my blog or, or whatever because I'm, I'm trying to keep track of all sorts of things and I'm publishing a lot at the moment.[00:42:02] swyx: Yes. You, you are, and we love you very much for it because you, you are a very good reporter and technical deep diver into things, into all the things. Thank you, Simon. Travis are you ready to announce the, I guess you've announced it some somewhat. Yeah. Yeah.[00:42:14] Travis Fischer: So I'm I, I just founded a company.[00:42:16] I'm working on a framework for building reliable agents that aren't toys and focused on more constrained use cases. And you know, I I, I look at kind of agi. And these, these audigy type type projects as like jumping all the way to str to, to self-driving. And, and we, we, we kind of wanna, wanna start with some more enter and really focus on, on reliable primitives to, to start that.[00:42:38] And that'll be an open source type script project. I'll be releasing the first version of that soon. And that's, that's it. Follow me you know, on here for, for this type of stuff, I, I, I, everything, AI[00:42:48] swyx: and, and spa, his chat PT bot,[00:42:50] Travis Fischer: while you still can. Oh yeah, the chat VT Twitter bot is about 125,000 followers now.[00:42:55] It's still running. I, I'm not sure if it's your credit. Yeah. Can you say how much you spent actually, No, no. Well, I think probably totally like, like a thousand bucks or something, but I, it's, it's sponsored by OpenAI, so I haven't, I haven't actually spent any real money.[00:43:08] swyx: What? That's[00:43:09] awesome.[00:43:10] Travis Fischer: Yeah. Yeah.[00:43:11] Well, once, once I changed, originally the logo was the Chachi VUI logo and it was the green one, and then they, they hit me up and asked me to change it. So it's now it's a purple logo. And they're, they're, they're cool with that. Yeah.[00:43:21] swyx: Yeah. Sending take down notices to people with G B T stuff apparently now.[00:43:26] So it's, yeah, it's a little bit of a gray area. I wanna write more on, on mos. I've been actually collecting and meaning to write a piece of mos and today I saw the memo, I was like, oh, okay. Like I guess today's the day we talk about mos. So thank you all. Thanks. Thanks, Simon. Thanks Travis for, for jumping on and thanks to all the audience for engaging on this with us.[00:43:42] We'll continue to engage on Twitter, but thanks to everyone. Cool. Thanks everyone. Bye. Alright, thanks everyone. Bye. Get full access to Latent Space at www.latent.space/subscribe
NTD Business News: 1/20/20231. U.S. Existing Home Sales Lowest Since 20102. T-Mobile Investigating Extensive Data Breach3. 12,000 Layoffs at Google: Memo to Employees4. Vox Media Cuts Staff as Industry Struggles5. Boeing Ordered to Court Over Fraud Case
Adam and Mark open this week's episode of Reasonable Doubt talking about the popularity of the true crime genre, which leads to some discussion on the Central Park 5 and Netflix's new drama 'When They See Us'. Then the guys talk about the attack on free speech coming from the left with the tech giants leading the way as Adam dives into the various PragerU videos that Youtube has censored simply because they don't agree with the views being expressed. Please Support Our Sponsors: TrueCar.com for all your new & used car buying needs Go to LegalZoom.com & use code DOUBT at checkout Go to TeenCounciling.com to get help for your teenager Sleep smarter, go to EightSleep.com/Doubt Download Pluto TV on all your favorite devices
A chat with James Damore. We talk about his memo, get a [...]
Kofi, Marcus and Kent talk : The Google Memo, Gender Biases in the Workplace and More!
Kofi, Marcus and Kent talk : The Google Memo, Gender Biases in the Workplace and More!
In this episode of the Making Sense podcast, Sam Harris speaks with Martie Haselton about sex and gender, the role of hormones in human psychology, “Darwinian feminism,” the unique hormonal experience of women, transgenderism, the Google Memo, and other topics. SUBSCRIBE to continue listening and gain access to all content on samharris.org/subscribe.
Sam Harris speaks with Martie Haselton about sex and gender, the role of hormones in human psychology, “Darwinian feminism,” the unique hormonal experience of women, transgenderism, the Google Memo, and other topics. Martie Haselton is an interdisciplinary evolutionary scientist and Professor of Psychology at UCLA. She is the author of Hormonal: The Hidden Intelligence of Hormones – How They Drive Desire, Shape Relationships, Influence Our Choices, and Make Us Wiser. Twitter: @haselton
Robert Wright is a former senior editor at The New Republic, and he currently hosts The Wright Show. He’s also the author of several bestselling books on evolution and society. His latest book Is Why Buddhism Is True. Behind Bob’s Mindful Resistance Newsletter [0:00] Tribal tweets and popularity [5:28] Evaluating Heterodox Academy [16:00] The Google Memo [21:40] The intellectual dark web/Evolutionary psychology [25:25] Bob’s near-term plans [31:45] Mindfulness and De-Biasing Oneself [37:46]
For Wrongspeak's introductory episode, Debra speaks with ex-Google engineer James Damore and takes a close look at the science behind his infamous “Google Memo” on gender in the tech field. Was Damore punished for his inconvenient brain?
This week Sense and Theory tackle a listener requested topic in cognitive dissonance. From Stormy Daniels and Roy Moore to NFL Protests and the Google Memo the fellas explore how cognitive dissonance can lead us astray of the very values and beliefs we seek to protect.
In Part 3 of The Image of God & the Feminine Experience, Philosopher Dr. Rachel Douchant leads us from Thomas Aquinas to the "Google Memo," teasing out how Enlightenment rationalism continues to impact women today. Rachel Douchant is Professor of Philosophy at Lindenwood University, Co-chair of the Lindenwood Honors College, and Director of the Liberty & Ethics Center. Her research interests include Hume’s classical liberalism, the Philosophy of Economics, and Aristotelian Virtue Theory. 'r2up8tnv'
James Damore and his ex-colleague filed a class-action [...]
REFERENCES: Interviews watched: FDR: https://www.youtub [...]
In early August, 2017, James Damore was fired from Google for writing the now famous #GoogleMemo. In it he documented Google's internal echo chamber, some of the dangers the companies policies toward men and women present, and some suggestions for how to change moving forward. The document wasn't received as he had hoped, however, and he was fired after the memo went viral. In this 10th episode, Jason Hartman talks to James about the dangers of a company as big as Google becoming something akin to a thought police, whether algorithms should be public, what parts of the memo were ignored by the media (but incredibly important), and why we're seeing less and less diversity of thought in the world today. He has recently filed a class action lawsuit against Google. Key Takeaways: [2:38] How the Google memo came along [6:06] People talk about diversity, but don't surround themselves with any diversity of thought [7:42] An overview of the memo and what people considered so offensive [13:49] Some aspects of the Google memo were ignored by the media [15:46] We need to stop defining success solely in monetary terms, and women are now facing the stress related health concerns previously almost exclusively held by men [19:32] If we don't acknowledge, and celebrate, our differences as people, EVERYONE loses Website: www.FiredForTruth.com www.Twitter.com/JamesADamore
In early August, 2017, James Damore was fired from Google for writing the now famous #GoogleMemo. In it he documented Google's internal echo chamber, some of the dangers the companies policies toward men and women present, and some suggestions for how to change moving forward. The document wasn't received as he had hoped, however, and he was fired after the memo went viral. In this 10th episode, Jason Hartman talks to James about the dangers of a company as big as Google becoming something akin to a thought police, whether algorithms should be public, what parts of the memo were ignored by the media (but incredibly important), and why we're seeing less and less diversity of thought in the world today. Key Takeaways: [2:38] How the Google memo came along [6:06] People talk about diversity, but don't surround themselves with any diversity of thought [7:42] An overview of the memo and what people considered so offensive [13:49] Some aspects of the Google memo were ignored by the media [15:46] We need to stop defining success solely in monetary terms, and women are now facing the stress related health concerns previously almost exclusively held by men [19:32] If we don't acknowledge, and celebrate, our differences as people, EVERYONE loses Website: www.FiredForTruth.com www.Twitter.com/JamesADamore
Why do we get offended? What's the biological basis for it and what are some of the social pros and cons to being morally outraged? Or, in this 21st century, are we becoming too emotional, hyper senstitive and far too easily offended? Featuring in this episode is James Damore, the former Google employee who wrote the infamous Google Memo on diversity, Professor Stephanie Preston a psychologist from the University of Michighan, Dr Zachary Rothschild who has researched extensively into moral outrage, The Broadcasting Authority of Ireland, comedian, actress and writer Tara Flynn and Clinical Psychologist Dr. Shawn Smith. If you enjoy the episode please rate and review on iTunes, share it online and help spread the word. You can also become a patron to the show on Patreon.
For some people, yes, computers are necessary and valuable, but for a lot of other people, they are simply accelerating and enabling this useless information consumption. In this episode, we cover how the Internet, social media, television, and technology is ruining our abilities to think, reason, entertain ourselves, and what to do about it. Amusing Ourselves to Death is one of both of our favorite books, and it was fun to see how much it related to the other topics we’ve been covering. We covered a wide range of topics, including: How various forms of information affect our perception The prevalence of fake news now People concerned about others more than themselves Technology negatively affecting our attention spans The psychological aspects of the media and commercials Minimizing technological distractions How technology has changed our conversations Enjoy! If you want more on Amusing Ourselves to Death, be sure to check out Nat’s notes on the book and to pick up a copy yourself! If you enjoyed this episode, be sure to listen to our episode on The Sovereign Individual, to better prepare yourself for the cyber-economic future, and to our episode on In Praise of Idleness, to reduce the guilt to work so much and to improve your leisure time. Mentioned in the show: Orwell’s essays [2:37] USA Today [12:50] Buzzfeed [13:05] Business Insider [13:10] Lincoln and Douglas debates [17:09] Pulp Fiction [21:10] Nat’s article on most popular internet sites [28:20] Alexa [28:22] Nat’s 5-day water fast article [30:45] Nat’s article on Buzzfeed vs WSJ [33:46] Neil’s website [33:13] Fushimi-Inari-Taisha Shrine [40:59] The Daily Show [1:02:24] The Colbert Report [1:02:25] Jon Stewart interview fake news [1:05:05] Jon Stewart interview on Crossfire [1:05:37] Crossfire show [1:05:37] Free speech issue on campuses article [1:06:59] Trump’s policies [1:12:55] Trump’s speech in Virginia [1:13:35] The Google Memo [1:16:10] (Nat’s article on this) Made You Think episode on The Sovereign Individual [1:22:05] Estee Lauder [1:25:10] Sesame Street [1:27:50] Duolingo [1:29:18] Nat Chat podcast [1:31:12] Slack [1:36:18] Nat’s Facebook setup [1:41:06] Second Life [1:53:04] Books mentioned: Amusing Ourselves to Death [1:05] (Nat’s Notes) Brave New World [1:32] 1984 [1:18] Antifragile [9:13] (Antifragile’s Made You Think episode) (Nat’s Notes) It’s Charisma, Stupid [9:25] Thomas Paine's Common Sense [21:56] The Subtle Art of Not Giving a Fuck [22:41] (Nat’s Notes) 50 Shades of Grey [23:15] Musashi [31:36] The 4-Hour Workweek [1:36:50] (Nat’s Notes) People mentioned: Neil Postman [1:07] George Orwell [1:18] Aldous Huxley [1:32] William Taft [7:20] Abraham Lincoln [7:25] Franklin D. Roosevelt [7:55] Donald Trump [8:30] Barack Obama [8:40] George Bush [8:41] Bill Clinton [8:43] Ronald Reagan [8:44] John F. Kennedy [8:47] Chris Christie [8:52] Paul Graham [9:24] Shakespeare [17:02] Stephen A. Douglas [17:09] Samuel L. Jackson [21:19] John Travolta [21:19] Thomas Paine [21:56] Mark Manson [22:39] James Patterson [26:54] Walden [37:41] Jim Kramer [51:55] Bernie Sanders [1:00:04] Plato [1:09:50] Socrates [1:09:50] Nassim Nicholas Taleb [1:10:12] Hillary Clinton [1:21:00] Scott Adams [1:21:07] Ted Cruz [1:21:07] Justin Mares [1:36:16] Tim Ferriss [1:36:56] 0:00 - Intro to the book’s discussion, an excerpt being read, and the book’s background. 4:14 - Discussion on how the form of the information portrayed affects how we perceive that information, and some of the informational form shifts that we’ve had so far. 6:57 - The visual components of information, and the power of appearance and charisma on success and popularity. 9:58 - Thoughts on the validity of written things versus other forms of information. 12:20 - Discussion on the media and the change of what now passes for quality knowledge. 17:17 - Talk on the lengthy Lincoln and Douglas debates in the 1800’s and how people were able to sit and maintain focus for upwards of seven hours. Also, discussion on how frequently television changes the screen on you. 21:48 - How much more of a book culture it was back in the day. Also, discussion on how reading and typing in full sentences improves speech. 24:49 - Before the internet, the ability to pay attention was much greater, but now there are constant distractions from the internet that diminish that. Also, talk on how many fewer people are reading longer and tougher books now. 31:59 - Discussion on information requiring much more context and evidence, and talk on the click-baity information out there. Talk on websites pushing information that maximizes ad revenue, instead of quality information. 35:28 - The impact that improved informational transfer speed has had on us, positively and negatively. 38:07 - Thoughts on how so many people are fixated on the lives of others, and the negative impact that social media and technology on us by disconnecting us from the present moment. Also, the social pressure of these things. 47:09 - How little the news affects our decisions and how little we actually do to change things that we don’t necessarily like. 52:05 - The large amount of cases where value is added to meaningless data, especially in the news. Also, the news constantly making small issues seem much larger and promoting fake scenarios.56:11 - Discussion on the “peek-a-boo” events that pop up quick, blow up, and then disappear, mostly for entertainment. 57:35 - How television has changed conversation, political changes, and the president using the media to get elected. 1:01:15 - People taking news sources seriously, even though the information is taken out of context and misconstrued. 1:06:40 - The issue with us magnifying small differences and making huge deals out of them and some examples of this. 1:11:33 - How frequent the story changes on the news or on social media “the infinite scroll”, and the media manipulating stories so often, making it extremely hard to trust them. 1:19:30 - Commercials being addressed to the psychological needs of the viewer and not the actual product being sold. Also, politicians using catchy sound bites to have people pay attention to them. 1:27:50 - Discussion on various methods of teaching and the huge number of flaws in these teaching methods. Also, how these widespread methods and technology negatively impact us and our attention span. 1:35:18 - How to have an effective schedule for minimizing these technological distractions and some thoughts on this. 1:41:44 - Discussion on us never needing to be bored again due to technology, and the possible negative impact this has on creativity. 1:44:58 - How much computers really help us, and how they accelerate the intake of useless information. Also, the possible future impacts that current technology will have on us and the workforce. 1:54:06 - Some things that will need to change in teaching systems to fix our shrinking attention spans. 1:57:34 - Wrap-up. Be sure to let us know your thoughts on the episode on Twitter! Simply being able to pay attention will be an extremely valuable skill that ninety percent of us won’t have. If you enjoyed this episode, don’t forget to subscribe at https://madeyouthinkpodcast.com
A critical discussion about the social and politics ramifications in response to the firing of James Damore for his Google memo.
I’m back, babes, and better than ever, or rather, not actively sick any more. And after spending some time talking about sickness and rest (with some reading recommendations below), I’m thrilled to be bringing you my conversation with PhD student and feminist shit-talker Aadita Chaudhury. After Aadita tells me exactly what STS is (you have to … Continue reading Episode 1.13 Laughing Disparagingly at the Google Memo Dude with Aadita Chaudhury
On an episode of Dave Rubin, James Damore lamented the fact that no one on the left had been willing to have a long form discussion with him and that his previous ones had been edited to be misleading. So, I wanted to show that I, as a lefty, would be more than willing to have a long form, unedited discussion with him. Here is his original memo. Here's probably the best source of scientific refutations of his memo. Google Unconscious Bias Training I referenced. Really good source on sexism in tech. Leave Thomas a voicemail! (916) 750-4746, remember short and to the point! Support us on Patreon at: patreon.com/seriouspod Follow us on Twitter: @seriouspod Facebook: https://www.facebook.com/seriouspod For comments, email thomas@seriouspod.com
Show #179 | Guests: Jim Brosnahan, Deborah Rhode, and Peter Scheer | Show Summary: From the Google Memo to public statues to campus protests, accusations of quashed free-speech rights are flying. Is picketing a college speaker an effort to shut down discourse? How protected is an employee writing internal memos on company policy? If residents feel a memorial expresses their history, can the majority take that away? How do so many Americans mistake, say, moderation of comment sections as a breach of their First Amendment rights? In Deep has pulled together a panel reflecting deep experience in activism, the courtroom, and the classroom to address these thorny questions.
TGIF, Football, Fantasy Football, and the Google Memo
John and I discuss the Google memo that got the engineer fired and related issues in the work place.
Today I'm joined by Felicia Entwistle of the Utah Outcasts to talk about that Google memo and the science of gender differences. I think we did a good job of representing the nuance of the science and outlining some of the fallacies in the discourse over this whole thing. Lots of links! Original Memo With Links; NPR Why Women Stopped Coding; Justices Interrupted; Gender Differences Study; Another Gender Differences Study; Women Face More Stress; Stress Study; Women from Google; Sexism in Tech; Debra So Article Leave Thomas a voicemail! (916) 750-4746, remember short and to the point! Support us on Patreon at: patreon.com/seriouspod Follow us on Twitter: @seriouspod Facebook: https://www.facebook.com/seriouspod For comments, email thomas@seriouspod.com
This week Rafael and I get lost in Google’s increasingly large and increasingly subjective maze of products and controversies, Rafael almost invents a new musical genre and Jeremy neglects the emotional needs of his power outlets. Metrograph, Rafael’s new favourite local theatre http://metrograph.com/ Zabriskie Point, Michelangelo Antonioni http://www.imdb.com/title/tt0066601/ Cancel Netflix https://www.netflix.com/CancelPlan Alien Covenant on Rotten Tomatoes https://www.rottentomatoes.com/m/alien_covenant/ Guardians of the Galaxy Vol 2 on RT https://www.rottentomatoes.com/m/guardians_of_the_galaxy_vol_2/ Frank Costanza https://www.youtube.com/watch?v=7gVi-kIVY4I Farewell Hulu, Hello Criterion Collection https://www.criterion.com/current/posts/4293-farewell-hulu-hello-criterion-channel Tiff Bell Lightbox Theatre https://www.tiff.net/ David Ross https://www.youtube.com/watch?v=PHk0XatMV6s Radical Candor, Kim Scott https://www.radicalcandor.com/ Loose Lips Sink Ships https://en.wikipedia.org/wiki/Loose_lips_sink_ships James Damore’s Google Memo https://www.wired.com/story/the-pernicious-science-of-james-damores-google-memo/ Culture Amp https://www.cultureamp.com/ Project Include http://projectinclude.org/ Alphabet https://abc.xyz/ History of high heel shoes http://history-of-heels.weebly.com/origins-of-high-heels.html Playground http://playground.global/ Andy Rubin wants to unleash AI on the world https://www.wired.com/2016/02/android-inventor-andy-rubin-playground-artificial-intelligence/ Google X, the moonshot company https://x.company/ Google Wave https://en.wikipedia.org/wiki/Apache_Wave Slack https://slack.com Asana https://asana.com/ Workplace by Facebook https://www.facebook.com/workplace Bing https://www.bing.com/ DuckDuckGo https://duckduckgo.com/ Back Rub http://www.businessinsider.com/the-true-story-behind-googles-first-name-backrub-2015-10 How Meta Tags affect search engines http://www.wordstream.com/meta-tags Google RankBrain https://en.wikipedia.org/wiki/RankBrain 22 Immutable Laws of Marketing https://www.slideshare.net/GrahamMcInnes1/22-immutable-laws-of-marketing Killer robots: World’s top AI and robotics companies urge United Nations to ban lethal autonomous weapons https://futureoflife.org/2017/08/20/killer-robots-worlds-top-ai-robotics-companies-urge-united-nations-ban-lethal-autonomous-weapons/ Noise Dub https://www.last.fm/tag/noise+dub Postinternet http://www.artspace.com/magazine/interviews_features/trend_report/post_internet_art-52138 Google’s Associate Product Manager Program https://www.wired.com/2012/07/marissas-secret-weapon-for-recruiting-new-yahoo-talent/ In The Plex: How Google Thinks, Works, and Shapes Our Lives https://www.amazon.com/Plex-Google-Thinks-Works-Shapes/dp/1416596585/ Swype keyboard http://www.swype.com/ Android fragmentation http://bgr.com/2017/07/07/android-market-share-versions-july-2017/ Self Employment in the USA (10.1%) https://www.bls.gov/spotlight/2016/self-employment-in-the-united-states/pdf/self-employment-in-the-united-states.pdf Tom’s gel toothpaste https://www.amazon.com/Maine-Fluoride-Natural-Toothpaste-Spearmint/dp/B01IAE0OX0/ref=sr_1_2_a_it?ie=UTF8&qid=1503877330&sr=8-2&keywords=tom%27s+gel+toothpaste+8+pack Prime Pantry https://www.amazon.com/Prime-Pantry/ Constant Dullaart http://constantdullaart.com/ See Saw – Gallery Guide https://itunes.apple.com/ca/app/see-saw-gallery-guide/id791643418?mt=8 Avertising Break – Zirkel http://zirkelgame.com/ Google 20% time is dead https://9to5google.com/2013/08/16/googles-20-percent-time-birthplace-of-gmail-google-maps-adsense-now-effectively-dead/ Rafael Rozendaal image search https://www.google.ca/search?q=rafael+rozendaal&rlz=1C5CHFA_enCA727CA728&source=lnms&tbm=isch&sa=X&ved=0ahUKEwj9-6ygzfjVAhVl3IMKHQS3C0sQ_AUICigB&biw=1262&bih=882 Taylor Swift deletes social media https://www.theverge.com/tldr/2017/8/18/16169342/taylor-swift-social-media-black-out-reddit-theories Field Recording: Pyry Qvick, Tampere, Finland
Did you know that every year you eat 3 spiders in your sleep? But did you know that fact is most likely made up? But did you know that lots of facts are made up? But did you know that Kevin makes up 86% of the things he says at any given point in time? But did you know that any time he makes up a statistic he set 86%? But did you know that the average velocity of an unladen swallow is in direct proportion to it's wingspan? But did you know Shakespeare invented the word eyeball? But did you know that this bit was stolen from Movies with Mikey? But did you know that we'll never get in trouble for stealing it because we will never reach even a fraction of a percent of the audience of that admittedly minor youtube show? But did you know that worrying about obscurity on the internet is one of the worst things to worry about? But did you know that if you scream into a jar and close it fast enough you can release the scream in the face of your enemies? But did you know having too many enemies can dramatically increase your blood pressure? But did you know petting a cat can lower your blood pressure? Where was I going with this? (Recorded on August 14, 2017.) Links Kevin (still) recommends Master of None. (Jesse still hasn't seen it.) Co-creator of Master of None, Alan Yang, also directed Moonlight for JAY-Z. Kevin enjoyed Ali Wong's standup special Baby Cobra. Kevin did not enjoy Jiro Dreams of Sushi. The creator of Jiro Dreams of Sushi, David Gelb, also made the similarly problemed Chef's Table. Go read some creepypastas at r/nosleep, specifically this one. Chris Straub's Scared Yet is a series of creepypasta reviews. Don't think about the peepee poopoo man. Sarah Jeong wrote the ultimate piece on the Google Memo. XKCD explains free speech. Mikey Neumann's latest is about Pan's Labyrinth. Every Frame a Painting's video on the Coen Brother's is incredible. YourMovieSucksDOTorg is Synecdoche, New Yorking-ing their video series on Synecdoche, New York.
Colin Anderson and Candice Chetta, once again, find themselves flabbergasted at the things some new White Dude Of The Week, James Denmore, felt the need to post online. James went ahead and posted his entire behind on the internet for the whole world to see. Colin and Candice take this memo apart, piece by piece. … Continue reading "Ep8: “Google Memo” Bro And White Male Fragility"
In this week's episode, Natalia, Neil, and Niki debate the role of neo-Nazis in the white nationalist violence in Charlottesville, the Google memo, and the political power of blondeness.
A male product manager at a tech company gives his take on the Google Memo that's been a big topic of discussion in recent weeks.To bring you up to speed: a software developer at Google called James Demore wrote a memo about the company's diversity policies. This memo spread fast internally before making it's way outside Google and causing a lot of controversy, and led to him being fired by Google. Demore wrote largely about so-called scientific and psychological differences between men and women and how they explain the gender balance in tech, and provdes research to back up his claims. (See the links below for more details along with two very good articles discussing the memo.)Alec Molloy has been a product manager for the past five years, and has worked in San Francisco, London and Malmo. He talks about how in some ways the memo accurately describes the situation within the tech industry, how it's calm language is both dangerous and also something he thinks these debates could benefit more from, and what he hopes future generations will take away from the discussions about gender diversity in tech.Host: Nas aka Nastaran Tavakoli-FarGuest: Alec Molloy, product manager at a tech companyLinks: ‘The Google Memo' http://bit.ly/2ubOmpP) by James Damore On Google's firing of Damore http://read.bi/2uMBV3c‘The email Larry Page should have written to James Damore' http://econ.st/2vMFoAP ‘I'm a woman in computer science, let me ladysplain the Google Memo to you' http://bit.ly/2uvAif6 by Cynthia Lee in Vox ‘The most common error in the media's coverage of the Google Memo' http://theatln.tc/2vgKtCN by Conor Friedersdorf in The Atlantic The Gender Knot www.thegenderknot.com
Haley from the Bored At Work podcast, joins us to give the female perspective on the Google Memo.
Darren is back and he has some thoughts about the ongoing reporting and discussions related to the Google memo. In preparation for the solar eclipse on August 21, Cristina checks into some commonly held beliefs about eclipses. Lastly, Adam looks into the ideas that twins run in the family and skip a generation.
Tech 411 dives deep into the controversial "Google Memo" that sent diversity shock waves from Silicon Valley across our nation. A new technology that can filter the Internet from toxic thoughts. Plus our apps of the week.
This week we talk about new iPhone rumors and of course we had to chop it up about that Google memo that's been going around. Does James Damore raise any good points or is he just anti-diversity? Our discussion might surprise you. Be sure to subscribe and leave a review! Check out Witty: Women in Tech Talk to Yaz! Join Yasmin every week as she chats with female techies about their career, the challenges of the industry, tech news and more. Her guests range from young engineers to top execs; working at the largest tech goliaths to the smallest start ups; hailing from North America to Asia https://itunes.apple.com/us/podcast/witty-women-in-tech-talk-to-yaz/id1187617974?mt=2
La tertulia semanal en la que repasamos las últimas noticias de la actualidad científica. En el episodio de hoy: Nuevas estudios sobre las diferencias fisiológicas entre cerebros de hombres y de mujeres; El Manifiesto de Google: Debatimos los diferentes puntos de vista; Pareidolia: Los monos también ven caras en las cosas; Observaciones de la estratosfera de un exoplaneta; Eclipse solar: Paranoias apocalípticas (y dale) y cómo observarlo de forma segura. En la foto, de izquierda a derecha: Héctor Socas, Carlos González, Carmen Agustín, Nacho Trujillo. Todos los comentarios vertidos durante la tertulia representan únicamente la opinión de quien los hace… y a veces ni eso. CB:SyR es una colaboración entre el Área de Investigación y la Unidad de Comunicación y Cultura Científica (UC3) del Instituto de Astrofísica de Canarias.
La tertulia semanal en la que repasamos las últimas noticias de la actualidad científica. En el episodio de hoy: Nuevas estudios sobre las diferencias fisiológicas entre cerebros de hombres y de mujeres; El Manifiesto de Google: Debatimos los diferentes puntos de vista; Pareidolia: Los monos también ven caras en las cosas; Observaciones de la estratosfera de un exoplaneta; Eclipse solar: Paranoias apocalípticas (y dale) y cómo observarlo de forma segura. En la foto, de izquierda a derecha: Héctor Socas, Carlos González, Carmen Agustín, Nacho Trujillo. Todos los comentarios vertidos durante la tertulia representan únicamente la opinión de quien los hace… y a veces ni eso. CB:SyR es una colaboración entre el Área de Investigación y la Unidad de Comunicación y Cultura Científica (UC3) del Instituto de Astrofísica de Canarias.
This week Rafa and Kevin are joined by Michael Flarup to talk about public speaking and the newly released Speakerdex!Follow up Hey, the Google Memo dude was fired
In this episode, Yaron revisits the Google Memo and the Charlottesville fallout, what these actions say about the direction we are heading and the overall threat on free speech.Each week Yaron will explore key components of Objectivism and apply Objectivist values to current events. He welcomes your questions altruism, virtue, productiveness and living Objectivism, so call in, email or tweet!Didn't get a chance to call in? Got Questions or hot topics you want to hear Yaron address? Email Yaron at AskYaron@YaronBrookShow.comContinue the discussions anywhere on line after show time using #YaronBrookShow. Connect with Yaron via Tweet @YaronBrook or follow him on Facebook @ybrook and YouTube (/YaronBrook) where the Facebook Live videos of the BTR shows are now available for your viewing pleasure.Want more episodes? Tune in to the new Yaron Brook Show on Blaze Radio at http://www.theblaze.com/radio-shows/the-yaron-brook-show/ on Sundays at 2 PM ET for live shows or go to BlogTalkRadio (www.blogtalkradio.com/yaronbrook) for on-demand shows.
Local News Chat (0:00)City Hall Selfie Day with Chad Doran (13:45)The Takeaway: Understanding Different Perspectives (18:42)Betsy Borns on Understanding and Solving Homelessness (29:18)Tommy Clifford on Charlottesville and Google Memo (58:10)
A white nationalist drives his car into a crowd of peaceful protesters - Trump fails to show leadership. PLUS we discuss the Google Memo, and what it means to be a white man in the face of a diversifying world.
Groups and Identity Politics Never Help - We Need To Treat Individuals Individually --- Send in a voice message: https://anchor.fm/thinkfuture/message Support this podcast: https://anchor.fm/thinkfuture/support
We discuss all things Charlottesville, look at some of the mainstream news media coverage of the Google memo, share Ashley Judd's airport struggle and more! Support the show and help us make it better! Become a Patron: http://www.patreon.com/beautyandthebeta Make a one-time contribution on PayPal: http://www.paypal.me/beautyandthebeta Blonde's channel: http://bit.ly/23RrR3z Blonde's Twitter: http://bit.ly/2t41Wvc Matt's Twitter: http://bit.ly/2ib6eKr Email the show: beautyandthebeta@gmail.com Beauty & the Beta on demand: http://bit.ly/1TUcepj Listen on iTunes: http://apple.co/23YM9rM Listen on Google Play: http://bit.ly/2iFWOqD Listen on Soundcloud: http://bit.ly/1TUce8E Listen on Stitcher: http://bit.ly/1TlubhE Listen on Podbean: http://bit.ly/1TUcnJ8 MUSIC Bearing and SugarTits' cover of "Catch the Wind" http://bit.ly/2fu9qUO "Dog Park" and "Odahviing" written and performed by AENEAS: http://bit.ly/2sibPZ7 GOOGLE MEMO LINKS Read James Damore's full memo: http://bit.ly/2uAynWh James Damore with Jordan Peterson: https://youtu.be/SEDuVF7kiPU James Damore with Stefan Molyneux: https://youtu.be/TN1vEfqHGro ITEMS REFERENCED Tom's art: http://bit.ly/2uAQs6T Chase's art: http://bit.ly/2uBb9ja Car footage: http://bit.ly/2uSPHBs About the suspect: http://bit.ly/2uArwMG Trump's response: https://youtu.be/ivvFbSciGiY Terry McAuliffe's response: http://bit.ly/2uTI2ma Politico calls out Trump for not disavowing white supremacists: http://politi.co/2uzCKRM CBS' coverage of the Google memo: https://youtu.be/iW2nGw7J9ks CNN/MSNBC quick hits: https://youtu.be/jA6-I9EXte0 Five Thirty-Eight analysis of 2018 Democratic electoral prospects: http://53eig.ht/2uBaqyi Bill Maher says make Russia an issue: https://youtu.be/A3XmejZehaM Yvette Felarca goes to court: https://youtu.be/dfwnecz5fww Ashley Judd at the airport part 1: http://bit.ly/2uBkXtd Ashley Judd at the airport part 2: http://bit.ly/2uAWxA4 Surprise cringe "dad" gives birth: http://bit.ly/2uG3ECD Surprise cringe gender fluid disabled sex worker: http://bit.ly/2uBaY7a
This week, Brianna shares a heartwarming story about burning a Trump guild to ashes, Georgia is displeased with Disney, and Mikah is dismayed about the Google Memo. Steve showed up, which was brave, given that the Hearthstone expansion was released.
This week we depart from our new format and take a deep dive into the 'anti-diversity' Google memo. Join us! Sponsor: Missional Wear - the gift shop for Reformed theology enthusiasts! Topic: The 'Anti-diversity' Google Memo Read the entire memo in full - Gizmodo “Why I Was Fired by Google” by James Damore – Wall Street Journal A Google manufacturing robot believes humans are biologically unfit to have jobs in tech - McSweeney's The New/Old Way Our Culture Pressures Us to Conform - Tim Challies One columnist even thinks “Sundar Pichai should resign” - The New York Times Ways to Contact Us Connect with us in Slack: slack.techreformation.com Visit our website to search for past shows and topics Shout out at us on Twitter at @techreformation! Review us on Apple Podcasts and recommend us on Overcast, or even better - share Tech Reformation with a friend! Music used by special permission of Matthew Parker. Check him out on SoundCloud and iTunes!
Tom, Tim and Cameron discuss 'the Google memo guy' and whether or not he should have been fired; they also weigh up President Trump's options in dealing with North Korea and lastly the boys give their review of 'Shin Godzilla'!!! Music used under creative commons license: "Hungaria" by Latché Swing (http://www.latcheswing.fr/) "Epic" by Bensound (http://www.bensound.com)
This week on The Vergecast, Nilay, Lauren, Dieter, and Paul begin by discussing the controversy over the Google engineer who was fired over writing a 10-page viral memo about diversity. The story illustrates a deeper problem in Silicon Valley, which Lauren has discussed in her podcast recently, so the cast talks about the science of the claims, the responsibly of Google, and what it means in the larger tech industry. In the second half of the show, the crew runs through the latest leaks, releases, and controversies in the gadget world, including Paul’s segment he does every week, “FROYO PODS.” 01:46 - Google engineer fired over memo files labor complaint 33:09 - Consumer Reports stops recommending Microsoft Surface PCs over reliability concerns 37:41 - The new iPhone could have a resizable home button and face recognition for payments 46:22 - 4K Apple TV with HDR spotted in HomePod firmware 48:54 - Essential promises a new phone release date 'in a week' 56:19 - Another Pixel 2 leak shows the phone’s large front bezels 58:14 - Paul’s weekly segment “FROYO PODS” Learn more about your ad choices. Visit megaphone.fm/adchoices
Google’s reputation for openness took a tumble when its CEO fired James Damore, the author of a memo questioning the company’s efforts to achieve gender parity. Amy Webb, founder of the Future Today Institute, blames the internet. She says easy access to data is allowing us to make dumb arguments. In the Spiel, Mike has more thoughts on the Google memo. Guess what? He dislikes it. Learn more about your ad choices. Visit megaphone.fm/adchoices
INTRO 0:23 -Travis Is Trying to Buy A House -Plugs -Bevs Like These -CocaCola Zero Sugar BEYOND THE HEADLINES 8:11 -Google Memo -Eric Bolling -Podcast Patent -Sarahah POLITICS ROUND-UP 54:15-Propaganda Folder -Trump TV -Paul Manafort Raid -RAISE Act -North Korea Wi-Five of the Week 1:36:52 Outro 1:38:54
I felt crappy but Greg stopped in to bail me out. We get you up to date on the Korea insanity, and go over the controversial Google memo. Then it's on to the latest illegal immigrant propaganda, Sinead O'Connor audio, dude blows off vagina with shotgun, furries molest kid dressed as Tony the Tiger. Yes, you read that correctly: Glenn Campbell/"Rhinestone Cowboy"
Friday with Fritz special: Let's talk about the Google "memo-manifesto-of-doom." Music courtesy of bensound.com
08-08-2017 - James Damore and his Google Memo on Diversity (complete) - audio English
CEO Sundar Pichai calls off a public debate about a now-dismissed engineer's views that biological factors may explain why there are fewer women in tech. What's next?
With Aaron away, this is a solo episode of the News Roundup by Jan. Because there's only one of us talking, there are more topics than usual on this week's episode: six, to be precise. These are: the controversial Google memo; Uber board shenanigans; Netflix and Disney's announcements this week; Facebook's new Watch tab; Consumer Reports' decision to withdraw its recommendation of two Surface models from Microsoft; and the conclusion of the recent SoundCloud funding saga. The show notes below include links to all these stories on Tech Narratives and original sources as appropriate. News stories we covered: • Google Memo: Two pieces from Jan: https://www.technarratives.com/2017/08/07/google-employee-pens-memo-arguing-against-diversity-initiatives/ https://www.technarratives.com/2017/08/11/google-cancels-diversity-town-hall-over-harassment-and-threats/ Business Insider on the cancelation of the Town Hall: http://www.businessinsider.com/google-cancels-diversity-town-hall-2017-8 Motherboard's original reporting on the memo: https://motherboard.vice.com/en_us/article/kzbm4a/employees-anti-diversity-manifesto-goes-internally-viral-at-google • Uber Board: Jan's piece on the Benchmark lawsuit: https://www.technarratives.com/2017/08/10/uber-investor-benchmark-sues-kalanick-over-fraud-to-remove-him-from-board/ Axios's scoop on this story: https://www.axios.com/benchmark-capital-sues-travis-kalanick-for-fraud-2471455477.html • Netflix and Disney: Jan's take on the Netflix Millarworld acquisition: https://www.technarratives.com/2017/08/07/netflix-acquires-millarworld-comic-book-company-to-hedge-against-loss-of-marvel/ Jan's take on Disney earnings: https://www.technarratives.com/2017/08/08/★-disney-reports-earnings-will-acquire-bamtech-launch-streaming-services/ Disney's announcements: https://thewaltdisneycompany.com/walt-disney-company-acquire-majority-ownership-bamtech/ Netflix announcement: http://files.shareholder.com/downloads/NFLX/3667621494x0x952713/155017DE-F0EF-47FA-972C-F9B002997979/PDF_Netflix_Acquires_Millarworld_-_Final_1_.pdf • Facebook Watch: Jan's take for Tech Narratives: https://www.technarratives.com/2017/08/09/facebook-launches-watch-a-new-tab-for-video-including-original-content/ Jan's Techpinions column: https://techpinions.com/the-risks-of-facebooks-video-pivot/50781 Facebook's announcement: https://newsroom.fb.com/news/2017/08/introducing-watch-a-new-platform-for-shows-on-facebook/ Variety on some of the new shows: http://variety.com/2017/digital/news/facebook-premium-shows-launch-1202521520/ • Consumer Reports and Surface: Jan's take: https://www.technarratives.com/2017/08/10/consumer-reports-withdraws-surface-recommendations-over-reliability-issues/ USA Today story: https://www.usatoday.com/story/tech/columnist/baig/2017/08/10/consumer-reports-pulls-recommendations-surface-laptops/554302001/ • SoundCloud saga: Jan's take on the latest news: https://www.technarratives.com/2017/08/11/soundcloud-closes-new-investment-shuffles-management-pivots-business/ TechCrunch article: https://techcrunch.com/2017/08/11/soundcloud-saved/ Bloomberg interview: https://www.bloomberg.com/news/articles/2017-08-11/soundcloud-gets-new-life-with-fresh-170-million-investment You can also find the Beyond Devices Podcast on iTunes (https://itunes.apple.com/us/podcast/beyond-devices-podcast/id1002197313), in the Overcast app (https://overcast.fm/itunes1002197313/beyond-devices-podcast), or in your podcast app of choice. As ever, we welcome your feedback via Twitter (@jandawson / @aaronmiller), the website (podcast.beyonddevic.es), or email (jan@jackdawresearch.com).
BR 8-10-17: On this edition of Beyond Reason Radio Yaffee reacts to Trump's tough talk against North Korea and how the reaction to his statements have been BEYOND REASON! He also explains Trump's overall strategy against North Korea and how he is getting support from many different places. ALSO Yaffee talks about the latest reaction to the Google Memo on diversity, the smears against Gorka, and MORE. Listen here now!
Google’s reputation for openness took a tumble when its CEO fired James Damore, the author of a memo questioning the company’s efforts to achieve gender parity. Amy Webb, founder of the Future Today Institute, blames the internet. She says easy access to data is allowing us to make dumb arguments. In the Spiel, Mike has more thoughts on the Google memo. Guess what? He dislikes it. Learn more about your ad choices. Visit megaphone.fm/adchoices
BR 8-10-17: On this edition of Beyond Reason Radio Yaffee reacts to Trump's tough talk against North Korea and how the reaction to his statements have been BEYOND REASON! He also explains Trump's overall strategy against North Korea and how he is getting support from many different places. ALSO Yaffee talks about the latest reaction to the Google Memo on diversity, the smears against Gorka, and MORE. Listen here now!
Conversations about the latest apple news and the infamous Google Memo.
Charlie, Reihan, and Michael Brendan Dougherty discuss the escalating situation in North Korea, and the now infamous "Google Memo."
Jesse and Brittany discuss their Target adventures, listener emails and voicemails related to carnival rides and the Google memo, Pat Robertson's defense of Eric Bolling of Fox News, reports that Donald Trump reads a folder full of positive news about himself twice a day, North Korea's nuclear weapon development and Trump's warning, Donald Trump's foolproof... The post #327 – “Target Adventures, Google Memo, Pat Robertson's Defense of Eric Bolling, Trump's Positivity Folder, North Korean Escalation, Opioid Crisis, and Trump TV.” appeared first on I Doubt It Podcast.
The Google Memo is everywhere, including on the podcast! Comedians Amanda Cohen and Dan D'Aprile come on to talk about the most exciting workplace document since Charles got suspended! Isaac is on board with the memo but others are unconvinced of its value. Is the memo dated misogynist nonsense or does it raise points worth considering? Why are women underrepresented in tech? Are diversity goals benefiting women and minorities or just hurting white men? Also discussed: people counting (good and bad), the butchering of 1984's Two Minutes Hate, and how white men are the Snickers of American culture...? Send your angry emails to chucknjoe@hotmail.com.
You must've heard about the Google "anti-diversity" memo. What the author was clearly missing and where 'scientific' and 'deductive reasoning' completely misses out, and why he shouldn't have been fired. My frustration mostly. Intro: Nirvana - Come As You Are.
Lee Jussim Lee Jussim is a professor of social psychology at Rutgers University and was a Fellow and Consulting Scholar at the Center for Advanced Study in the Behavioral Sciences at Stanford University (2013-15). He has served as chair of the Psychology Department at Rutgers University and has received the Gordon Allport Intergroup Relations Prize, and the APA Early Career Award for Distinguished Contributions to Psychology. He has published numerous articles and chapters and edited several books on social perception, accuracy, self-fulfilling prophecies, and stereotypes. His most recent book, Social Perception and Social Reality: Why Accuracy Dominates Bias and Self-Fulfilling Prophecy, ties that work together to demonstrate that people are far more reasonable and rational, and their judgments are typically far more accurate than social psychological conventional wisdom usually acknowledges. You can follow the twitter account: @PsychRabble for updates from his lab. The author of the Google essay on issues related to diversity gets nearly all of the science and its implications exactly right. Its main points are that: 1. Neither the left nor the right gets diversity completely right; 2. The social … The post The Google Memo: Four Scientists Respond appeared first on Quillette.