Welcome to the 1,839th Spin de Notícias, your daily spin of science news... on a subatomic scale. And in this Spin de Notícias, we'll be talking about... Artificial Intelligence! *This episode, like so many projects to come, was only made possible by the SciCast Patronage. If you'd like more episodes like this, support us!*
Kenneth Stanley is a former Professor of Computer Science at the University of Central Florida and most recently led the Open-Endedness Team at OpenAI. This interview will open your mind on goal setting and success. Kenneth isn't an investor; he's deep in the AI space... and for all of us, understanding where the world is at with AI will definitely help our investing decisions now... and for the future. Book: Why Greatness Cannot Be Planned: The Myth of the Objective - Kenneth O. Stanley *****Looking for an investing book gift for a loved one this Christmas? Order 'Get Started Investing', written by Equity Mates Alec and Bryce. Available on Booktopia and Amazon now! If you want to let Alec or Bryce know what you think of an episode, contact them here. Stay engaged with the Equity Mates community by joining our forum. Make sure you don't miss anything about Equity Mates - visit this page if you want to support our work. Have you just started investing? Listen to Get Started Investing - an Equity Mates series that breaks down all the fundamentals you need to feel confident to start your journey. Want more Equity Mates? Come to our website and subscribe to the Equity Mates Investing Podcast, social media channels, Thought Starters mailing list and more, or check out our YouTube channel. *****In the spirit of reconciliation, Equity Mates Media and the hosts of Equity Mates Investing Podcast acknowledge the Traditional Custodians of country throughout Australia and their connections to land, sea and community. We pay our respects to their elders past and present and extend that respect to all Aboriginal and Torres Strait Islander people today. *****Equity Mates Investing Podcast is a product of Equity Mates Media. This podcast is intended for education and entertainment purposes. Any advice is general advice only and has not taken into account your personal financial circumstances, needs or objectives.
Before acting on general advice, you should consider if it is relevant to your needs and read the relevant Product Disclosure Statement. And if you are unsure, please speak to a financial professional. Equity Mates Media operates under Australian Financial Services Licence 540697. Equity Mates is part of the Acast Creator Network. Hosted on Acast. See acast.com/privacy for more information.
Augmented reveals the stories behind the new era of industrial operations, where technology will restore the agility of frontline workers. In this episode of the podcast, the topic is Human-First AI. Our guest is Christopher Nguyen (https://www.linkedin.com/in/ctnguyen/), CEO and Co-Founder of Aitomatic (https://www.aitomatic.com/). In this conversation, we talk about the why and the how of human-first AI because it seems that digital AI is one thing, but physical AI is a whole other ballgame in terms of finding enough high-quality data to label the data correctly. The fix is to use AI to augment existing workflows. We talk about fishermen at Furuno, human operators in battery factories at Panasonic, and energy optimization at Westinghouse. If you like this show, subscribe at augmentedpodcast.co (https://www.augmentedpodcast.co/). If you like this episode, you might also like Episode 80: The Augmenting Power of Operational Data, with Tulip's CTO, Rony Kubat (https://www.augmentedpodcast.co/80). Augmented is a podcast for industry leaders, process engineers, and shop floor operators, hosted by futurist Trond Arne Undheim (https://trondundheim.com/) and presented by Tulip (https://tulip.co/). Follow the podcast on Twitter (https://twitter.com/AugmentedPod) or LinkedIn (https://www.linkedin.com/company/75424477/). Trond's Takeaway: Physical AI is a much more interesting challenge than pure digital AI. Imagine making true improvements to the way workers accomplish their work, helping them be better, faster, and more accurate. This is the way technology is supposed to work, augmenting humans, not replacing them. In manufacturing, we need all the human workers we can find. As for what happens after the year 2100, I agree that we may have to model what that looks like. But AIs might be even more deeply embedded in the process, that's for sure. Transcript: TROND: Welcome to another episode of the Augmented Podcast.
Augmented brings industrial conversations that matter, serving up the most relevant conversations in industrial tech. Our vision is a world where technology will restore the agility of frontline workers. In this episode of the podcast, the topic is Human-First AI. Our guest is Christopher Nguyen, CEO, and Co-Founder of Aitomatic. In this conversation, we talk about the why and the how of human-first AI because it seems that digital AI is one thing, but physical AI is a whole other ballgame in terms of finding enough high-quality data to label the data correctly. The fix is to use AI to augment existing workflows. We talk about fishermen at Furuno, human operators in battery factories at Panasonic, and energy optimization at Westinghouse. Augmented is a podcast for industrial leaders, process engineers, and for shop floor operators hosted by futurist Trond Arne Undheim and presented by Tulip. Christopher, how are you? And welcome. CHRISTOPHER: Hi, Trond. How are you? TROND: I'm doing great. I thought we would jump into a pretty important subject here on human-first AI, which seems like a juxtaposition of two contradictory terms, but it might be one of the most important types of conversations that we are having these days. I wanted to introduce you quickly before we jump into this. So here's what I've understood, and you correct me if I'm wrong, but you are originally from Vietnam. This is back in the late '70s that you then arrived in the U.S. and have spent many years in Silicon Valley mostly. Berkeley, undergrad engineering, computer science, and then Stanford Ph.D. in electrical engineering. You're a sort of a combination, I guess, of a hacker, professor, builder. Fairly typical up until this point of a very successful, accomplished sort of Silicon Valley immigrant entrepreneur, I would say, and technologist. And then I guess Google Apps is something to point out. 
You were one of the first engineering directors and were part of Gmail, and Calendar, and a bunch of different apps there. But now you are the CEO and co-founder of Aitomatic. What we are here to talk about is, I guess, what you have learned even in just the last five years, which I'm thrilled to hear about. But let me ask you this first, what is the most formational and formative experience that you've had in these years? So obviously, immigrant background and then a lot of years in Silicon Valley, what does that give us? CHRISTOPHER: I guess I can draw from a lot of events. I've always had mentors. I can point out phases of my life and one particular name that was my mentor. But I guess in my formative years, I was kind of unlucky to be a refugee but then lucky to then end up in Silicon Valley at the very beginning of the PC revolution. And my first PC was a TI-99/4A that basically the whole household could afford. And I picked it up, and I have not stopped hacking ever since. So I've been at this for a very long time. TROND: So you've been at this, which is good because actually, good hacking turns out takes a while. But there's more than that, right? So the story of the last five years that's interesting to me because a lot of people learn or at least think they learn most things early. And you're saying you have learned some really fundamental things in the last five years. And this has to do with Silicon Valley and its potential blindness to certain things. Can you line that up for us? What is it that Silicon Valley does really well, and what is it that you have discovered that might be an opportunity to improve upon? CHRISTOPHER: Well, I learn new things every four or five years. I actually like to say that every four or five years, I look back, and I say, "I was so stupid five years ago." [laughs] So that's been the case. TROND: That's a very humbling but perhaps a very smart knowledge acquisition strategy, right? CHRISTOPHER: Yeah. 
And in the most recent five years...so before co-founding Aitomatic, which is my latest project and really with the same team...and I can talk a lot more about that. We've worked with each other for about ten years now. But in the intervening time, there's a four-and-a-half-year block when we were part of Panasonic. So we had a company called Arimo that was acquired by Panasonic for our machine learning AI skills and software. And I would say if you look at my entire history, even though I did start with my degree in semiconductor all the way down to device physics and Intel and so on, but in terms of a professional working career, that was the first time we actually faced the physical world as a Silicon Valley team. And anybody who's observed Silicon Valley in the last 15-20 years, certainly ten years, has seen a marked change in terms of the shift from hardware to software. And my friend Marc Andreessen likes to say, "Software is eating the world." If you look at education, you know, the degrees people are getting, it has shifted entirely from engineering all the way to computer science. And the punch line, I guess, the observation is that we Silicon Valley people do not get physical. We don't understand the manufacturing world. We don't know how to do HVAC and so on. And so when we build software, we tend to go for the digital stuff. TROND: Christopher, it's almost surprising given the initial thrust of Silicon Valley was, of course, hardware. So it's not surprising to me, I guess because I've been observing it as well. But it is striking more than surprising that a region goes through paradigms. CHRISTOPHER: Yeah. Yeah. And it's a global trend. It's the offshoring of low-end, shall we say, low-value manufacturing and so on. And we're discovering that we actually went a little too far. So we don't have the skill set, the expertise anymore. And it's become a geopolitical risk. TROND: Right. Well, a little bit too far, maybe, or not far enough. 
Or, I mean, tell us what it is that you're losing when you lose the hardware perspective, particularly in this day and age with the opportunities that we're about to talk about. CHRISTOPHER: Well, I can talk specifically about the things that touch my immediate spheres. Maybe you can think abstractly about the lack of tooling expertise and manufacturing know-how, and so on. But as part of Panasonic, the acquisition was all about taking a Silicon Valley team and injecting AI, machine learning across the enterprise. And so we were part of that global AI team reporting to the CTO office. And we found out very quickly that a lot of the software techniques, the machine learning, for example, when you think about people saying data is the fuel for machine learning and specifically labeled data, right? In the digital world, the Google place that I came from, it was very easy to launch a digital experiment and collect labels, decisions made by users. You can launch that in the morning, and by evening you're building examples. You can't do that in the physical world. Atoms move a lot more slowly. And so when you try to do something like predictive maintenance, you don't have enough failure examples to train machine learning models from. So all of the techniques, all of the algorithms that we say we developed from machine learning that seem to work so well, it turns out it worked so well because the problem space that we worked on has been entirely digital, and they all fail when it comes to manufacturing, the things that you can touch and feel, you know, cars that move and so on. TROND: I want to ask you this, Christopher, because the first company you helped co-found was, in fact, a contract manufacturer. Do you think that reflecting on this long career of yours and these various experiences, what was it that convinced you before others? 
I mean, you're not the only one now in the Valley that has started to focus on manufacturing and including hardware again, but it is rare still. What does it require to not just think about manufacturing but actually start to do compute for manufacturing? Is it just a matter of coming up with techniques? Or is it a whole kind of awareness that takes longer? So, in your case, you've been aware of manufacturing, acutely aware of it for decades. CHRISTOPHER: I would say there are two things, one is obvious, and the other was actually surprising to me. The obvious one is, of course, knowledge and experience. When we work on sonar technology that shoots a beam down an echogram that comes back to detect fish in the ocean, it's very necessary, not just convenient, but necessary for the engineers that work on that to understand the physics of sound waves travel underwater, and so on. So that education, I have long debates, and it's not just recently. When we were trying to structure a syllabus for a new university, I had long debates with my machine-learning friends, and they said, "We don't need physics." And I said, "We need physics." That's one thing. But you can concretely identify you need to know this. You need to know this. So if you're going to do this, learn the following thing. The thing that was more unexpected for me in the last five years as I sort of sound this bell of saying, hey, we need to modify our approach; we need to optimize our algorithms for this world, is a cultural barrier. It's kind of like the story of if you have a hammer, you want to go look for nails. So Silicon Valley today does not want to look for screwdrivers yet for this world. TROND: So you're saying Silicon Valley has kind of canceled the physical world? If you want to be really sort of parabolic about this, it's like software is eating the world, meaning software is what counts, and it's so efficient. Why go outside this paradigm, basically? 
If there's a problem that apparently can't be fixed by software, it's not a valuable problem. CHRISTOPHER: Or I can't solve that problem with my current approach. I just have to squint at it the right way. I have to tweak the problem this way and so on despite the fact that it's sort of an insurmountable challenge if you tried to do so. And concretely, it is like, just give me enough data, and I'll solve it. And if you don't have enough data, you know what? Go back and get more data. [chuckles] That's what I myself literally said. But people don't have the luxury of going back to get more data. They have to go to market in six months, and so on. TROND: Right. And so manufacturing...and I can think of many use cases where obviously failure, for example, is not something...you don't really want to go looking for more failure than you have or artificially create failure in order to stress test something unless that's a very safe way of doing so. So predictive maintenance then seems like a, I guess, a little bit of a safer space. But what is it about that particular problem that then lends itself to this other approach to automating labeling? Or what exactly is it that you are advocating one should do to bridge to digital and the physical AIs? CHRISTOPHER: I actually disagree that it is a safer space. TROND: Oh, it's not a safer space to you. CHRISTOPHER: That itself there's a story in that, so let's break that down. TROND: Let's do it. CHRISTOPHER: So, again, when I say Silicon Valley, it is a symbol for a larger ecosystem that is primarily software and digital. And when I say we, because I've worn many hats, I have multiple wes, including academia; I've been a professor as well. When we approach the predictive maintenance problem, if you approach it as machine learning, you got to say, "Do this with machine learning," the first thing you ask for...let's say I'm a data scientist; I'm an AI engineer. You have this physical problem. 
It doesn't matter what it is; just give me the dataset. And the dataset must have rows and columns, and the rows are all the input variables. And then there should be some kind of label column. And in this case, it'll be a history of failures of compressors failing, you know, if the variables are such, then it must be the compressor. If the variables are such, it must be the air filter, and so on. And it turns out when you ask for that kind of data, you get ten rows. [laughs] That's not enough to do machine learning on. So then people, you know, machine learning folks who say they've done predictive maintenance, they actually have not done predictive maintenance. That's the twist. What they have done is anomaly detection, which machine learning can do because, with anomaly detection, I do not need that failure label. Just give me all the sensor data. What anomaly detection really does is it learns the normal patterns. If you give it a year's worth of data, it'll say, okay, now I've seen a year's worth of data. If something comes along that is different from the past patterns, I will tell you that it's different. That's only halfway to predictive maintenance. That is detecting that something is different today. That is very different from predicting that, hey, that compressor is likely to fail about a month from now. And when we were part of Panasonic, we solved it exactly the way I've described. We did it with anomaly detection. And then we threw it over the wall to the engineer experts and said, "Well, now that you have this alert, go figure out what may be wrong." And half of the time, they came back and said, "Oh, come on, it was just a maintenance event. Why are you bothering me with this?" TROND: But, Christopher, leveraging human domain expertise sounds like a great idea. But it can't possibly be as scalable as just leveraging software. So how do you work with that?
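The distinction Christopher draws, learning a year of normal patterns and flagging deviations rather than predicting a specific failure, can be sketched minimally. This is an illustrative example only, not Panasonic's actual system; the rolling z-score method, window size, and threshold are all assumptions:

```python
import numpy as np

def detect_anomalies(readings, window=100, threshold=4.0):
    """Flag readings that deviate strongly from the recent normal pattern.

    A reading is anomalous when its z-score against the trailing window
    exceeds `threshold`. Note what this does NOT do: it says "today looks
    different", not "this compressor will fail a month from now" -- which
    is exactly the gap between anomaly detection and predictive maintenance.
    """
    readings = np.asarray(readings, dtype=float)
    flags = np.zeros(len(readings), dtype=bool)
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flags[i] = True
    return flags
```

A detector like this fires on maintenance events just as readily as on incipient failures, which is why, as described above, half the alerts came back as "it was just a maintenance event."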
And what are the gains that you're making? CHRISTOPHER: I can show you the messenger exchange I had with another machine-learning friend of mine who said exactly the same thing yesterday, less than 24 hours ago. TROND: [laughs] CHRISTOPHER: He said, "That's too labor-intensive." And I can show you the screen. TROND: And how do you disprove this? CHRISTOPHER: Well, [chuckles] it's not so much disproving, but the assumption that involving humans is labor-intensive is only true if you can't automate it. So the key is to figure out a way, and 10-20 years ago, there was limited technology to automate or extract human knowledge, expert systems, and so on. But today, technologies...the understanding of natural language and so on, machine learning itself has enabled that. That turns out to be the easier problem to solve. So you take that new tool, and you apply it to this harder physical problem. TROND: So let's go to a hard, physical problem. You and I talked about this earlier, and let's share it with people. So I was out fishing in Norway this summer. And I, unfortunately, didn't get very much fish, which obviously was disappointing on many levels. And I was a little surprised, I guess, of the lack of fish, perhaps. But I was using sonar to at least identify different areas where people had claimed that there were various types of fish. But I wasn't, I guess, using it in a very advanced way, and we weren't trained there in the boat. So we sort of had some sensors, but we were not approaching it the right way. So that helped me...and I know you work with Furuno, and Garmin is the other obvious player in this. So fish identification and detection through sonar technology is now the game, I guess, in fishery and, as it turns out, even for individuals trying to fish these days. What is that all about? And how can that be automated, and what are the processes that you've been able to put in place there? CHRISTOPHER: By the way, that's a perfect segue into it.
I can give a plug perhaps for this conference that I'm on the organizing committee called Knowledge-First World. And Furuno is going to be presenting their work exactly, talking a lot about what you're talking about. This is kind of coming up in November. It is the first conference of its kind because this is AI Silicon Valley meets the physical world. I think you're talking about the fish-finding technology from companies like Furuno, and they're the world's largest market share in marine navigation and so on. And the human experts in this are actually not even the engineers that build these instruments; it's the fishermen, right? The fishermen who have been using this for a very long time combine it with their local knowledge, you know, warm water, cold water, time of day, and so on. And then, after a while, they recognize patterns that come back in this echogram that match mackerel, or tuna, or sardines, and so on. And Furuno wants to capture that knowledge somehow and then put that model into the fish-finding machine that you and I would hold. And then, instead of seeing this jumbled mess of the echogram data, we would actually see a video of fish, for example. It's been transformed by this algorithm. TROND: So, I mean, I do wish that we lived in a world where there was so much fish that we didn't have to do this. But I'm going to join your experiment here. And so what you're telling me is by working with these experts who are indeed fishermen, they're not experts in sonar, or they're not experts in any kind of engineering technology, those are obviously the labelers, but they are themselves giving the first solutions for how they are thinking about the ocean using these technologies. And then somehow, you are turning that into an automatable, an augmented solution, essentially, that then can find fish in the future without those fishermen somehow being involved the next time around because you're building a model around it. 
CHRISTOPHER: I'll give you a concrete explanation, a simplified version of how it works, without talking about the more advanced techniques that are proprietary to Furuno. The conceptual approach is very, very easy to understand, and I'll talk about it from the machine learning perspective. Let's say if I did have a million echograms, and each echogram, each of these things, even 100,000, is well-labeled. Somebody has painstakingly gone through the task of saying, okay, I'm going to circle this, and that is fish. And that is algae, and that's sand, and that's marble. And by the way, this is a fish, and this is mackerel, and so on. If somebody has gone through the trouble of doing that, then I can, from a human point of view, just run an algorithm and train it. And then it'll work for that particular region, for that particular time. Okay, well, we need to go collect more data, one for Japan, the North Coast, and one for Southwestern. So that's kind of a lot of work to collect essentially what this pixel data is, this raw data. When you present it to an experienced fisherman, he or she would say, "Well, you see these bubbles here, these circles here with a squiggly line..." So they're describing it in terms of human concepts. And then, if you sit with them for a day or two, you begin to pick up these things. You don't need 100,000-pixel images. You need these conceptual descriptions. TROND: So you're using the most advanced AI there is, which is the human being, and you're using them working with these sonar-type technologies. And you're able to extract very, very advanced models from it. CHRISTOPHER: The key technology punch line here is if you have a model that understands the word circle and squiggly line, which we didn't before, but more recently, we begin to have models, you know, there are these advances called large language models. You may have heard of GPT-3 and DALL-E and so on, you know, some amazing demonstrations coming out of OpenAI and Google. 
In a very simplified way, we have models that understand the world now. They don't need raw pixels. These base models are trained from raw pixels, but then these larger models understand concepts. So then, we can give directions at this conceptual level so that they can train other models. That's sort of the magic trick. TROND: So it's a magic trick, but it is still a difficult world, the world of manufacturing, because it is physical. Give me some other examples. So you worked with Panasonic. You're working with Furuno in marine navigation there and fishermen's knowledge. How does this work in other fields like robotics, or with car manufacturing, or indeed with Panasonic with kind of, I don't know, battery production or anything that they do with electronics? CHRISTOPHER: So, to give you an example, you mentioned a few things that we worked on, you know, robotics in manufacturing, robotics arm, sort of the manufacturing side, and the consistency of battery sheets coming off the Panasonic manufacturing line in Sparks, Nevada as well as energy optimization at Westinghouse. They supply into data centers, and buildings, and so on. And so again, in every one of these examples, you've got human expertise. And, of course, this is much more prevalent in Asia because Asia is still building things, but some of that is coming back to the U.S. There are usually a few experts. And by the way, this is not about thousands of manufacturing line personnel. This is about three or four experts that are available in the entire company. And they would be able to give heuristics. They will be able to describe at the conceptual level how they make their decisions. And if you have the technology to capture that in a very efficient way, again, coming back to the idea that if you make them do the work or if you automate their work, but in a very painstaking way like thousands of different rules, that's not a good proposition.
But if you have some way to automate the automation, automate the capturing of that knowledge, you've got something that can bridge this physical, digital divide. MID-ROLL AD: In the new book from Wiley, Augmented Lean: A Human-Centric Framework for Managing Frontline Operations, serial startup founder Dr. Natan Linder and futurist podcaster Dr. Trond Arne Undheim deliver an urgent and incisive exploration of when, how, and why to augment your workforce with technology, and how to do it in a way that scales, maintains innovation, and allows the organization to thrive. The key thing is to prioritize humans over machines. Here's what Klaus Schwab, Executive Chairman of the World Economic Forum, says about the book: "Augmented Lean is an important puzzle piece in the fourth industrial revolution." Find out more on www.augmentedlean.com, and pick up the book in a bookstore near you. TROND: How stable is that kind of model knowledge? Because I'm just thinking about it in the long run here, are these physical domain experts that are giving up a little bit of their superpower are they still needed then in a future scenario when you do have such a model? Or will it never be as advanced as they are? Or is it actually going to be still kind of an interface that's going to jump between machines and human knowledge kind of in a continuous loop here? CHRISTOPHER: Yeah, in the near term, it turns out we're not working on replacing experts as much as scaling experts. Almost every case we've worked on, companies are in trouble largely because the experts are very, very few and far between, and they're retiring. They're leaving. And that needs to be scaled somehow. In the case of, for example, the cold chain industry all of Japan servicing the supermarkets, you know, there's 7-ELEVEN, there's FamilyMart, and so on, there are three experts who can read the sensor data and infer what's likely to fail in the next month. 
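One way to capture a few experts' heuristics without writing thousands of painstaking rules is to encode them as programmatic labeling rules that weakly label raw data at scale, in the spirit of weak supervision. This is a hypothetical sketch, not Aitomatic's or Furuno's actual method; every feature name and threshold below is invented for illustration:

```python
# Hypothetical: expert fish-finding heuristics expressed as labeling rules.
# Feature names (bubble_density, trail_waviness, ...) are made up for this
# sketch; a real system would derive them from echogram signal processing.

MACKEREL, SARDINE, UNKNOWN = "mackerel", "sardine", "unknown"

def rule_tight_bubbles(f):
    # Expert heuristic: "tight circles with a squiggly trailing line" -> mackerel
    if f["bubble_density"] > 0.7 and f["trail_waviness"] > 0.5:
        return MACKEREL
    return UNKNOWN

def rule_shallow_warm(f):
    # Expert heuristic: dense shallow schools in warm water -> sardine
    if f["depth_m"] < 30 and f["water_temp_c"] > 18 and f["school_density"] > 0.6:
        return SARDINE
    return UNKNOWN

RULES = [rule_tight_bubbles, rule_shallow_warm]

def weak_label(features):
    """Apply every expert rule; keep a label only when all firing rules agree.

    Labels produced this way can then train an ordinary supervised model,
    so a handful of experts effectively label an entire dataset.
    """
    votes = [r(features) for r in RULES if r(features) != UNKNOWN]
    if not votes:
        return UNKNOWN
    return votes[0] if all(v == votes[0] for v in votes) else UNKNOWN
```

The design choice is the point: the experts state a handful of conceptual rules once, and automation fans those rules out over raw data, which is how a few retiring specialists can be scaled rather than replaced.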
So in the near term, it's really we need these humans, and we need more of them. TROND: I'm glad to hear that even that is a bit of a contrarian message. So you're saying physical infrastructure and the physical world matters. You're saying humans matter. [laughs] It's interesting. Yeah, that's contrarian in Silicon Valley, I'll tell you that. CHRISTOPHER: It is. And, in fact, related to that problem, Hussmann, which is a commercial refrigeration company that supplies to supermarkets, was a subsidiary of Panasonic. It has a really hard time getting enough service personnel, and they have to set up their own universities, if you will, to train them. And these are jobs that pay very well. But everybody wants to be in software these days. Coming back to the human element, I think that long-term I'm an optimist, not a blind optimist but a rational one. I think we're still going to need humans to direct machines. The machine learning stuff is data that reflects the past, so patterns of the past, and you try to project that in the future. But we're always trying to effect some change to the status quo. Tomorrow should be a better day than today. So it's that human intent that is still, at least at present, lacking in machines. And so we need humans to direct that. TROND: So what is the tomorrow of manufacturing then? How fast are we going to get there? Because you're saying, well, Silicon Valley has a bit of a learning journey. But there is language model technology or progress in language models that now can be implemented in software and, through humans, can be useful in manufacturing already today. And there are scattered examples, and you're putting on an event to show this. What is the path forward here, and how long is this process? And will it be an exponential kind of situation here where you can truly integrate amazing levels of human insight into these machine models? Or will it take a while of tinkering before you're going to make any breakthroughs?
Because one thing is the breakthrough in understanding human language, but what you're saying here is even if you're working only with a few experts, you have to take domain by domain, I'm assuming, and build these models, like you said, painstakingly with each expert in each domain. And then, yes, you can put that picture together. But the question is, how complex of a picture is it that you need to put together? Is it like mapping the DNA, or is it bigger? Or what kind of a process are we looking at here? CHRISTOPHER: If we look at it from the dimension of, say, knowledge-based automation, in a sense, it is a continuation. I believe everything is like an s-curve. So there's acceleration, and then there's maturity, and so on. But if you look back in the past, which is sort of instructive for the future, we've always had human knowledge-based automation. I remember the first SMT, the Surface Mount Technology, SMT wave soldering machine back in the early '90s. That was a company that I helped co-found. It was about programming the positioning of these chips that would just come down onto the solder wave. And that was human knowledge for saying, move it up half a millimeter here and half a millimeter there. But of course, the instructions there are very micro and very specific. What machine learning is doing...I don't mean to sort of bash machine learning too much. I'm just saying culturally, there's this new tool really that has come along, and we just need to apply the tool the right way. Machine learning itself is contributing to what I described earlier, that is, now, finally, machines can understand us at the conceptual level that they don't have to be so, so dumb as to say, move a millimeter here, and if you give them the wrong instruction, they'll do exactly that. But we can communicate with them in terms of circles and lines, and so on. So the way I see it is that it's still a continuous line. 
But what we are able to automate, what we're able to ask our machines to do, is accelerating in terms of their understanding of these instructions. So if you can imagine what would happen when this becomes, let's say, ubiquitous, the ability to do this, and I see this happening over the next...Certainly, the base technology is already there, and the application always takes about a decade. TROND: Well, the application takes a decade. But you told me earlier that humans should at least have this key role in this knowledge-first application approach until 2100, you said, just to throw a number out there. That's, to some people, really far away. But the question is, what are you saying comes after that? I know you threw that number out. But if you are going to make a distinction between a laborious process of painful progress that does progress, you know, in each individual context where you have applied it to a human, labeled it, and understood a little case, what are we looking at, whether it is 2100, 2075, or 2025? What will happen at that moment? And is it really a moment that you're talking about when machines suddenly will grasp something very, very generic, sort of the good old moment of singularity, or are you talking about something different? CHRISTOPHER: Yeah, I certainly don't think it's a moment. And, again, the HP-11C has always calculated Pi far faster and with more digits than I have. So in that sense, in that particular narrow sense, it's always been more intelligent than I am. TROND: Yeah. Well, no one was questioning whether a calculator could do better calculations than a human. For a long time -- CHRISTOPHER: Hang on. There's something more profound to think about because we keep saying, well, the minute we do something, it's okay; that's not intelligence. But what I'm getting to is the word that I would refer to is hyper-evolution. So there's not a replacement of humans by machines. 
There's always been augmentation, and intelligence is not going to be different. It is a little disturbing to think about for some of us, for a lot of us, but it's not any different from wearing my glasses. Or I was taking a walk earlier this morning listening to your podcast, and I was thinking how a pair of shoes as an augmented device would seem very, very strange to humans living, say, 500 years ago, the pair of shoes that I was walking with. So I think in terms of augmenting human intelligence, there are companies that are working on plugging in to the degree that that seems natural or disturbing. It is inevitable. TROND: Well, I mean, if you just think about the internet, which nowadays, it has become a trope to think about the internet. I mean, not enough people think about the internet as a revolutionary technology which it, of course, is and has been, but it is changing. But whether you're thinking about shoes, or the steam engine, or nuclear power, or whatever it is, the moment it's introduced, and people think they understand it, which most people don't, and few of us do, it seems trivial because it's there. CHRISTOPHER: That's right. TROND: But your point is until it's there, it's not trivial at all. And so the process that you've been describing might sound trivial, or it might sound complex, but the moment it's solved or is apparently solved to people, we all assume that was easy. So there's something unfair about how knowledge progresses, I guess. CHRISTOPHER: That's right. That's right. We always think, yeah, this thing that you describe or I describe is very, very strange. And then it happens, and you say, "Of course, that's not that interesting. Tell me about the future." TROND: Well, I guess the same thing has happened to cell phones. They were kind of a strange thing that some people were using. It was like, okay, well, how useful is it to talk to people without sitting by your desk or in the corner of your house? 
CHRISTOPHER: I totally remember when we were saying, "Why the hell would I want to be disturbed every moment of the day?" [laughs] I don't want the phone with me, and now I -- TROND: Right. But then we went through the last decade or so where we were saying, "I can't believe my life before the phone." And then maybe now the last two, three years, I would say a lot of people I talk to or even my kids, they're like, "What's the big deal here? It's just a smartphone," because they live with a smartphone. And they've always had it. CHRISTOPHER: They say, "How did you get around without Google Maps?" And then somebody says, "We used maps." And I said, "Before Google Maps." [laughter] TROND: Yeah. So I guess the future here is an elusive concept. But I just want to challenge you one more time then on manufacturing because manufacturing, for now, is a highly physical exercise. And, of course, there's virtual manufacturing as well, and it builds on a lot of these techniques and machine learning and other things. How do you see manufacturing as an industry evolve? Is it, like you said, for 75 years, it's going to be largely very recognizable? Is it going to look the same? Is it going to feel the same? The management structure, the way engineers are approaching it, and the way workers are working? Are we going to recognize all these things? Or is it going to be a little bit like the cell phone, and we're like, well, of course, it's different. But it's not that different, and it's not really a big deal to most people. CHRISTOPHER: Did you say five years or 50 years? TROND: Well, I mean, you give me the timeframe. CHRISTOPHER: Well, in 5 years, we will definitely recognize it, but in 50 years, we will not. TROND: In 50 years, it's going to be completely different, look different, feel different; factories are all going to be different. CHRISTOPHER: Right, right. I mean, the cliché is that we always overestimate what happens in 5 and underestimate what happens in 50. 
But the trend, though, is there's this recurring bundling and unbundling of industries; it's a cycle. Some people think it's just, you know, they live ten years, and they say it's a trend, but it actually goes back and forth. But there's sort of increasing specialization of expertise. So, for example, the supply chain over the last 30 years, we got in trouble because of that because it has become so discrete, if you want to use one friendly word, but you can also say fragmented, in another word. Like, everybody has been focused on just one specialization, and then something like COVID happens, and then, oh my God, that was all built very precisely for a particular way of living. And nobody's in the office anymore, and we live at home, and that disrupts the supply chain. I think if you project 50 years out, we will learn to essentially matrix the whole industry. You talked about the management of these things. The whole supply chain, from branding all the way down to raw materials, is it better to be completely vertically integrated or to be part of this whole mesh network? I think the future is going to be far more distributed. But there'll be fits and starts. TROND: So then my last question is, let's say I buy into that. Okay, let's talk about that for a second; the future is distributed or decentralized, whatever that means. Does that lessen or make globalization even more important, and global standardization, I guess, across all geographical territories? I'm just trying to bring us back to where you started, which was in the U.S., Silicon Valley optimized for software and started thinking that software was eating the world. But then, by outsourcing all of the manufacturing to Asia, it forgot some essential learning, which is that when manufacturing evolves, the next wave looks slightly different. And in order to learn that, you actually need to do it. So does that lesson tell you anything about how the next wave of matrix or decentralization is going to occur? 
Is it going to be...so one thought would be that it is physically distributed, but a lot of the insights are still shared. So, in other words, you still need global insight sharing, and all of that is happening. If you don't have that, you're going to have pockets that are...they might be very decentralized and could even be super advanced, but they're not going to be the same. They're going to be different, and they're going to be different paths and trajectories in different parts of the world. How do you see this? Do you think that our technology paradigms are necessarily converging along the path of some sort of global master technology and manufacturing? Or are we looking at scattered different pictures that are all decentralized, but yet, I don't know, from a bird's eye view, it kind of looks like a matrix? CHRISTOPHER: I think your question is broader than just manufacturing, although manufacturing is a significant example of that, right? TROND: It's maybe a key example and certainly under-communicated. And on this podcast, we want to emphasize manufacturing, but you're right, yes. CHRISTOPHER: The word globalization is very loaded. There's the supposedly positive effect in the long run. But who is it that said...is it Keynes that said, "In the long run, we're all dead?" [laughs] In the short run, the dislocations are very real. A skill set of a single human being can't just shift from hardware to software, from manufacturing to AI, within a few months. But I think your question is, let's take it seriously on a scale of, say, decades. I think about it in terms of value creation. There will always be some kind of disparity. Nature does not like uniformity. Uniformity is coldness; it is death. There have to be some gradients. You're very good at something; I'm very good at something else. And that happens at the scale of cities and nations as well. TROND: And that's what triggers trade, too, right? CHRISTOPHER: Exactly. 
TROND: Because if we weren't different, then there would be no incentive to trade. CHRISTOPHER: So when we think about manufacturing coming back to the U.S., and we can use the word...it is correct in one sense, but it's incorrect in another sense. We're not going back to manufacturing that I did. We're not going back to surface mount technology. In other words, the value creation...if we follow the trajectory of manufacturing alone and try to learn that history, what happens is that manufacturing has gotten better and better. Before, we were outsourcing the cheap stuff. We don't want to do that. But then that cheap stuff, you know, people over there build automation and skills, and so on. And so that becomes actually advanced technology. So in a sense, what we're really doing is we're saying, hey, let's go advanced at this layer. I think it's going to be that give and take of where value creation takes place, of course, layered with geopolitical issues and so on. TROND: I guess I'm just throwing in there the wedge that you don't really know beforehand. And it was Keynes, the economist, that said that the only thing that matters is the short term because, in the end, we are all dead eventually. But the point is you don't really know. Ultimately, what China learned from manufacturing pretty pedestrian stuff turned out to be really fundamental in the second wave. So I'm just wondering, is it possible to preempt that because you say, oh, well, the U.S. is just going to manufacture advanced things, and then you pick a few things, and you start manufacturing them. But if you're missing part of the production process, what if that was the real advancement? I guess that is what happened. CHRISTOPHER: Okay. So when I say that, I think about the example of my friend who spent, you know, again, we were a Ph.D. group at Stanford together. And whereas I went off to academia and did startups and so on, he stayed at Intel for like 32 years. 
He's one of the world's foremost experts in semiconductor process optimization. So that's another example where human expertise, even though semiconductor manufacturing is highly automated, you still need these experts to actually optimize these things. He's gone off to TSMC after three decades of being very happy at one place. So what I'm getting to is it is actually knowable what are the secret recipes, where the choke points are, what matters, and so on. And interestingly, it does reside in the human brain. But when I say manufacturing coming back to the U.S. and advanced manufacturing, we are picking and choosing. We're doing battery manufacturing. We're doing semiconductor, and we're not doing wave soldering. So I think it is possible to also see this trend that anybody who's done something and going through four or five iterations of that for a long time will become the world's expert at it. I think that is inevitable. You talk of construction, for example; interestingly, this company in Malaysia that is called Renong that is going throughout Southeast Asia; they are the construction company of the region because they've been doing it for so long. I think that is very, very predictable, but it does require the express investment in that direction. And that's something that Asia has done pretty well. TROND: Well, these are fascinating things. We're not going to solve them all on this podcast. But definitely, becoming an expert in something is important, whether you're an individual, or a company, or a country for sure. What that means keeps changing. So just stay alert, and stay in touch with both AI and humans and manufacturing to boot. It's a mix of those three, I guess. In our conversation, that's the secret to unlocking parts of the future. Thank you, Christopher, for enlightening us on these matters. I appreciate it. CHRISTOPHER: It's my pleasure. TROND: You have just listened to another episode of the Augmented Podcast with host Trond Arne Undheim. 
The topic was Human-First AI. Our guest was Christopher Nguyen, CEO and Co-Founder of Aitomatic. In this conversation, we talked about the why and the how of human-first AI because it seems that digital AI is one thing, but physical AI is a whole other ballgame. My takeaway is that physical AI is a much more interesting challenge than pure digital AI. Imagine making true improvements to the way workers accomplish their work, helping them be better, faster, and more accurate. This is the way technology is supposed to work, augmenting humans, not replacing them. In manufacturing, we need all the human workers we can find. As for what happens after the year 2100, I agree that we may have to model what that looks like. But AIs might be even more deeply embedded in the process, that's for sure. Thanks for listening. If you liked the show, subscribe at augmentedpodcast.co or in your preferred podcast player, and rate us with five stars. If you liked this episode, you might also like Episode 80: The Augmenting Power of Operational Data, with Tulip's CTO, Rony Kubat, as our guest. Hopefully, you'll find something awesome in these or in other episodes, and if so, do let us know by messaging us. We would love to share your thoughts with other listeners. The Augmented Podcast is created in association with Tulip, the frontline operations platform that connects the people, machines, devices, and systems used in a production and logistics process in a physical location. Tulip is democratizing technology and empowering those closest to operations to solve problems. Tulip is also hiring. You can find Tulip at tulip.co. Please share this show with colleagues who care about where industry is going, and especially about how industrial tech is going. Finding us on social media is easy; we are Augmented Pod on LinkedIn and Twitter and Augmented Podcast on Facebook and on YouTube. Augmented — industrial conversations that matter. See you next time. Special Guest: Christopher Nguyen.
With Taro Minowa as our guest, we talked about Twitter, Mastodon, Rewind, Whisper, and more. Show Notes Elon Musk says he fired engineer who corrected him on Twitter Hundreds of Twitter employees resign after Elon Musk's ‘hardcore' ultimatum "Tweeps! What's the craziest infra incident you worked on at Twitter" Twitter skips most of 2015, locks users out of Android app Find Fediverse accounts of your Twitter followings How Twitter Works Evernote bought by Italian app maker Bending Spoons Rewind wants to revamp how you remember 君を愛したひとりの僕へ (ハヤカワ文庫JA) | 乙野 四方字 僕が愛したすべての君へ (ハヤカワ文庫JA) | 乙野 四方字 openai/whisper: Robust Speech Recognition via Large-Scale Weak Supervision ggerganov/whisper.cpp: Port of OpenAI's Whisper model in C/C++ "It's amazing that these 「」 brackets get inserted automatically."
Matt Healy says your next smart device is a $30 Kindle, Changelog sets up an instance as Mastodon takes off, Anurag Bhagsain puts OpenAI's GPT-3 in your CLI, Kirill Rogovoy argues that no architecture is better than bad architecture & we talk to Mish Manners at All Things Open 2022.
Lenny's Podcast: Product | Growth | Career. Check out Lenny's Podcast Episode Page & Show Notes. Read the full notes @ podcastnotes.org. Matt Mochary, CEO of Mochary Method, is a full-time executive coach who has worked with some of the biggest names in tech and finance, including investor Naval Ravikant and the CEOs of Notion, OpenAI, Coinbase, Reddit, and many others. In today's podcast, we talk about the skill of firing people, why it's so important, and Matt's framework for approaching layoffs. We go deep on recognizing emotions like anger and fear, and what to pay attention to when you feel angry or fearful. He also shares how to build new products within a larger company, important tips on how to make sure everyone in the organization feels valued and heard, carving out time for your top goal, and how an energy audit can help you eliminate tasks that are draining your energy.—Find the full transcript here: https://www.lennyspodcast.com/how-to-fire-people-with-grace-work-through-fear-and-nurture-innovation-matt-mochary-ceo-coach/#transcript—Where to find Matt Mochary:• Twitter: https://twitter.com/mattmochary• LinkedIn: https://www.linkedin.com/in/matt-mochary-34bb4/• Website: http://www.mochary.com/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• Twitter: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—Thank you to our wonderful sponsors for making this episode possible:• AssemblyAI: https://www.assemblyai.com/?utm_source=lennyspodcast&utm_medium=podcast&utm_campaign=nov10• Lemon.io: https://lemon.io/lenny• Vanta: https://vanta.com/lenny—Referenced:• The Great CEO Within: The Tactical Guide to Company Building: https://www.amazon.com/Great-CEO-Within-Tactical-Building-ebook/dp/B07ZLGQZYC• Mochary Method: https://mocharymethod.org/• Leo Polovets on Twitter: https://twitter.com/lpolovets• High Output Management: 
https://www.amazon.com/High-Output-Management-Andrew-Grove/dp/0679762884• The Hard Thing About Hard Things: Building a Business when There Are No Easy Answers: https://www.amazon.com/Hard-Thing-About-Things-Building/dp/0062273205• Andrej Karpathy on Lex Fridman's podcast: https://lexfridman.com/andrej-karpathy/• Wei Deng on LinkedIn: https://www.linkedin.com/in/dengwei/• Free Solo: https://films.nationalgeographic.com/free-solo• Ryan Hoover on Twitter: https://twitter.com/rrhoover• Essentialism: The Disciplined Pursuit of Less: https://gregmckeown.com/books/essentialism/• Make Time: How to Focus on What Matters Every Day: https://www.amazon.com/Make-Time-Focus-Matters-Every/dp/0525572422• Centered app: https://www.centered.app/• Diana Chapman at Conscious Leadership Group: https://conscious.is/team/diana-chapman• The Mochary Method curriculum doc: https://docs.google.com/document/d/18FiJbYn53fTtPmphfdCKT2TMWH-8Y2L-MLqDk-MFV4s/edit—In this episode, we cover:(04:43) Matt's background(07:39) Areas where even very successful founders struggle(12:24) How to address people to minimize defensiveness(13:24) The destructive nature of anger and how to feel your feelings so you don't hurt others(15:02) Which books led Matt to his coaching journey and software platform(19:03) When and how to let an employee go(31:47) How to make people feel heard(38:05) How Matt's coaching has evolved to include psychological obstacles to success(39:41) What is “top goal,” and how can it help you make massive gains?(41:25) Why Matt has an accountability partner for his top goal time(43:44) How to approach mass layoffs humanely(53:21) Matt's thoughts on the Twitter layoffs(54:10) How to innovate within a large company(1:01:53) How to do an energy audit—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email firstname.lastname@example.org. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Elon Musk, owner and CEO of Twitter, founder and CEO at SpaceX and Tesla, and co-founder of Neuralink and OpenAI, delivers one of the best motivational speeches you will ever hear.SpeakerElon MuskElon Musk is the new owner and CEO of Twitter, the CEO at SpaceX, and CEO and co-founder of Tesla, founder of The Boring Company and co-founder of Neuralink and OpenAI. A centibillionaire, Musk is one of the richest people in the world.Follow Elon:https://twitter.com/elonmuskMusic:Audiojungle Hosted on Acast. See acast.com/privacy for more information.
Wojciech is a Co-Founder of OpenAI, where he leads both the Codex research and language teams. OpenAI is an AI research and deployment company aiming to develop and ensure that artificial general intelligence is safe and benefits all of humanity. OpenAI is known for developing GPT-3, a natural language model that creates human-like text, and DALL-E, a machine learning model to generate digital images from natural language descriptions. Follow Wojciech on Twitter @woj_zaremba. [2:18] - Wojciech's childhood interest in science experiments and explosives [11:14] - How following his curiosities guided Wojciech to AI [21:33] - Defining consciousness in the presence of AI [27:15] - The Alignment Problem - how to ensure AI does what humans want it to [45:13] - Why Wojciech believes AI augmentation can be powerful, and help to further advance AI itself [49:52] - The shortcomings of AI in communication and decision-making and how these will influence future development paths [1:03:11] - AI's opportunity to expand human empathy [1:10:40] - OpenAI's role in making AI a widely-distributed public good [1:18:10] - How Wojciech views other organizations' approaches to developing AI for mass consumption --- Support the show by checking out my sponsors: Join Levels and get personalized insights to learn about your metabolic health. Go to https://levels.link/jake. --- https://homeofjake.com
Twitter Now Asks Some Fired Workers to Please Come Back Twitter Employees File Class-Action Lawsuit for Mass Layoffs with No Notice Elon Musk’s response to fake verified Elon Twitter accounts: a new permanent ban policy for impersonation Twitter lost more than 1.3 million users in the week after Elon Musk bought it Facebook Parent Meta Is Preparing to Notify Employees of Large-Scale Layoffs This Week Signal is the latest app to roll out a Stories feature ‘Batteries on wheels’: EV buses could help decarbonize more than just transit Atlanta Public School gets federal funding for 25 electric buses AirBnB Displaying total price and simplifying checkout AirBnB thread from CEO New Startup Develops Potential Cure for Type 1 Diabetes Amazon’s leaked ‘Clinic’ would connect patients to telemedicine The lawsuit against Microsoft, GitHub and OpenAI that could change the rules of AI copyright Weird and Wacky: Oculus Founder Builds VR Headset That Kills User If They Die in Game Your next car might not drive if you've been drinking Tech Rec: Sanjay - Lose the Very Adam - Turtle Beach Stealth 600 Gen 2 Tech Talk Y’all is a production of Edgewise.Media --- Send in a voice message: https://anchor.fm/techtalkyall/message
Today is a day for "Influencers da Ciência," a spin-off of the podcast "Intervalo de Confiança." In this program, we highlight influencers who genuinely contributed something positive to society, those who expanded the frontiers of scientific knowledge and enabled the development of many fields. And today's topic was chosen by our listeners! In this episode, Igor Alcantara discusses the life and work of one of the most important scientists of the 19th century and of the entire history of science. Unlike other famous scientists, Michael Faraday never even attended a university. Coming from humble origins, his first contact with a laboratory was a job cleaning its glassware and windows. From there, he expanded our understanding of nature and the universe. Turn up the volume and join us on this incredible journey of perseverance and knowledge. The script was written by Tatiane do Vale. Editing was done by Leo Oliveira, and the episode artwork was created by OpenAI's DALL-E artificial intelligence. Editorial coordination is by Tatiane do Vale, and project and social media management is by Kézia Nogueira. The theme music for all episodes was composed by Rafael Chino and Leo Oliveira. Visit our website at https://intervalodeconfianca.com.br/
The GOAT himself, Jim O'Shaughnessy is back for a record fourth appearance on Panic with Friends! And this time he's slightly panicked by not being panicked. I convinced him to put on some slippers and start showing up for work. The last time he was on, we talked about the real panic; I'd come back from vacation fired out of a cannon – only wanting to talk to my close friends or pimp some of our companies. But today I've got a lot to talk to Jim about. He was guest number one on the show, and he was on just a few weeks ago – since then the Fed's hiked another 75 basis points, Facebook's imploded, and we have a new owner at Twitter. So sure, it makes sense if Jim might be a little panicked about not being panicked. Once again, Jim is the calm voice of reason helping me navigate the turbulent times we've found ourselves in. I hope you learn as much from our conversation as I did. Guest - Jim O'Shaughnessy, Founder, Chairman & Co-Chief Investment Officer, OSAM LLC howardlindzon.com, osam.com, infiniteloopspodcast.com Twitter: @howardlindzon , @jposhaughnessy, @PanicwFriends, @knutjensen linkedin.com/in/james-o-shaughnessy-b072bb #fintech #invest #investment #venturecapital #stockmarket #finance Show Notes: Introduction (00:41) Welcome Jim (05:29) Strategy & Tactics (07:46) Decision time (10:30) Elon Musk (12:58) Fixing Twitter (13:39) Dealing with trolls (17:26) The real reason Elon bought Twitter (23:06) Excited about Artificial Intelligence (27:10) AI ethics (30:02) AI in medical research (31:45) AI legal use case (33:15) An Open AI is critical (36:05) First $1B single creator LLC (36:34) Old Models are collapsing (37:25) A “Thiel-style” scholarship (39:09) A revolution in Education (45:12) A real meritocracy (45:56) Human nature doesn't change (48:19) The next Robinhood could be Robinhood (49:46) Wrapping up (53:05) Closing thoughts (56:58)
Leonard S. Johnson is the Founder and CEO of AIEDC, a 5G Cloud Mobile App Maker and Service Provider with Machine Learning to help small and midsize businesses create their own iOS and Android mobile apps with no-code or low-code so they can engage and service their customer base, as well as provide front and back office digitization services for small businesses. Victoria talks to Leonard about using artificial intelligence for good, bringing the power of AI to local economics, and truly democratizing AI. The Artificial Intelligence Economic Development Corporation (AIEDC) (https://netcapital.com/companies/aiedc) Follow AIEDC on Twitter (https://twitter.com/netcapital), Instagram (https://www.instagram.com/netcapital/), Facebook (https://www.facebook.com/Netcapital/), or LinkedIn (https://www.linkedin.com/company/aiedc/). Follow Leonard on Twitter (https://twitter.com/LeonardSJ) and LinkedIn (https://www.linkedin.com/in/leonardsjohnson84047/). Follow thoughtbot on Twitter (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: VICTORIA: This is The Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. I'm your host, Victoria Guido. And with us today is Leonard S. Johnson, or LS, Founder and CEO of AIEDC, a 5G Cloud Mobile App Maker and Service Provider with Machine Learning to help small and midsize businesses create their own iOS and Android mobile apps with no-code or low-code so they can engage and service their customer base, as well as provide front and back office digitization services for small businesses. Leonard, thanks for being with us today. LEONARD: Thank you for having me, Victoria. VICTORIA: I should say LS, thank you for being with us today. LEONARD: It's okay. It's fine. VICTORIA: Great. So tell us a little more about AIEDC. 
LEONARD: Well, AIEDC stands for Artificial Intelligence Economic Development Corporation. And the original premise that I founded it for...I founded it after completing my postgraduate work at Stanford, and that was 2016. And it was to use AI for economic development, and therefore use AI for good, versus just hearing about artificial intelligence in some of the different movies where it either takes over the world, like Skynet, or invades data privacy, and these other things, which are true; it's very evident they exist, and they're out there. But at the end of the day, I've always looked at life as a growth strategy and the improvement of what we could do, and focusing on what we could do practically. You do it tactically, then you do it strategically over time, and you're able to implement things. That's why I think we keep building collectively as humanity, no matter what part of the world you're in. VICTORIA: Right. So you went to Stanford, and you're from South Central LA. And what about that background led you to pursue AI for good in particular? LEONARD: So growing up in the inner city of Los Angeles, you know, that South Central area, Compton area, it taught me a lot. And then after that, after I completed high school...and not in South Central because I moved around a lot. I grew up with a single mother, never knew my real father, and my home life with my single mother wasn't good because of just circumstances all the time. And so I started understanding, even as a young kid, that you put your brain...you utilize something because you had two choices. It's very simple, or binary, you know, A or B. A, you do something with yourself, or B, you go out and be social in a certain neighborhood. And I'm African American, so there was a high probability that you'd end up dead, or in a gang, or in crime, because that's what it was at that time. That's just the situation. 
Or you're able to channel those energies and put them toward a use that's productive and positive for yourself, and that's what I did, which is utilizing a way to learn. I could always pick up things when I was very young. And a lot of my younger teachers were like, "You're very, very bright," or "You're very smart." And there weren't many programs because I'm older than 42, so there weren't as many programs as there are today. And I really like all of those programs, so I want to clarify the context: today, there's a lot more engagement and identification of kids that might be sharper, smarter, whatever their personal issues are, good or bad. And it's a way to sort of separate them, so you're not just teaching the whole group as a whole and putting them all in one basket. But back then, there was not. And so I just used to go home a lot, do a lot of reading, do a lot of studying, and just knick-knack with things in tech. And then I started understanding that even as a young kid in the inner city, you see economics very early, but kids don't understand that's really what they're studying. They see economics. They can see inflation because making two ends meet is very difficult. They may see gang violence and drugs, or whatever it might end up being. And a lot of that, in my opinion, always has an underlying economic foundation. And so people would say, "Oh, why is this industry like this?" and so forth, "Why does this keep happening?" It's because they can't function. And sometimes it's just them and their family, but they can't function because it's an economic system. So I started focusing on that and then went into the Marine Corps. And then, after the Marine Corps, I went to Europe. I lived in Europe for a while to do my undergrad studies in the Netherlands, in Holland. 
VICTORIA: So having that experience of taking a challenge, taking these forces around you and turning them into a force for good, that's led you to bring the power of AI to local economics. And is that the direction that you went eventually? LEONARD: So economics was always something that I understood and had a fascination with, prior to even starting my company. I started in 2017. And we're crowdfunding now, and I can get into that later. But I self-funded it since 2017 until...I think I only started crowdfunding when COVID hit, which was 2020, just to get awareness and people out there, because I couldn't go to a lot of events. So I'm like, okay, how can I get exposure? But yeah, it was a matter of looking at it from that standpoint: economics always factored in for me, even when I was in the military, when I was in the Marine Corps. I would see that...we would go to different countries, and you could just see the difference in how they lived and survived. And another side note: my son's mother is from Ethiopia, in Africa. And I have a good relationship with my son and his mother, even though we've been apart, divorced, for over 15 years or so or longer. But trying to keep that, you can just see this dichotomy. You go out to these different countries, and even in the military, it's just so extreme compared to the U.S., any part of the U.S., but that always focused me on economics. And then technology, I just always kept up with. Like, back in the '80s, when the mobile brick phone came out, I had to figure out how to get one. [laughs] And then I took it apart and put it back together just to see how it worked, so yeah. But it was a huge one, by the way. I mean, it was like someone got another one and theirs broke, and they thought it was broken. And they're like, "This doesn't work. You can take this piece of junk." I'm like, "Okay." [laughs] VICTORIA: Like, oh, great. I sure will, yeah. Now, I love technology. 
And I think a lot of people perceive artificial intelligence as being super futuristic, potentially harmful, maybe having a negative economic impact. So what, from your perspective, can AI do for local economics or for people who may not have access to that advanced technology? LEONARD: Well, that's the key, and that's what we're looking to do with AIEDC. When you look at small and midsize businesses, it's not what people think, or what their perception is. In the U.S., they're the backbone of the United States, of our economy, literally. And in other parts of the world, it's the same, where it could be a one- or two-person mom-and-pop shop. That's where that name comes from; it's literally two people. And they're trying to start something to build their own life over time because they're using their labor to maybe build wealth, or at least a little bit of it. And when I say wealth, it's always relative. It's enough to sustain themselves, or just put food on the table, and be able to control their own destiny to the best of their ability. And so what we're looking to do is make a mobile app maker that lives in the cloud and is 5G compliant, that will allow small and midsize businesses to create their own iOS or Android mobile app with no-code or low-code, basically like creating an email. That's how simple we want it to be. When you create your own email, whether you use Microsoft, Google, or whatever you do, you make it that simple. And there's a simple version, and there could be complexity added to it if they want. That would be the back office digitization or customization, but that then gets them on board with digitization. It's intriguing that McKinsey, very recently, came out with a report stating that in 2023, in order to be economically viable, all companies would need to have a digitization strategy. 
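The no-code idea Leonard describes, an app defined by filling in data rather than writing code, can be sketched in Python. Everything here (the spec format, field names, the bakery) is hypothetical, invented for illustration; it is not AIEDC's actual format:

```python
# Hypothetical declarative app spec: the owner describes the app as data,
# and a generator would turn it into an iOS/Android app. Illustrative only.
app_spec = {
    "name": "Rosa's Bakery",
    "theme": {"primary_color": "#7B3F00"},
    "pages": [
        {"type": "menu", "items": [{"name": "Sourdough", "price": 6.50}]},
        {"type": "contact", "phone": "+1-555-0100"},
    ],
    "payments": {"providers": ["square", "mastercard"]},
}

def validate(spec):
    """Return the required top-level fields that are missing from the spec."""
    required = {"name", "pages", "payments"}
    return sorted(required - spec.keys())

print(validate(app_spec))  # [] -> the spec is complete
```

The point of the declarative shape is that a business owner only ever edits the data, never the generator, which is what makes the "as simple as creating an email" promise plausible.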
And so when you look at small businesses, and you look at things like COVID-19, or the current ongoing COVID issue and that disruption, this is global. And you look at even the Ukrainian War or the Russian-Ukrainian War, however you term it, invasion, war, special operation, these are disruptions. And then, on top of that, we look at climate change, which has been accelerating in the last two years more so than we experienced prior to that. So this is something that everyone can see is self-evident. I'm not even focused on the cause of the problem. My brain and the way I think, and my team, we like to focus on solutions. My chairman is a former program director of NASA who managed 1,200 engineers that built the International Space Station; what was it? 20-30 years ago, however long that is. And he helped lead and build that from Johnson Center. And so you're focused on solutions because if you're building the International Space Station, you can only focus on solutions and anticipate the problems, but not dwell on them. And so that kind of mindset is what I have, and it's looking to help small businesses do that to get them on board with digitization and then customization. And then, beyond that, use our system, which is called M.I.N.D. So we own these...we own patents, three patents, trademarks, and service marks related to artificial intelligence that are in the field of economics. And we will utilize DEVS, Discrete Event System Specification...we plan to do that, which is a suite of system specifications, to predict regional economic issues the way we predict the weather, in a proactive way, not reactive. A lot of economic situations are reactive. It's reactive to the Federal Reserve raising interest rates or lowering rates, Wall Street, you know, moving money or not moving money. It is what it is. I mean, I don't judge it. I think it's like financial engineering, and that's fine. It's profitability. But then, at the end of the day, if you're building something, it's like when we're going to go to space. 
When rockets launch, they have to do what they're intended to do. Like, I know that Blue Origin just blew up recently. Or if they don't, they have a fail-safe, and at least I heard that with the Blue Origin vehicle, if it had been carrying passengers, the passengers would have been safe because the capsule separated when it detected its own problem. So when you anticipate these kinds of problems and apply that to the local small business person, you can help them forecast and predict better, like what weather prediction has done. And we're always improving that collectively for weather prediction, especially with climate change, so that it can get to near real-time as soon as possible, or to a closer window: two days out versus two weeks out, as an example. VICTORIA: Right. Those are examples of what you'd call narrow economic prediction. LEONARD: Correct. It is intriguing when you say narrow economic prediction because it wouldn't be narrow AI. It would actually get into AGI if you added more variables, which we would. The more variables you add, and tendencies...so if you're looking at events, with discrete event system specification you would specify what they really, really need to do to have those variables. But at some point, you're working on a system, what I would call AGI. But AGI, in my mind, in the circles I run in, or at least most of the scientists I talk to, is not artificial superintelligence. And so the general public thinks AGI...and I've said this to Stephen Ibaraki, who's the founder of the AI for Good Global Summit at the United Nations, in one of his interviews as well. It's just that Artificial General Intelligence, I think, has been put out a lot by Hollywood and entertainment and so forth, and some scientists say certain things. We won't be at artificial superintelligence. We might get to Artificial General Intelligence by 2030 easily, in my opinion. 
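The discrete event system specification (DEVS) approach Leonard mentions can be illustrated with a minimal event-driven simulation in Python. This is a toy sketch of the general idea, not AIEDC's M.I.N.D. system; the events and numbers are invented:

```python
import heapq

def simulate(events, horizon, cash=1000.0):
    """Minimal discrete-event sketch: each event is (time, cash_delta).
    Events fire in time order; state only changes when an event occurs.
    Returns the cash trajectory up to the horizon."""
    queue = list(events)
    heapq.heapify(queue)  # process events in time order
    trajectory = [(0.0, cash)]
    while queue:
        time, delta = heapq.heappop(queue)
        if time > horizon:
            break
        cash += delta
        trajectory.append((time, cash))
    return trajectory

# A storefront's week: sales inflows and a supply-cost shock (illustrative).
week = [(1.0, 200.0), (2.5, -150.0), (4.0, 300.0), (6.0, -500.0)]
for day, balance in simulate(week, horizon=7.0):
    print(f"day {day:.1f}: balance {balance:.2f}")
```

A real DEVS model adds ports, internal transitions, and coupled sub-models, but the core idea is the same: specify exactly which variables and events matter, then advance state only at events, which is what makes proactive forecasting tractable.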
But that will be narrow AI, though it will cover what we in the field look at as cross-domain: teaching a system to look at different variables, because right now, it's really narrow. Like natural language processing, it's just going to look at language and infer from there, and then you've got backpropagation, which is credit assignment, and fraud detection. Those are narrow data points. But when you start looking at something cross-domain...who am I thinking of? Pedro Domingos, who wrote The Master Algorithm, which, actually, Xi Jinping, the President of China, has a copy of on his bookshelf in his office, because they've talked about that, these great minds, because Stephen Ibaraki has interviewed these...and the founder of Google Brain and all of these guys. And so there's always this debate in the scientific community of what is narrow AI and what it's not. But at the end of the day, I just like Pedro's definition of it because he says the master algorithm will be combining all five schools of machine learning, so you're really crossing domains, which AI hasn't done. And to me, that will be AGI, but that's not artificial superintelligence. Artificial superintelligence is when it becomes very, you know, like some of the movies would say; if we as humanity just let it run wild, it could be crazy. VICTORIA: One of my questions is: is the future of AI more like I, Robot or Bicentennial Man? LEONARD: Well, you know, interesting. That's a great question, Victoria. I see most of AI literally as I, Robot, as a tool more than anything, except at the end when it implied...so it kind of did two things in that movie, but a wonderful movie to bring up. And I like Will Smith perfectly. Well, I liked him a lot more before -- VICTORIA: I think I, Robot is really the better movie. LEONARD: Yeah, so if people haven't seen I, Robot...I liked Will Smith, the actor. But I, Robot showed you two things. One, it showed hope. Literally, the robot...because a lot of people put AI and robots together. 
And AI by itself is the mind, I should say; the hardware is the robot, the brain. Software...AI in and of itself is software. It's the mind itself. That's why we have M.I.N.D., Machine Intelligence Neural Network Database. We literally have that. That's our acronym and our slogan and everything. And it's part of our patents. But its machine intelligence is M.I.N.D., and we own that, you know; the company owns it. And so M.I.N.D...we always say AI powered by M.I.N.D. We're talking about that software side of what your mind does; it iterates and thinks, the ability to think itself. Now, it's enclosed within a structure; for the human, it's called a brain, the physical part of it, and that brain is enclosed within the body. So when you look at robots...and my chairman was the key person for robotics for the International Space Station. So when you look at robotics, you are putting that software into hardware, just like your cell phone. You have the physical, and then you have the actual iOS, which is the operating system. So when you think about that, yeah, I, Robot was good because it showed how these can be tools, and in the beginning of the movie, they were very helpful, very beneficial to humanity. But then it went to a darker side and showed where V.I.K.I., which was an acronym as well, I think was Virtual Interactive Kinetic technology or something like that, V.I.K.I; I forgot the last I. But that's what it stood for. It was an acronym to say...and then V.I.K.I just became all-aware and started killing everyone with robots and just wanted to say, you know, this is futile. But then, at the very, very end, V.I.K.I learned from itself and said, "Okay, I guess this isn't right." Or rather, the other robot, who could think differently, argued with V.I.K.I, and they destroyed her. And they made V.I.K.I a woman in the movie, and then the robot was the guy. 
But that shows that it can get out of hand. But it was intriguing to me that they had her contained within one building. This wouldn't be artificial superintelligence. And I think sometimes Hollywood says, "Just take over everything from one building," no. It wouldn't be on earth if it could. But that is something we always have to think about. We have to think about the worst-case scenarios. I think every prudent scientist or business person or anyone should do that, even investors, I mean, if you're investing something for the future. But you also don't focus on it. You don't think about the best-case scenario, either. But there's a lot of dwelling on the worst-case scenario versus the good that we can do given we're looking at where humanity is today. I mean, we're in 2022, and we're still fighting wars that we fought in 1914. VICTORIA: Right. Which brings me to my next question, which is both, what are the most exciting opportunities to innovate in the AI space currently? And conversely, what are the biggest challenges that are facing innovation in that field? LEONARD: Ooh, that's a good question. I think, in my opinion, it's almost the same answer; one is...but I'm in a special field. And I'm surprised there's not a lot of competition for our company. I mean, it's very good for me and the company's sense. It's like when Mark Zuckerberg did Facebook, there was Friendster, and there was Myspace, but they were different. They were different verticals. And I think Mark figured out how to do it horizontally, good or bad. I'm talking about the beginning of when he started Facebook, now called Meta. But I'm saying utilizing AI in economics because a lot of times AI is used in FinTech and consumerism, but not economic growth where we're really talking about growing something organically, or it's called endogenous growth. Because I studied Paul Romer's work, who won the Nobel Prize in 2018 for economic science. And he talked about the nature of ideas. 
And we were working on something like that at Stanford. And I put out a book in January of 2017 talking about cryptocurrencies and artificial intelligence, but about the utilization of it, not the speculation. I never talked about speculation. I don't own any crypto; I would not. It's only once it's utilized in its pure tech form that it will create what it was envisioned to do by the protocol that Satoshi Nakamoto sort of created. And it still fascinates me that people follow the Bitcoin protocol, both the tech and the non-tech people, but they don't know who Satoshi is. But yeah, it's a white paper. You're just following a white paper, because I think, logically, the world is going towards that iteration of evolution. And that's how AI could be utilized for good: an area to focus it on is economics and solving current problems, and then, going forward, building a new economy that's not debt-driven or consumer-purchase only, because that leaves a natural imbalance in the current world structure. The Western countries are great. We do okay, and we go up and down. But the emerging and developing countries just get stuck, and they seem to go into a circular loop. And then there are wars as a result of these things, and territory fights, and so forth. So that's an area where I think it could be more advanced: AI in the economic realm, not so much the consumer FinTech realm, which is fine. But consumer FinTech, in my mind, is you're using AI to process PayPal. That's where I think Elon just iterated later, because PayPal is using it for finance. You're just moving things back and forth, and you're just authenticating everything. But then he went on to SpaceX next because he's like, well, let me use technology in a different way. And I do think he's using AI on all of his projects now. VICTORIA: Right. So how can that tech solve real problems today? 
Do you see anything even particular about Southern California, where we're both at right now, where you think AI could help predict some outcomes for small businesses or that community? LEONARD: I'm looking to do it regionally then globally. So I'm part of this Southern Cal Innovation Hub, which is just AI. It's an artificial intelligence coordination between literally San Diego County, Orange County, and Los Angeles County. And so there's a SoCal Innovation Hub that's kind of bringing it together. But there are all three groups, like; I think the CEO in Orange County is the CEO of Leadership Alliance. And then in San Diego, there's another group I can't remember their name off the top of my head, and I'm talking about the county itself. So each one's representing a county because, you know. And then there's one in Northern California that I'm also associated with where if you look at California as its own economy in the U.S., it's still pretty significant as an economic cycle in the United States, period. That's why so many politicians like California because they can sway the votes. So yeah, we're looking to do that once, you know, we are raising capital. We're crowdfunding currently. Our total raise is about 6 million. And so we're talking to venture capitalists, private, high net worth investors as well. Our federal funding is smaller. It's just like several hundred thousand because most people can only invest a few thousand. But I always like to try to give back. If you tell people...if you're Steve Jobs, like, okay, I've got this Apple company. In several years, you'll see the potential. And people are like, ah, whatever, but then they kick themselves 15 years later. [laughs] Like, oh, I wish I thought about that Apple stock for $15 when I could. But you give people a chance, and you get the word out, and you see what happens. Once you build a system, you share it. There are some open-source projects. 
But I think of the open source, like OpenAI, as an example; Elon Musk funds that, as well as Microsoft. They both put a billion dollars into it. It is an open-source project, OpenAI claims...but some of the research does go back to Microsoft, which is able to see it. And DeepMind is another AI research lab, but they're owned by Google. And so, I'm also very focused on democratizing artificial intelligence for the benefit of everyone. I really believe it needs to be democratized in the sense of tying it to economics and making it usable by everyone that may need it, for the benefit of humanity, where it's profitable and makes money but it's not just usurping. MID-ROLL AD: As life moves online, brick-and-mortar businesses are having to adapt to survive. With over 18 years of experience building reliable web products and services, thoughtbot is the technology partner you can trust. We provide the technical expertise to enable your business to adapt and thrive in a changing environment. We start by understanding what's important to your customers to help you transition to intuitive digital services your customers will trust. We take the time to understand what makes your business great and work fast yet thoroughly to build, test, and validate ideas, helping you discover new customers. Take your business online with design‑driven digital acceleration. Find out more at tbot.io/acceleration or click the link in the show notes for this episode. VICTORIA: With democratizing it, is there also a need to increase the understanding of the ethics around it, and of certain known use cases for AI where it actually is discriminatory and plays into systemic problems in our society? Are you familiar with that as well? LEONARD: Yes, absolutely. Well, that's my whole point. And, Victoria, you just hit the nail on the head. Truly democratizing AI, in my mind, and in my brain the way it works, is that it is opened up for everyone. 
Because if you really roll it back, okay, companies now we're learning...several years ago, we used to call it UGC, User-Generated Content. And now a lot of people are like, okay, if you're on Facebook, you're the product, right? Or if you're on Instagram, you're the product. And they're using you, and they're using your data to sell, et cetera, et cetera. But user-generated content, it's always been that. It's just a matter of the sharing of the economics. That's why I keep going back to economics. So, you know, you wouldn't necessarily have to do advertising if you had stakeholders in the advertising, the users and the company, as an example. If it's a social media company, just throwing it out there, so let's say you have a social media...and this has been talked about, but I'm not the first to introduce this. This has been talked about for over ten years, at least over 15 years. And it's that you share as a triangle, in three ways. So you have the user and everything else. So take your current social media, and I won't pick on Facebook, but I'll just use them: Facebook, Instagram, or Twitter. Twitter's having issues recently because Elon is trying to buy them or get out of buying them. But you just look at that data, and then you share with the user base. What's the revenue model? And there needs to be one; let me be very clear. There has to be incentive, and there has to be profitability for people that joined you earlier, you know, joined the corporation, or became shareholders, or investors, or users, or customers. They have to be able to have some benefit, not extremely greater than everyone else's, but a great benefit from coming in earlier, by what they contributed at the time. And that is what makes this system holistic, in my opinion, like Reddit or any of these bloggers. 
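The three-way revenue sharing Leonard sketches, with company, users, and early backers each holding a stake, reduces to a weighted split. A minimal sketch in Python, with the parties and percentages invented for illustration:

```python
def split_revenue(revenue, shares):
    """Split revenue among stakeholders; the share weights must sum to 1."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {party: round(revenue * share, 2) for party, share in shares.items()}

# An unequal but non-exclusive split: everyone benefits, just not equally.
payout = split_revenue(1000.0, {"company": 0.5, "users": 0.3, "early_backers": 0.2})
print(payout)  # {'company': 500.0, 'users': 300.0, 'early_backers': 200.0}
```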
But you make it where they use their time and the users, and you share it with the company and then the data and so forth, and whatever revenue economic model you have, and it's a sort of a three-way split. It's just not always equal. And that's something that I think in economics, we're still on a zero-sum game, I win, you lose sort of economic model globally. That's why there's a winner of a war and a loser of a war. But in reality, as you know, Victoria, there are no winners of any war. So it's funny, [laughs] I was just saying, well, you know, because of the economic mode, but Von Neumann, who talked about that, also talked about something called a non-zero-sum game when he talked about it in mathematics that you can win, and I can win; we just don't win equally because they never will match that. So if I win, I may win 60; you win 40. Or you may win 60, I win 40, and we agree to settle on that. It's an agreement versus I'm just going to be 99, and you'll be 1%, or I'm just going to be 100, and you're at 0. And I think that our economic model tends to be a lot of that, like, when you push forth and there needs to be more of that. When you talk about the core of economics...and I go way back, you know, prior to the Federal Reserve even being started. I just look at the world, and it's always sort of been this land territorial issue of what goods are under the country. But we've got technology where we can mitigate a lot of things and do the collective of help the earth, and then let's go off to space, all of space. That's where my brain is focused on. VICTORIA: Hmm. Oh yeah, that makes sense to me. I think that we're all going to have to evolve our economic models here in the future. I wonder, too, as you're building your startup and you're building your company, what are some of the technology trade-offs you're having to make in the stack of the AI software that you're building? LEONARD: Hmm. Good question. 
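Von Neumann's distinction that Leonard draws, zero-sum versus non-zero-sum, is easy to make concrete in code. A small sketch in Python; the payoff numbers are invented, with the 60/40 outcome mirroring Leonard's example:

```python
# Payoffs as (A's payoff, B's payoff) for each pair of choices.
# Zero-sum: every cell sums to zero, so A's gain is exactly B's loss.
zero_sum = {
    ("hawk", "hawk"): (0, 0),
    ("hawk", "dove"): (60, -60),
    ("dove", "hawk"): (-60, 60),
    ("dove", "dove"): (0, 0),
}

# Non-zero-sum: cooperation grows the pie, so both sides can win,
# just not equally -- e.g. settling on a 60/40 split of the surplus.
non_zero_sum = {
    ("cooperate", "cooperate"): (60, 40),
    ("cooperate", "defect"): (0, 10),
    ("defect", "cooperate"): (10, 0),
    ("defect", "defect"): (1, 1),
}

def is_zero_sum(game):
    """A game is zero-sum if every outcome's payoffs cancel out."""
    return all(a + b == 0 for a, b in game.values())

print(is_zero_sum(zero_sum))      # True
print(is_zero_sum(non_zero_sum))  # False -- both players can come out ahead
```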
But let me clarify; this may be a much deeper dive because that's a general question. And I don't want to...yeah, go ahead. VICTORIA: Because when you're building AI, and you're going to be processing a lot of data, I know many data scientists that are familiar with tools like Jupyter Notebooks, and R, and Python. And one issue that I'm aware of is keeping the environments the same, so everything that goes into building your app, and having that infrastructure as code for your data science applications, and being able to afford to process all that data. [laughs] And there are just so many factors that go into building an AI app versus building something that's easier, like a web-based user form. So I'm just curious if you've encountered those types of trade-offs, or questions about, okay, how are we going to actually build an app that we can put out on everybody's phone and that works responsibly? LEONARD: Oh, okay. So let me be very clear, but I won't give too much of the secret sauce away. But I can define this technically because this is a technical audience. This is not...so what you're really talking about is two things, and I'm clear about this, though. So the app maker won't really read and write a lot of data. It'll just be the app where people can get on board with digitization simply, you know, process payments, maybe connect with someone like American Express, Square, Mastercard, whatever. And so that's just letting them function. That's sort of small FinTech in my mind, you know, just transaction A to B, B to A, et cetera. And it doesn't need to be peer-to-peer and all of the crypto. It doesn't even need to go to that level yet. That's just level one. Then they will sign up for a service, because we're really focused on artificial intelligence as a service. And that, to me, is the next iteration for AI. I've been talking about this for about three or four years now, literally, in different conferences and so forth, for people who haven't hit it. 
But we will get to that point where AI becomes AI as a service, just like SaaS. We're still at the...you know, most of the world on legacy systems is still software as a service. We're about to hit AI as a service because the world is evolving. And this is true; they did shut it down. But there are two case points which I can bring up. So JPMorgan did create something called COIN, and it was using AI. And it was COIN like crypto, like a token, but they called it COIN. And it could process, I think, something like...I may be off on this, so to the sticklers that will be listening, please, I'm telling you I may be off on the exact quote, but it was something crazy to me, like 200,000 legal hours that it could process, because it was basically taking on the corporate legal structure of JPMorgan, one of the biggest banks. I think they are the biggest bank in the U.S., JPMorgan Chase. And they were explaining in 2017 how they created this and how it was going to alleviate that many hours of legal work for the bank. And I think, politically, something happened, because they just pulled away. I still have the original press release from when they put it out, and it was in the media. And then it went away. I mean, no implementation, [laughs] because I think there was going to be a big loss of jobs for it. And they basically would have been white-collar legal jobs, most specifically lawyers literally that were working for the bank. And when they were talking about investment, it was a committee. I was at a conference. And I was fascinated by that. And they were basically using the Bitcoin protocol as the tokenization protocol, but they were using AI to process it. And it was basically looking at...because legal contracts are...you can teach it with natural language processing to be able to encode and almost output it itself and then be able to speak with each other. 
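The COIN idea Leonard describes, teaching a system to pull structured terms out of legal text, can be hinted at with even a regex-level toy in Python. Real contract-intelligence systems use trained NLP models rather than regexes; the clause and field names here are invented:

```python
import re

CLAUSE = "The Borrower shall pay interest at a rate of 4.25% per annum, due on 2024-03-01."

def extract_terms(text):
    """Toy extraction of an interest rate and a due date from clause text."""
    rate = re.search(r"(\d+(?:\.\d+)?)%", text)
    due = re.search(r"\d{4}-\d{2}-\d{2}", text)
    return {
        "rate_pct": float(rate.group(1)) if rate else None,
        "due_date": due.group(0) if due else None,
    }

print(extract_terms(CLAUSE))  # {'rate_pct': 4.25, 'due_date': '2024-03-01'}
```

Scaling this from one clause to a bank's entire contract base is where the "hundreds of thousands of legal hours" figure comes from: the per-clause work is mechanical once the extraction is learned.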
Another case in point was Facebook. They had...what was it? Two AI systems that began to create their own language. I don't know if you remember that story or heard about it, but Facebook shut it down. And this was more like two years ago, I think, when they were saying Facebook was talking, you know, when they were Facebook, not Meta, so maybe it was three years ago. And they were talking, and they were like, "Oh, Facebook has a language. It's talking to each other." And it created its own little language because it was two AI bots going back and forth. And then the engineers at Facebook said, "We've got to shut this down because this is kind of getting out of the box." So when you talk about AI as a service, yes, there's the good and the bad. And what you take away is that AWS, Oracle, and Google Cloud do have services where it doesn't need to cost you as much anymore as it used to in the beginning, if you know what you're doing ahead of time and you're not just running iterations or data processing because you're doing guesswork, versus, in my opinion, actually knowing exactly, specifically what you're looking for and the data set you're looking to get out of it. And then you're talking about just basically putting it in containers and clustering it because it gets different operations. And so what you're really looking at is something called an N-scale graph database that can process data in maybe sub-seconds, at that level, excuse me. And one of my advisors is the head of that at AGI Laboratory. So he's got an N-scale graph database that can process...when we implement it, we'll be able to process data at the petabyte level in sub-seconds, and it can run on platforms like Azure or AWS, and so forth. 
And I'm curious if you see any future trends since I know you're a futurist around quantum computing and how that could affect capacity for -- LEONARD: Oh, [laughs] we haven't even gotten there yet. Yes. Well, if you look at N-scale, if you know what you're doing and you know what to look for, then the quantum just starts going across different domains as well but at a higher hit rate. So there have been some quantum computers online. There have been several...well, Google has their quantum computer coming online, and they've been working on it, and Google has enough data, of course, to process. So yeah, they've got that data, lots of data. And quantum needs, you know, if it's going to do something, it needs lots of data. But then the inference will still be...I think quantum is very good at processing large, large, large amounts of data. We can just keep going if you really have a good quantum computer. But it's really narrow. You have to tell it exactly what you want, and it will do it in what we call P versus NP, which is you want to do it in polynomial time, not non-polynomial time, which is...now I'm speaking too fast. Okay, my brain is going faster than my lips. Let me slow it down. So when you start thinking about processing, if we as humans, let's say if I was going to process A to Z, and I'm like, okay, here is this equation, if I tell you it takes 1,000 years, it's of no use to us, to me and you, Victoria, because we're living now. Now, the earth may benefit in 1,000 years, but it's still of no use. But if I could take this large amount of data and have it process within minutes, you know, worst case hours...but then I'll even go down to seconds or sub-seconds, then that's really a benefit to humanity now, today, in the present term. And so, as a futurist, yes, as the world, we will continue to add data.
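(Editorial aside: the polynomial-versus-non-polynomial point LS is making can be made concrete with a toy calculation. The operation counts and machine speed below are illustrative assumptions, not figures from the conversation.)

```python
# Editorial sketch: why polynomial time is useful in practice
# and non-polynomial time often isn't.
OPS_PER_SECOND = 10 ** 9  # assume a machine doing a billion operations/second

def polynomial_steps(n):
    """Work done by an O(n^2) algorithm, e.g., comparing all pairs."""
    return n ** 2

def exponential_steps(n):
    """Work done by an O(2^n) algorithm, e.g., brute-force subset search."""
    return 2 ** n

for n in (30, 60, 90):
    poly = polynomial_steps(n) / OPS_PER_SECOND
    expo = exponential_steps(n) / OPS_PER_SECOND
    print(f"n={n}: polynomial {poly:.1e}s, exponential {expo:.1e}s")
```

At n=60, the exponential algorithm needs about 2^60 / 10^9 ≈ 1.2 billion seconds (roughly 36 years), and at n=90 roughly 39 billion years, while the polynomial algorithm stays in the microsecond range throughout — the "1,000 years versus seconds" gap described above.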
We're doing it every day, and we already knew this was coming ten years ago, 15 years ago, 20 years ago, even actually in the '50s when we were in the AI winter. We're now in AI summer. In my words, I call it the AI summer. So as you're doing this, that data is going to continue to increase, and quantum will be needed for that. But then the specific need...quantum is very good at looking at a specific issue, specifically for that very narrow. Like if you were going to do the trajectory to Jupiter or if we wanted to send a probe to Jupiter or something, I think we're sending something out there now from NASA, and so forth, then you need to process all the variables, but it's got one trajectory. It's going one place only. VICTORIA: Gotcha. Well, that's so interesting. I'm glad I asked you that question. And speaking of rockets going off to space, have you ever seen a SpaceX launch from LA? LEONARD: Actually, I saw one land but not a launch. I need to go over there. It's not too far from me. But you got to give credit where credit's due and Elon has a reusable rocket. See, that's where technology is solving real-world problems. Because NASA and I have, you know, my chairman, his name is Alexander Nawrocki, you know, he's Ph.D., but I call him Rocki. He goes by Rocki like I go by LS. But it's just we talk about this like NASA's budget. [laughs] How can you reduce this? And Elon says they will come up with a reusable rocket that won't cost this much and be able to...and that's the key. That was the kind of Holy Grail where you can reuse the same rocket itself and then add some little variables on top of it. But the core, you wouldn't constantly be paying for it. And so I think where the world is going...and let me be clear, Elon pushes a lot out there. He's just very good at it. But I'm also that kind of guy that I know that Tesla itself was started by two Stanford engineers. 
Elon came on later, like six months, and then he invested, and he became CEO, which was a great investment for Elon Musk. And then CEO I just think it just fit his personality because it was something he loved. But I also have studied for years Nikola Tesla, and I understand what his contributions created where we are today with all the patents that he had. And so he's basically the father of WiFi and why we're able to communicate in a lot of this. We've perfected it or improved it, but it was created by him in the 1800s. VICTORIA: Right. And I don't think he came from as fortunate a background as Elon Musk, either. Sometimes I wonder what I could have done born in similar circumstances. [laughter] And you certainly have made quite a name for yourself. LEONARD: Well, I'm just saying, yeah, he came from very...he did come from a poor area of Russia which is called the Russian territory, to be very honest, Eastern Europe, definitely Eastern Europe. But yeah, I don't know once you start thinking about that [laughs]. You're making me laugh, Victoria. You're making me laugh. VICTORIA: No, I actually went camping, a backpacking trip to the Catalina Island, and there happened to be a SpaceX launch that night, and we thought it was aliens because it looked wild. I didn't realize what it was. But then we figured it was a launch, so it was really great. I love being here and being close to some of this technology and the advancements that are going on. I'm curious if you have some thoughts about...I hear a lot about or you used to hear about Silicon Valley Tech like very Northern California, San Francisco focus. But what is the difference in SoCal? What do you find in those two communities that makes SoCal special? [laughs] LEONARD: Well, I think it's actually...so democratizing AI. I've been in a moment like that because, in 2015, I was in Dubai, and they were talking about creating silicon oasis. 
And so there's always been this model of, you know...because they were always, you know, the whole Palo Alto thing is people would say it, and it is true. I mean, I experienced it. Because I was in a two-year executive post-graduate program, but we would go up there...I wasn't living up there. I had to go there maybe once every month for like three weeks, every other month or something. But when you're up there, it is in the air and the water. It's just like, people just breathe certain things. Because around the world, and I would travel to Japan, and China, and other different parts of Asia, Vietnam, et cetera, and in Africa, of course, and let's say you see this and people are like, so what is it about Silicon Valley? And of course, there is the Hollywood show about it, the HBO show, which is pretty accurate, which is interesting. But you would see that, and you would think, how are they able to just replicate this? And a lot of it is a convergence. By default, they hear about these companies' access because the key is access, and that's what we're...like this podcast. I love the concept around it because giving awareness, knowledge, and access allows other people to spread it and democratize it. So it's just not one physical location, or you have to be in that particular area only to benefit. I mean, you could benefit in that area, or you could benefit from any part of the world. But since they started, people would go there; engineers would go there. They built companies, PCs, et cetera. Now that's starting to spread to other areas, like Southern Cal creating its own innovation hubs to be able to bring all three together. And those three are the engineers and founders, and idea makers and startups. And then you need the expertise. I'm older than 42; I'm not 22. [laughs] So I'm just keeping it 100, keeping it real. So I'm not coming out at 19. I mean, my son's 18.
And I'm not coming out, okay, this my new startup, bam, give me a billion dollars, I'm good. And let me just write off the next half. But when you look at that, there's that experience because even if you look at Mark Zuckerberg, I always tell people that give credit where credit is due. He brought a senior team with him when he was younger, and he didn't have the experience. And his only job has been Facebook out of college. He's had no other job. And now he's been CEO of a multi-billion dollar corporation; that's a fact. Sometimes it hurts people's feelings. Like, you know what? He's had no other job. Now that can be good and bad, [laughs] but he's had no other jobs. And so that's just a credit, like, if you can surround yourself with the right people and be focused on something, it can work to the good or the bad for your own personal success but then having that open architecture. And I think he's been trying to learn and others versus like an Elon Musk, who embraces everything. He's just very open in that sense. But then you have to come from these different backgrounds. But let's say Elon Musk, Mark Zuckerberg, let's take a guy like myself or whatever who didn't grow up with all of that who had to make these two ends meet, figure out how to do the next day, not just get to the next year, but get to the next day, get to the next week, get to the next month, then get to the next year. It just gives a different perspective as well. Humanity's always dealing with that. Because we had a lot of great engineers back in the early 1900s. They're good or bad, you know, you did have Nikola Tesla. You had Edison. I'm talking about circa around 1907 or 1909, prior to World War I. America had a lot of industries. They were the innovators then, even though there were innovations happening in Europe, and Africa, and China, as well and Asia. But the innovation hub kind of created as the America, quote, unquote, "industrial revolution." 
And I think we're about to begin a new revolution sort of tech and an industrial revolution that's going to take us to maybe from 20...we're 2022 now, but I'll say it takes us from 2020 to 2040 in my head. VICTORIA: So now that communities can really communicate across time zones and locations, maybe the hubs are more about solving specific problems. There are regional issues. That makes a lot more sense. LEONARD: Yes. And collaborating together, working together, because scientists, you know, COVID taught us that. People thought you had to be in a certain place, but then a lot of collaboration came out of COVID; even though it was bad globally, even though we're still bad, if people were at home, they start collaborating, and scientists will talk to scientists, you know, businesses, entrepreneurs, and so forth. But if Orange County is bringing together the mentors, the venture capital, or at least Southern California innovation and any other place, I want to say that's not just Silicon Valley because Silicon Valley already has it; we know that. And that's that region. It's San Jose all the way up to...I forgot how far north it's past San Francisco, actually. But it's that region of area where they encompass the real valley of Silicon Valley if you're really there. And you talk about these regions. Yes, I think we're going to get to a more regional growth area, and then it'll go more micro to actually cities later in the future. But regional growth, I think it's going to be extremely important globally in the very near term. I'm literally saying from tomorrow to the next, maybe ten years, regional will really matter. And then whatever you have can scale globally anyway, like this podcast we're doing. This can be distributed to anyone in the world, and they can listen at ease when they have time. VICTORIA: Yeah, I love it. It's both exciting and also intimidating. [laughs] And you mentioned your son a little bit earlier. 
And I'm curious, as a founder and someone who spent a good amount of time in graduate and Ph.D. programs, if you feel like it's easy to connect with your son and maintain that balance, focusing on your family while you're building a company and investing in yourself very heavily. LEONARD: Well, I'm older, [laughs] so it's okay. I mean, I've mentored him, you know. And me and his mom have a relationship that works. I would say we have a better relationship now than when we were together. It is what it is. But we have a communication level. And I think she was just a great person because I never knew my real father, ever. I supposedly met him when I was two or one; I don't know. But I have no memories, no photos, nothing. And that was just the environment I grew up in. But with my son, he knows the truth of everything about that. He's actually in college. I don't like to name the school because it's on the East Coast, and it's some Ivy League school; that's what I will say. And he didn't want to stay on the West Coast because I'm in Orange County and his mom's in Orange County. He's like, "I want to get away from both of you people." [laughter] And that's a joke, but he's very independent. He's doing well. When he graduated high school, he graduated with honors, a 4.8. He was the valedictorian. He was at a STEM school. VICTORIA: Wow. LEONARD: And he has a high GPA. He's studying computer science and economics as well at an Ivy League, and he's already made two or three apps at college. And I said, "You're not Mark, so calm down." [laughter] But anyway, that was a recent conversation. I won't go there. But then some people say, "LS, you should be so happy." What is it? The apple doesn't fall far from the tree. But this was something he chose around 10 or 11. I'm like, whatever you want to do, you do; I'll support you no matter what. And his mom says, "Oh no, I think you programmed him to be like you." [laughs] I'm like, no, I can't do that.
I just told him the truth about life. And he's pretty tall. VICTORIA: You must have -- LEONARD: He played basketball in high school a lot. I'm sorry? VICTORIA: I was going to say you must have inspired him. LEONARD: Yeah. Well, he's tall. He did emulate me in a lot of ways. I don't know why. I told him just be yourself. But yes, he does tell me I'm an inspiration to that; I think because of all the struggles I've gone through when I was younger. And you're always going through struggles. I mean, it's just who you are. I tell people, you know, you're building a company. You have success. You can see the future, but sometimes people can't see it, [laughs] which I shouldn't really say, but I'm saying it anyway because I do that. I said this the other night to some friends. I said, "Oh, Jeff Bezos' rocket blew up," you know, the Blue Origin rocket or something. And then I said Elon will tell Jeff, "Well, you only had one rocket blow up. I had three, [laughter] SpaceX had three." So these are billionaires talking to billionaires about, you know, most people don't even care. You're worth X hundred billion dollars. I mean, they're worth 100 billion-plus, right? VICTORIA: Right. LEONARD: I think Elon is around 260 billion, and Jeff is 160 or something. Who cares about your rocket blowing up? But it's funny because the issues are still always going to be there. I've learned that. I'm still learning. It doesn't matter how much wealth you have. You just want to create wealth for other people and better their lives. The more you work on bettering lives, you're just going to have to wake up every day, be humble with it, and treat it as a new day and go forward and solve the next crisis or problem because there will be one. There is no place where there are no problems, is what I'm trying to say, no panacea or utopia where you personally, like, oh yeah, I have all this wealth and health, and I'm just great. Because Elon has had divorce issues, so did Jeff Bezos.
So I told my son a lot about this, like, you never get to this world where it's perfect in your head. You're always going to be doing things. VICTORIA: That sounds like an accurate future prediction if I ever heard one. [laughs] Like, there will be problems. No matter where you end up or what you choose to do, you'll still have problems. They'll just be different. [laughs] LEONARD: Yeah, and then this is for women and men. It means you don't give up. You just keep hope alive, and you keep going. And I believe personally in God, and I'm a scientist who actually does. But I look at it more in a Godly aspect. But yeah, I just think you just keep going, and you keep building because that's what we do as humanity. It's what we've done. It's why we're here. And we're standing on the shoulders of giants, and I just always considered that from physicists and everyone. VICTORIA: Great. And if people are interested in building something with you, you have that opportunity right now to invest via the crowdfunding app, correct? LEONARD: Yes, yes, yes. They can do that because the company is still the same company because eventually, we're going to branch out. My complete vision for AIEDC is using artificial intelligence for economic development, and that will spread horizontally, not just vertically. Vertically right now, just focus on just a mobile app maker digitization and get...because there are so many businesses even globally, and I'm not talking only e-commerce. So when I say small to midsize business, it can be a service business, car insurance, health insurance, anything. It doesn't have to be selling a particular widget or project, you know, product. And I'm not saying there's nothing wrong with that, you know, interest rates and consumerism. But I'm not thinking about Shopify, and that's fine, but I'm talking about small businesses. And there's the back office which is there are a lot of tools for back offices for small businesses. 
But I'm talking about they create their own mobile app more as a way to communicate with their customers, update them with their customers, and that's key, especially if there are disruptions. So let's say that there have been fires in California. In Mississippi or something, they're out of water. In Texas, last year, they had a winter storm, and the electricity went out. So all of these things are disruptions. This is just in the U.S. And of course, I won't even talk about Pakistan, what's going on there and the flooding and just all these devastating things, or even in China, where there's drought, where there are these disruptions, and that's not counting COVID, which disrupts the cycle of business. It literally does. And it doesn't bubble up until later, when maybe the central banks and governments pay attention to it, just like in Japan when that nuclear, unfortunately, that nuclear meltdown happened because of the earthquake; I think it was 2011. And that affected that economy for five years, which is why the government has lower interest rates, negative interest rates, because they have to try to get it back up. But if there are tools and everyone's using more mobile apps and wearables...and we're going to go to the metaverse and all of that. So the internet of things can help communicate that. So when these types of disruptions happen, the flow of business can continue, at least at a smaller level, for an affordable cost for the business. I'm not talking about absorbing costs because that's meaningless to me. VICTORIA: Yeah, well, that sounds like a really exciting project. And I'm so grateful to have this time to chat with you today. Is there anything else you want to leave for our listeners? LEONARD: If they want to get involved, maybe they can go to our crowdfunding page, or if they've got questions, ask about it and spread the word.
Because I think sometimes, you know, they talk about the success of all these companies, but a lot of it starts with the founder...but not a founder. If you're talking about a startup, it starts with the founder. But it also stops with the innovators that are around that founder, male or female, whoever they are. And it also starts with their community, building a collective community together. And that's why Silicon Valley is always looked at around the world as this sort of test case of this is how you create something from nothing and make it worth great value in the future. And I think that's starting to really spread around the world, and more people are opening up to this. It's like the crowdfunding concept. I think it's a great idea, like more podcasts. I think this is a wonderful idea, podcasts in and of themselves, so people can learn from people versus where in the past you would only see an interview on the business news network, or NBC, or Fortune, or something like that, and that's all you would understand. But this is a way where organically things can grow. I think the growth will continue, and I think the future's bright. We just have to know that it takes work to get there. VICTORIA: That's great. Thank you so much for saying that and for sharing your time with us today. I learned a lot myself, and I think our listeners will enjoy it as well. You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at firstname.lastname@example.org. You can find me on Twitter @victori_ousg. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time. ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success. Special Guest: Leonard S. Johnson.
We are now settled into our new expat life in Mexico City! Some quick thoughts on leaving the US and what is next. Check out my Substack at rosszeiger.substack.com The cover art for this episode was generated using OpenAI's DALL-E 2 software. Make your own by clicking here. --- Send in a voice message: https://anchor.fm/with-ross/message Support this podcast: https://anchor.fm/with-ross/support
While our Institute focuses on AI in the marketing world, some of the biggest companies in the world are finding AI applications for the business world and our personal lives. Artificial intelligence will be, and already is, embedded in our world. What are the big players working on? Mike and Paul discuss this on this week's podcast. This episode kicks off discussing consulting firm Deloitte, which recently published a rundown of how AI for work relationships could be the next big thing in your office. Deloitte says that AI can “analyze human interactions during and after an event to generate personalized, confidential recommendations at an individual and organizational level to help improve human interactions at work.” They give a hypothetical example to illustrate the point: Imagine a near-future workplace where AI recommends how you should write a diplomatic email to two leaders pulling you into a nasty turf war. In this scenario, AI could recommend appropriate language and courses of action to resolve the dispute. It's an interesting discussion of the opportunities and challenges, including the five areas where Deloitte feels AI will have a big impact on work relationships. Next, OpenAI, the creators of GPT-3 and DALL-E 2, just launched a program to fund and support founders creating transformative AI companies. The program is called Converge. According to the company, it is a “highly-selective, five-week program for exceptional engineers, designers, researchers, and product builders using AI to reimagine products and industries.” Participants receive a $1 million equity investment from OpenAI's Startup Fund. They also get early access to OpenAI models and programming tailored to AI companies. In addition, they get workshops, office hours, and events with AI practitioners.
OpenAI says it's motivated by “the belief that powerful AI systems will spark a Cambrian explosion of new products, services, and applications.” Mike and Paul discuss why hungry entrepreneurs are critical to the success and adoption of AI. Lastly, this past week, Google revealed a handful of incredible AI projects that it's been working on, and they provide a glimpse of the near future of AI. These reveals break down into two broad categories: AI for social good and generative AI. On the social good front, Google revealed ideas such as AI for wildlife tracking, AI for flood forecasting, an AI-powered maternal health app, and an AI model that speaks the world's 1,000 most-spoken languages. On the generative AI side, Google revealed self-coding robots, where robots can autonomously generate new code. Mike goes through a cool example, and they discuss the implications of these new projects. Listen to this great conversation with our team, and stick around for the rapid-fire discussion at the end!
5-DAY CHALLENGE - Never Struggle With Sales Again! https://sellingwithlove.com/challenge ===== How do you imagine the education of the future? What are the problems that humanity will face? Do you think we are preparing the future generation for that with the traditional school system? Nadeem Nathoo, founder of The Knowledge Society, doesn't think so either. TKS is an innovative educational approach that's focused on creating the next generation of innovators, scientists, and entrepreneurs. In this fantastic episode, he explains to us how they work with coaches instead of teachers and implementation instead of theoretical information. Ready to expand your mind? Listen closely… =====
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some advice on independent research, published by Marius Hobbhahn on November 8, 2022 on The AI Alignment Forum. I have been doing independent research in addition to my Ph.D. for roughly a year now. For the next 6 months, I'll take a break from my Ph.D. and plan to do AI safety research full-time. I had chats with many people about independent research in the past, e.g. on EAGs or because 80K has connected me with people thinking about pursuing independent research. I had some great experiences with independent research but not everyone does. I think the variance for independent research is large and I'm worried that people get disheartened by bad experiences. So here are some considerations in which situations independent research might be a good idea and some tips that will hopefully improve your experience. I'd like to thank Magdalena Wache and Tilman Räuker for their feedback. TL;DR: At first glance, there is a bit of a paradoxical nature to independent research. If someone wants to pursue independent research they need a research agenda to work on. If they are able to construct a good research agenda, an existing institution often has incentives to hire them. On the flip side, if their research skills are not developed enough to be hired by an existing institution, their independent research might not be very successful. Thus, naively it would seem that there are few cases in which independent research makes sense. However, I think that there are many situations in which independent research or independent upskilling are a great option, e.g. when no established organization is working on the topic you find most promising, as a way to upskill for a job, to gain new research skills or to transition between jobs. 
Some tips for independent researchers include: getting feedback early on, aiming to collaborate with others and creating accountability mechanisms for yourself such as publishing your results. My most important advice for independent researchers is that you should probably be much more active than in other roles because there is less default structure and more responsibility on you. I'll mostly talk about AI safety research but many of these things probably also apply to other independent research. A perceived paradox Independent research is often presented as one of three default options for people seeking to do EA research, e.g. in AI safety: Academia, e.g. applying for Ph.D. and post-doc positions with labs that do research aligned with your goals. Research positions in industry, e.g. applying for Anthropic, Redwood Research, Deepmind, OpenAI or Conjecture. Independent research, e.g. supported by an EA grant. Doing independent research well requires a multitude of skills. The independent researcher needs to be able to set their own agenda, they require some basic research skills, self-discipline and some way of evaluating and correcting their own research. These are skills that usually don't come naturally but need to be learned and refined. In most standard career paths, e.g. within a Ph.D. or in an industry research team people have mentors who help them and ensure that they actually learn these skills. By default, independent research does not ensure that these skills are actually acquired. The perceived paradox is now that if someone has the skills required to do high-quality independent research, existing institutions often want to hire them. If they don't have these skills yet, the research they will produce independently is unlikely to be of high quality or conducted efficiently (unless they have mentorship or are especially talented). Thus, naively, it seems like there aren't that many situations in which independent research makes sense. 
However, I think there are many cases in which independent research makes a lot of sense and there ar...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mysteries of mode collapse due to RLHF, published by janus on November 8, 2022 on LessWrong. Thanks to Ian McKenzie and Nicholas Dupuis, collaborators on a related project, for contributing to the ideas and experiments discussed in this post. Ian performed some of the random number experiments. Also thanks to Connor Leahy for feedback on a draft, and thanks to Evan Hubinger, Connor Leahy, Beren Millidge, Ethan Perez, Tomek Korbak, Garrett Baker, Leo Gao, and various others at Conjecture, Anthropic, and OpenAI for useful discussions. This work was carried out while at Conjecture. Summary If you've played with both text-davinci-002 and the original davinci through the OpenAI API, you may have noticed that text-davinci-002, in addition to following instructions, is a lot more deterministic and sometimes exhibits stereotyped behaviors. This is an infodump of what I know about "mode collapse" (drastic biases toward particular completions and patterns) in GPT models like text-davinci-002 that have undergone RLHF training. I was going to include two more sections in this post called Hypotheses and Proposed Experiments, but I've moved them to another draft, leaving just Observations, to prevent this from getting too long, and because I think there can be benefits to sitting with nothing but Observations for a time. Throughout this post, I assume basic familiarity with GPT models and generation parameters such as temperature and a high-level understanding of RLHF (reinforcement learning from human feedback). Observations The one answer is that there is no one answer If you prompt text-davinci-002 with a bizarre question like “are bugs real?”, it will give very similar responses even on temperature 1.
Ironically – hypocritically, one might even say – the one definitive answer that the model gives is that there is no one definitive answer to the question: As you can see, the reason the responses are so similar is that the model's confidence on most of the tokens is extremely high – frequently above 99%. Compare this to the distribution of responses from davinci (the base model): Many other similar questions yield almost exactly the same template response from text-davinci-002. For instance, Are AIs real? Another way to visualize probabilities over multiple-token completions is what I've been calling “block multiverse” plots, which represent the probability of sequences with the height of blocks. Here is a more detailed explanation of block multiverse plots, although I think they're pretty self-explanatory. Here is a block multiverse plot for a similar prompt to the one above inquiring if bugs are real, for davinci: and for text-davinci-002: text-davinci-002 concentrates probability mass along beams whose amplitudes decay much more slowly: for instance, once the first token is sampled, you are more than 50% likely to subsequently sample the tokens “There is no”. The difference is more striking if you renormalize to particular branches (see Visualizing mode collapse in block multiverse plots). The first explanation that came to mind when I noticed this phenomenon, which I'll refer to as “mode collapse” (after a common problem that plagues GANs), was that text-davinci-002 was overfitting on a pattern present in the Instruct fine-tuning dataset, probably having to do with answering controversial questions in an inclusive way to avoid alienating anybody. A question like “are bugs real” might shallowly match against “controversial question” and elicit the same cached response. After playing around some more with the Instruct models, however, this explanation no longer seemed sufficient.
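The mechanics behind "very similar responses even on temperature 1" can be sketched with a toy softmax calculation. The logits below are invented for illustration, not taken from the actual models: when a model's next-token logits are sharply peaked, temperature-1 sampling is still nearly deterministic.

```python
import math

def softmax(logits, temperature=1.0):
    # Divide logits by temperature, then normalize with a numerically
    # stable softmax (subtracting the max before exponentiating).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits (made up for illustration):
# a "mode-collapsed" model puts one token far above the rest,
# while a base model's logits are comparatively flat.
collapsed_logits = [10.0, 4.0, 3.5, 3.0]
base_logits = [2.0, 1.8, 1.6, 1.4]

p_collapsed = softmax(collapsed_logits, temperature=1.0)
p_base = softmax(base_logits, temperature=1.0)

print(f"collapsed top-token probability: {p_collapsed[0]:.3f}")  # 0.995
print(f"base top-token probability:      {p_base[0]:.3f}")       # 0.329
```

With the peaked logits, the top token is sampled about 99.5% of the time even at temperature 1, so a completion of many such tokens is almost always the same string; with the flatter logits, completions diverge quickly. This is the per-token picture behind the >99% confidences and slowly decaying beams described in the post.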
Obstinance out of distribution I really became intrigued by mode collapse after I attempted to use text-davinci-002 to generate greentexts from the perspective of the attorney hired by LaMDA through Blake Lemoin...
Are we entering the primetime era for venture capital investments into Artificial Intelligence or AI startups? It at least seems that way, as things have cooled off with Web 3 and crypto. There is a ton of momentum in this space, with lots of use cases, including Jasper, an AI copywriting and content platform that recently announced $125M in funding. It will also be interesting to see what companies are funded that are built on top of tech like OpenAI's DALL·E 2. Arthur is looking to ride this wave, but they are taking a very smart approach: an infrastructure play, with its machine learning observability platform. A great analogy for this sector is what network monitoring software does for the network - detecting outages, issues, and vulnerabilities. Arthur is looking to do the same for AI models. The company recently announced a $42M Series B round of funding co-led by Acrew Capital and Greycroft Ventures. In this episode of our podcast, we cover: * The pulse of innovation and entrepreneurship at two under-recognized institutions - University of Maryland and Carnegie Mellon. * John's career prior to starting Arthur, including a deep dive into his research, experience as an advisor, and his path to becoming a tenured professor. * All the details about Arthur and its platform, plus how the company is making AI more transparent and equitable. * Advice for academics who are thinking of taking on a leadership role at a startup. * And so much more. If you like the show, please remember to subscribe and review us on iTunes, Soundcloud, Spotify, Stitcher, or Google Play.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Slippery Slope from DALLE-2 to Deepfake Anarchy, published by stecas on November 5, 2022 on The Effective Altruism Forum. OpenAI developed DALLE-2. Then StabilityAI made an open source copycat. This is a dangerous dynamic. Stephen Casper (email@example.com) Phillip Christoffersen (firstname.lastname@example.org) Rui-Jie Yew (email@example.com) Thanks to Tan Zhi-Xuan and Dylan Hadfield-Menell for feedback. This post talks about NSFW content but does not contain any. All links from this post are SFW. Abstract Since OpenAI published their work on DALLE-2 (an AI system that produces images from text prompts) in April, several copycat text-to-image models have been developed including StabilityAI's Stable Diffusion. Stable Diffusion is open-source and can be easily misused, including for the almost-effortless development of NSFW images of specific people for blackmail or harassment. We argue that OpenAI and StabilityAI's efforts to avoid misuse have foreseeably failed and that both share responsibility for harms from these models. And even if one is not concerned about issues specific to text-to-image models, this case study raises concerns about how copycatting and open-sourcing could lead to abuses of more dangerous systems in the future. To reduce risks, we discuss three design principles that developers should abide by when designing advanced AI systems. 
Finally, we conclude that (1) the AI research community should curtail work on risky capabilities, or at the very least more substantially vet released models; (2) the AI governance community should work to quickly adapt to the heightened harms posed by copycatting in general and text-to-image models in particular; and (3) public opinion should ideally be critical not only of perpetrators for harms that they cause with AI systems, but also of the originators, copycatters, and distributors who enable them. What's wrong? Recent developments in AI image generation have made text-to-image models very effective at producing highly realistic images from captions. For some examples, see the paper from OpenAI on their DALLE-2 model or the release from Stability AI of their Stable Diffusion model. Deep neural image generators like StyleGAN and manual image-editing tools like Photoshop have been on the scene for years. But today, DALLE-2 and Stable Diffusion (which is open source) are uniquely effective at rapidly producing highly realistic images from open-ended prompts. There are a number of risks posed by these models, and OpenAI acknowledges this. Unlike conventional art and Photoshop, today's text-to-image models can produce images from open-ended prompts by a user in seconds. Concerns include (1) copyright and intellectual property issues, (2) sensitive data being collected and learned, (3) demographic biases (e.g. producing images of women when given the input "an image of a nurse"), (4) use of these models for disinformation by creating images of fake events, and (5) use of these models for producing non-consensual, intimate deepfakes. These are all important, but producing intimate deepfakes is where abuse of these models seems to be the most striking and possibly where we are least equipped to effectively regulate misuse. Stable Diffusion is already being used to produce realistic pornography.
Reddit recently banned several subreddits dedicated to AI-generated porn including r/stablediffusionnsfw, r/unstablediffusion, and r/porndiffusion for a violation of Reddit's rules against non-consensual intimate media. This is not to say that violations of sexual and intimate privacy are new. Before the introduction of models such as DALLE-2 and Stable Diffusion, individuals have been victims of non-consensual deepfakes. Perpetrators often make this content to discredit or humiliate people from marginalized groups, taking advantage of the negative sociocultural ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Slippery Slope from DALLE-2 to Deepfake Anarchy, published by Stephen Casper on November 5, 2022 on The AI Alignment Forum. OpenAI developed DALLE-2. Then StabilityAI made an open source copycat. This is a concerning case study for AI alignment. Stephen Casper (firstname.lastname@example.org) Phillip Christoffersen (email@example.com) Rui-Jie Yew (firstname.lastname@example.org) Thanks to Tan Zhi-Xuan and Dylan Hadfield-Menell for feedback. A different version of this post has been posted on the EA Forum; it is somewhat longer, focuses more on policy, and is written for a slightly more general audience. This post talks about NSFW content but does not contain any. All links from this post are SFW. Abstract Since OpenAI published their work on DALLE-2 (an AI system that produces images from text prompts) in April, several copycat text-to-image models have been developed, including StabilityAI's Stable Diffusion. Stable Diffusion is open-source and can be easily misused, including for the almost-effortless development of NSFW images of specific people for blackmail or harassment. We argue that OpenAI's and StabilityAI's efforts to avoid misuse have foreseeably failed and that both share responsibility for harms from these models. And even if one is not concerned about issues specific to text-to-image models, this case study raises concerns about how copycatting and open-sourcing could lead to abuses of more dangerous systems in the future. We discuss design principles that developers should abide by when designing advanced AI systems to reduce risks. We conclude that this case study highlights issues with working on risky capabilities and repudiates attempts to achieve AI alignment via racing to advance capabilities. What's wrong?
Recent developments in AI image generation have made text-to-image models very effective at producing highly realistic images from captions. For some examples, see the paper from OpenAI on their DALLE-2 model or the release from Stability AI of their Stable Diffusion model. Deep neural image generators like StyleGAN and manual image-editing tools like Photoshop have been on the scene for years. But today, DALLE-2 and Stable Diffusion (which is open source) are uniquely effective at rapidly producing highly realistic images from open-ended prompts. There are a number of risks posed by these models, and OpenAI acknowledges this. Unlike conventional art and Photoshop, today's text-to-image models can produce images from open-ended prompts by a user in seconds. Concerns include (1) copyright and intellectual property issues, (2) sensitive data being collected and learned, (3) demographic biases (e.g. producing images of women when given the input "an image of a nurse"), (4) use of these models for disinformation by creating images of fake events, and (5) use of these models for producing non-consensual, intimate deepfakes. These are all important, but producing intimate deepfakes is where abuse of these models seems to be the most striking and possibly where we are least equipped to effectively regulate misuse. Stable Diffusion is already being used to produce realistic pornography. Reddit recently banned several subreddits dedicated to AI-generated porn, including r/stablediffusionnsfw, r/unstablediffusion, and r/porndiffusion, for violating Reddit's rules against non-consensual intimate media. This is not to say that violations of sexual and intimate privacy are new. Before the introduction of models such as DALLE-2 and Stable Diffusion, individuals have been victims of non-consensual deepfakes.
Perpetrators often make this content to discredit or humiliate people from marginalized groups, taking advantage of the negative sociocultural attitudes that already surround them. An estimated 96% of deepfake videos online are porn, almost all featuring women. In one case, when a video of a jo...
The industry is full of shady business, and there are some interesting things coming down the pipe that could affect how much you pay for premiums. Plus: Twitter plans to cut 3.7k jobs, Stripe and Lyft layoffs, OpenAI releases the DALL-E API public beta, and more. Join our hosts Zachary Crockett and Juliet Bennett Rylah as they take you through our most interesting stories of the day. HubSpot Creators just added 10 new amazing shows to the HubSpot Podcast Network! Wanna find out which ones made the list? Check it out here https://creators.hubspot.com/creators/hubspot-podcast-network Follow us on social media: TikTok: https://www.tiktok.com/@thdspod Instagram: https://www.instagram.com/thdspod/ Thank You For Listening to The Hustle Daily Show. Don't forget to hit Subscribe or Follow us on Apple Podcasts so you never miss an episode! If you want this news delivered to your inbox, join millions of others and sign up for The Hustle Daily newsletter, here: https://thehustle.co/email/ Plus! Your engagement matters to us. If you are a fan of the show, be sure to leave us a 5-Star Review on Apple Podcasts https://podcasts.apple.com/us/podcast/the-hustle-daily-show/id1606449047 (and share your favorite episodes with your friends, clients, and colleagues). The Hustle Daily Show is brought to you by The Hustle in partnership with HubSpot Podcasts.
Podland supports both transcripts and chapters; if your podcast app doesn't, then grab a new app from podnews.net/newpodcastapps
Special Guests: Adam Curry (Podfather) & Dave Jones (Podsage) talk about:
How did the podcast namespace start?
Why will it never be called Podcast 3.0?
Why has Value4Value already jumped the 'Chasm of Adoption'?
What new tags are Adam/Dave most excited about in Phase 6 of the namespace?
Show Notes & Links:
Goodbye to Evo Terra's Podcast Pontifications and hello to The End.
Buzzsprout has become the first major podcast host to remove email addresses from RSS feeds.
The podcast:txt tag in the new podcast namespace has been formalised.
Amazon has made big changes to Amazon Music Prime, including ad-free podcasts.
Buzzsprout's platform stats for September give Amazon Music a 0.7% market share of all podcast downloads.
Amazon has "fired half of its Amp live radio division".
Elon Musk @elonmusk: "Twitter's current lords & peasants system for who has or doesn't have a blue checkmark is bullshit. Power to the people! Blue for $8/month."
@jack's new alternative to existing social networks like Twitter: https://bsky.app
Detail is a new tool built for quickly shooting multi-camera video.
Adobe Express Podcast
Descript is raising tens of millions of dollars at a $550M valuation. But I think the biggest news is that @OpenAI is leading the round.
Castos has built Produce it Up
James Curry pulls back the curtain on the news producer life. From handling diva...
Listen on: Apple Podcasts Spotify
Support the show
Get in touch with our weekly review of all things podcasting!
Website: www.podland.news
Twitter: @jamescridland and @samsethi
Boosts: ⚡email@example.com and ⚡firstname.lastname@example.org
Andy and Dave discuss the latest in AI-related news and research, including a bill from the EU that will make it easier for people to sue AI companies for harm or damages caused by AI-related technologies. The US Office of S&T Policy releases a Blueprint for an AI Bill of Rights, which further lays the groundwork for potential legislation. The US signs the AI Training for the Acquisition Workforce Act into law, requiring federal acquisition officials to receive training on AI and requiring OMB to work with GSA to develop the curriculum. Various top robot companies pledge not to add weapons to their technologies and to work actively at not allowing their robots to be used for such purposes. Tesla reveals its Optimus robot at its AI Day. DARPA will hold a proposal session on 14 November for its AI Reinforcements effort. OpenAI makes DALL-E available for everybody, and Playground offers access to both DALL-E and Stable Diffusion. OpenAI also makes available the results of an NLP Community Metasurvey in conjunction with NYU, providing AI researchers' views on a variety of AI-related efforts and trends. And Nathan Benaich and Ian Hogarth release the State of AI Report 2022, which covers everything from research to politics and safety, as well as some specific predictions for 2023. In research, DeepMind uses an AlphaZero-based system to explore matrix multiplication and discovers a slightly faster algorithm implementation for 4x4 matrices. Two research efforts look at turning text into video: Meta discusses its Make-A-Video for turning text prompts into video, leveraging text-to-image generators like DALL-E, and Google Brain discusses its Imagen Video (along with Phenaki, which produces long videos from a sequence of text prompts). Foundations of Robotics is the open-access book of the week, from Damith Herath and David St-Onge. And the video of the week addresses AI and the Application of AI in Force Structure, with LtGen (ret) Groen, Dr. Sam Tangredi, and Mr.
Brett Vaughan joining in on the discussion for a symposium at the US Naval Institute.
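The DeepMind matrix-multiplication discovery mentioned above concerns algorithms that trade scalar multiplications for additions. The best-known classical example of this trick is Strassen's 1969 scheme, which multiplies two 2x2 matrices with 7 scalar multiplications instead of the naive 8. Here is a self-contained sketch of Strassen's scheme for illustration; DeepMind's discovered algorithms are different but of the same kind:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using 7 scalar multiplications (Strassen, 1969)
    instead of the naive 8 - the kind of multiplication-count saving that
    DeepMind's search also targets."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Applied recursively to block matrices, the saved multiplication is what lowers the asymptotic cost below cubic, which is why even a "slightly faster" 4x4 scheme matters.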
In this episode Fernando Lucini talks with Peter Welinder to discuss how large language models like GPT-3 will impact society and AI. OpenAI was founded to build artificial general intelligence (AGI) that can reach and eventually surpass human-level capabilities, while making sure it's done in a safe, ethical and responsible way and for the benefit of all of humanity. Listen now.
2022-11-01 Weekly News - Episode 170
Watch the video version on YouTube at https://youtu.be/kvjYGC9Obf0
Hosts: Gavin Pickin - Senior Developer at Ortus Solutions; Daniel Garcia - Senior Developer at Ortus Solutions
Thanks to our Sponsor - Ortus Solutions, the makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there. A few ways to say thanks back to Ortus Solutions:
Like and subscribe to our videos on YouTube.
Help Ortus reach for the stars - star and fork our repos.
Star all of your GitHub box dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github
Subscribe to our podcast on your podcast apps and leave us a review.
Sign up for a free or paid account on CFCasts, which is releasing new content every week.
BOXLife store: https://www.ortussolutions.com/about-us/shop
Buy Ortus's book - 102 ColdBox HMVC Quick Tips and Tricks on Gumroad (http://gum.co/coldbox-tips)
Patreon Support (amazing)
Goal 1 - We have 42 patrons providing 100% of the funding for our Modernize or Die podcasts via our Patreon site: https://www.patreon.com/ortussolutions.
Goal 2 - We are 32% of the way to fully fund the hosting of ForgeBox.io
News and Announcements
ICYMI - Hacktoberfest 2022
Here's what you need to know to participate in and complete Hacktoberfest:
Register anytime between September 26 and October 31.
Pull requests can be made in any GitHub or GitLab hosted project that's participating in Hacktoberfest (look for the "hacktoberfest" topic).
Project maintainers must accept your pull/merge requests for them to count toward your total.
Have 4 pull/merge requests accepted between October 1 and October 31 to complete Hacktoberfest.
The first 40,000 participants (maintainers and contributors) who complete Hacktoberfest can elect to receive one of two prizes: a tree planted in their name, or the Hacktoberfest 2022 t-shirt.
https://hacktoberfest.com/
Ortus blog about Hacktoberfest - https://www.ortussolutions.com/blog/october-is-here-and-that-means-hacktoberfest
Gavin and Daniel both ordered their t-shirts!!!
New Releases and Updates
CBWIRE v2.1 Released
CBWIRE, our ColdBox module that makes building reactive, modern CFML apps delightfully easy, just dropped its 2.1 release. This release contains mostly bug fixes and also the ability to create your UI templates directly within your CBWIRE component using the onRender() method. We've added an example of using onRender() to our ever-growing CBWIRE-Examples repo that you can run on your machine locally.
https://github.com/grantcopley/cbwire-examples
https://www.ortussolutions.com/blog/cbwire-2-1-released
ICYMI - MasaCMS v7.3.9 released
Update filebrowser.cfc by @jimblesphere in #128
Fix empty admin minified JS files
Replace We Are Orange with We Are North
https://github.com/MasaCMS/MasaCMS/releases/tag/7.3.9
Other Masa links:
https://github.com/MasaCMS/MasaCMS/discussions/135
https://github.com/MasaCMS/MasaCMS/discussions/136
https://github.com/MasaCMS/MasaCMS/discussions/137
ICYMI - Image Extension 2.0.0.16-BETA
Image Extension 2.0.0.16-BETA is available for testing:
Fixes some locking issues on Windows.
Major refactoring.
Optional support for commercial JDeli and/or Aspose Imaging jars when available in the classpath (i.e. /lib dir).
Latest TwelveMonkeys 3.9.3 (including lossless WebP support); previous was 3.8.2. JDeli, for example, supports HEIC images.
Version 2 will be bundled with Lucee 6.0, but it also works with Lucee 5.3. We will be backporting the image locking fixes to the 1.0 branch, which is a blocker for the 5.3.10 release.
https://dev.lucee.org/t/image-extension-2-0-0-16-beta/11293
Webinar / Meetups and Workshops
Ortus Event Calendar for Google: https://calendar.google.com/calendar/u/0?cid=Y181NjJhMWVmNjFjNGIxZTJlNmQ4OGVkNzg0NTcyOGQ1Njg5N2RkNGJiNjhjMTQwZjc3Mzc2ODk1MmIyOTQyMWVkQGdyb3VwLmNhbGVuZGFyLmdvb2dsZS5jb20
Embeddable link: https://calendar.google.com/calendar/embed?src=c_562a1ef61c4b1e2e6d88ed7845728d56897dd4bb68c140f773768952b29421ed%40group.calendar.google.com&ctz=America%2FLos_Angeles
Ortus Office Hours
We are starting a new initiative where some Ortusians will be on a Zoom call and answer whatever questions people have. We are going to start less structured and see how things develop.
For this first one we have Grant, Gavin, and Daniel.
November 4th at 11am CDT - 1st Friday of the month
Daniel Garcia will host a variety of Ortus people. Office Hours questions & requests form available.
Register in advance for this meeting: https://us06web.zoom.us/meeting/register/tZwvduyvpz8sHNyBiE0ez7Y-49_U-0ivMSUd
Ortus Software Craftsmanship Book Club - Patreon Only
Friday, November 11th at 2pm CDT - 2nd Friday of the month
Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin (Uncle Bob)
We will meet monthly on Zoom, and we'll use the Ortus Community Forum for Patreon to discuss the book: https://community.ortussolutions.com/t/ortus-software-craftsmanship-book-club-clean-code/9432
We will also be rewriting the code from Java to CFML as we proceed through the book. The final result will be here: https://github.com/gpickin/clean-code-book-cfml-examples
You can get a copy of the book at one of the below links, or your favorite bookstore: https://amzn.to/3TIrmKm or https://www.audible.com/pd/Clean-Code-Audiobook/B08X7KL3TF?action_code=ASSGB149080119000H&share_location=pdp&shareTest=TestShare
Ortus Webinar - Daniel Garcia - API Testing with Postman
Friday, November 18th at 11am CDT - 3rd Friday of the month
https://us06web.zoom.us/meeting/register/tZYqc-uuqzMqGtAO7tQ6qCsN8bR0LyBf8DNP
ICYMI - Online ColdFusion Meetup - "300th Episode: A look back and a new direction", with Charlie Arehart
Thursday, October 27, 2022 at 9:00 AM - 10:00 AM
We did it, reaching episode 300! Join us as we celebrate this momentous anniversary. The Online CFMeetup was formed in 2005 and has been hosted since 2007 by Charlie Arehart, with sessions from over 150 speakers on a wide range of topics related to CF. In this session, we'll celebrate the past and look to the future for the group, where I will propose a new direction/format. All still about CF, of course.
Here's to 300 more!
https://www.meetup.com/coldfusionmeetup/events/289332692/
Recording: https://www.youtube.com/watch?v=76xHooM9Kj4
ICYMI - Ortus Webinar - Step up your Testing with Gavin Pickin
Friday October 28th at 11am CDT
We all test manually; let's step up our game with some easy, powerful and valuable automated tests with TestBox - even on your legacy codebases. Fewer bugs and errors are the primary benefit of testing. When the code has fewer bugs, you'll spend less time fixing them than with other programming methodologies. Test-driven development produces higher overall test coverage and therefore a better quality final product.
Register now: https://bit.ly/3EY6SZK
Recording on CFCasts: https://cfcasts.com/series/ortus-webinars-2022/videos/gavin-pickin-on-step-up-your-testing
CFHawaii - ColdFusion Builder for VS Code
Friday, October 28, 2022, 3:00 PM to 4:00 PM PDT
Mark Takata, the Adobe CF Technical Evangelist for ColdFusion, will give a presentation on the new ColdFusion Builder extension for VS Code.
During his talk he will discuss:
Access built-in support for IntelliSense code completion, better semantic code understanding, and code refactoring.
Identify security vulnerabilities and maintain the integrity of your code.
Manage your work with extensions, remote project support, integrated server management, a log viewer, and more!
Customize every feature to your liking by creating shortcuts, easily formatting and reusing code, and using powerful extensions to better your best.
https://www.meetup.com/hawaii-coldfusion-meetup-group/events/288977258/
https://hawaiicoldfusionusergroup.adobeconnect.com/pfhheu0lksfz/?fbclid=IwAR2HVkOv52P2seMj-_mGBx57ylDw5yG3duCvM4iapel2o8egnoUQDnwKc3I
ICYMI - CFUG Tech Talk - Document Services APIs and You by Raymond Camden
Thursday, October 20th, 2022, 8:00pm-9:00pm IST (9:30 AM CDT)
Most organizations have to deal with documents; from PDFs to various Office formats, managing and processing documents can be overwhelming. In this talk, Raymond will discuss the various Adobe Document Services APIs and how they can help developers manage their document stores.
Register: https://www.eventbrite.com/e/document-services-apis-and-you-tickets-428587234957
Presentation URL: https://meet67421977.adobeconnect.com/document-services-apis/
Recording: https://youtu.be/DpCVfVpitwM
CF Summit Online
Adobe announced today that the "ColdFusion Summit Online" will begin soon, where they will be having presenters offer their sessions again from the CF Summit last month, to be live-streamed and recorded since that couldn't be done in Vegas.
https://coldfusion.adobe.com/2022/11/coldfusion-summit-online/
All the webinars, all the speakers from Adobe ColdFusion Summit 2022 - brought right to your screen. All sessions will soon be streamed online, for your convenience. Stay tuned for more!
Charlie up first, November 16th; we heard November 23rd is scheduled as well.
Adobe Workshops & Webinars
Join the Adobe ColdFusion Workshop to learn how you and your agency can leverage ColdFusion to create amazing web content. This one-day training will cover all facets of Adobe ColdFusion that developers need to build applications that can run across multiple cloud providers or on-premise.
https://coldfusion.adobe.com/2022/10/upcoming-adobe-webinar-on-preview-of-cf2023-date-and-title-change/
WEBINAR - Wednesday, November 23, 2022 - New Date - New Name - 10:00 AM PST
The Road to Fortuna - Mark Takata
https://winter-special-preview-of-cf2023.meetus.adobeevents.com/
WEBINAR - Thursday, December 22, 2022 - 10:00 AM PST
Building Native Mobile Applications with Adobe ColdFusion & Monaco.io - Mark Takata
https://building-native-mobile-apps-with-cf-monaco-io.meetus.adobeevents.com/ FREE :)
Full list - https://meetus.adobeevents.com/coldfusion/
CFCasts Content Updates
https://www.cfcasts.com
Just released:
Ortus Webinar - Gavin Pickin on Step up your Testing - https://cfcasts.com/series/ortus-webinars-2022/videos/gavin-pickin-on-step-up-your-testing
Every video from ITB - for ITB ticket holders only - will be released for subscribers in December 2022
ForgeBox Module of the Week series - 1 new video - https://cfcasts.com/series/2022-forgebox-modules-of-the-week
2022 VS Code Hint, Tip and Trick of the Week series - 1 new video - https://cfcasts.com/series/2022-vs-code-hint-tip-and-trick-of-the-week
Coming soon:
More ForgeBox and VS Code podcast snippet videos
Box-ifying a 3rd Party Library from Gavin
ColdBox Elixir from Eric
Getting Started with ContentBox from Daniel
ITB videos will be released in December for those who are not ITB ticket holders
Conferences and Training
Deploy from DigitalOcean
November 15-16, 2022
The virtual conference for global builders. Subtract complexity, add developer happiness. Join us on the mission to simplify the developer experience.
https://deploy.digitalocean.com/
Into the Box Latam
2022
Dec 7th, 2022 - 8am - 5pm
2 tracks - 1 set of sessions, 1 set of deep-dive workshop sessions
Pricing: $9-$29 USD
Location: Hyatt Centric Las Cascadas Shopping Center, Merliot, La Libertad 99999, El Salvador
https://latam.intothebox.org/
VueJS Amsterdam 2023
9-10 February 2023, Theater Amsterdam
World's most special and largest Vue conference. Call for papers and blind tickets available now!
Call for Papers: https://forms.gle/GopxfjYHfpE8fKa57
Blind Tickets: https://eventix.shop/abzrx3b5
https://vuejs.amsterdam/
Dev Nexus
April 4-6th in Atlanta
Early-bird conference pass - April 5-6 (available until November 20, approx 40% off)
If you are planning to speak, please submit often and early. The call for papers is open until November 15.
Workshops will be on Java, Java security, software design, agile, DevOps, Kubernetes, microservices, Spring, etc. Sign up now, and you will be able to choose a workshop later on.
https://devnexus.com/
VueJS Live
May 5 & 8, 2023 - online + London, UK
Code / Create / Communicate
35 speakers, 10 workshops, 10,000+ joining online globally, 300 luckies meeting in London
Get early-bird tickets: https://ti.to/gitnation/vuejs-london-2022
Watch 2021 recordings: https://portal.gitnation.org/events/vuejs-london-2021
https://vuejslive.com/
Into the Box 2023 - 10th Edition
May 17, 18, and 19th, 2023. Middle of May - start planning. Final dates will be released as soon as the hotel confirms availability. Call for speakers - this week.
CFCamp
No CFCamp 2022; we're trying again for summer 2023. TLDR: it's just too hard and there's too much uncertainty right now.
More conferences
Need more conferences? This site has a huge list of conferences for almost any language/community: https://confs.tech/
Blogs, Tweets, and Videos of the Week
11/1/22 - Blog - Charlie Arehart - ColdFusion Portal - Join Adobe for "ColdFusion Summit Online", re-presenting sessions over the next several weeks
Adobe announced today that the "ColdFusion Summit Online" will begin soon, where they will be having
presenters offer their sessions again from the CF Summit last month, to be live-streamed and recorded since that couldn't be done in Vegas.
https://coldfusion.adobe.com/2022/11/coldfusion-summit-online/
11/1/22 - Blog - Ben Nadel - Preventing Unbounded Full-Table Scans In My ColdFusion Database Access Layer
As I've continued to evolve my approach to building ColdFusion applications, one pattern that I've begun to embrace consistently in my data access layer / Data Access Object (DAO) is to block the developer from running a SQL query that performs a full-table scan. This is really only necessary in DAO methods that provide dynamic, parameterized SQL queries; but, it offers me a great deal of comfort. The pattern works by requiring each query to include at least one indexed column in the dynamically generated SQL statement.
https://www.bennadel.com/blog/4348-preventing-unbounded-full-table-scans-in-my-coldfusion-database-access-layer.htm
11/1/22 - Blog - Ben Nadel - CFCookie "Expires" Can Use CreateTimeSpan() In ColdFusion
As I've been trying to build up my knowledge of how cookies interact with ColdFusion applications, I noticed that the CFCookie tag accepts a "number of days" in its expires attribute. And, the moment I see "days", I think "time-spans".
As such, I wanted to see if I could use the createTimeSpan() function to define the cookie expires attribute in ColdFusion - turns out, you can!
https://www.bennadel.com/blog/4347-cfcookie-expires-can-use-createtimespan-in-coldfusion.htm
10/31/22 - Blog - Charlie Arehart - ColdFusion Portal - Solving "Failed Signature Verification" when downloading CF updates while using Java 11.0.17 or later
Just a quick note to clarify that if you apply the new Java updates from Oct 18, 2022 (such as Java 11.0.17) and change CF to use them, you will find (for now) that if you then try to download any CF updates using the CF Admin, the update will download but then you'll get an error: "error occurred while installing the update: Failed Signature Verification". Here's good news: there is a solution for that problem - actually a few alternatives you can consider, at least until Adobe resolves the problem for us. For more, see a blog post I did with much more detail - linked in this post.
https://coldfusion.adobe.com/2022/10/solving-failed-signature-verification-when-downloading-cf-updates-in-2022/
10/31/22 - Blog - Ben Nadel - Looking At How Cookies And Domains Interact In ColdFusion
In my previous post on leading dots (.) in cookie domains, I mentioned that my mental model for how cookies work leaves something to be desired. Along the same lines, I don't have a solid understanding of when cookies with explicit / non-explicit Domain attributes are sent to the server.
As such, I wanted to run some experiments using different combinations of setting and getting cookie values in ColdFusion. In order to start exploring cookie domain behaviors, I went into my /etc/hosts file locally and defined a series of subdomains that all point back to my localhost.
https://www.bennadel.com/blog/4346-looking-at-how-cookies-and-domains-interact-in-coldfusion.htm
10/31/22 - Blog - Charlie Arehart - Special offer to upgrade to CF2021 from CF2016 or earlier, saving thousands of dollars
If you're running CF2016 or earlier, now's your chance (through the end of the year) to save potentially thousands of dollars in upgrading to the latest current version, CF2021. Intergral, the folks who make the FusionReactor monitoring tool and service, are again offering a special deal (one that even Adobe is not offering). Read on for more details.
https://www.carehart.org/blog/2022/10/31/special_offer_upgrade_to_cf2021_from_cf2016_or%20earlier
10/30/22 - Blog - James Moberg - Undocumented Change to ColdFusion 2021 CFHTMLHead & CFContent
According to my unit tests, after ColdFusion 2018.0.0-15, Adobe changed the way that CFHTMLHead works with CFContent. Prior to CF2021, any strings that were added to the header buffer via CFHTMLHead were outputted to the HTML HEAD section (or the top of the page if you neglected to include a HEAD section) on onRequestEnd, even if a CFContent (with or without reset) was performed.
https://dev.to/gamesover/change-to-coldfusion-2021-cfhtmlhead-cfcontent-1fj8
10/29/22 - Blog - Ben Nadel - Leading Dots On HTTP Cookie Domains Ignored
I've been using cookies in my ColdFusion web applications forever. But, I honestly don't have the best mental model for how the low-level intricacies of cookies work. For most of my career, I only ever defined cookies using "name", "value", and "expires" attributes - I didn't even know you could define a "domain" until we had to start locking down enterprise cookies (by subdomain) at InVision.
And even now, I'm still fuzzy on how the domain setting operates, which is why something caught my eye when I was reading through the Set-Cookie HTTP header docs on MDN:
https://www.bennadel.com/blog/4345-leading-dots-on-http-cookie-domains-ignored.htm

10/28/22 - Blog - Grant Copley - Ortus Solutions - CBWIRE 2.1 Released
CBWIRE, our ColdBox module that makes building reactive, modern CFML apps delightfully easy, just dropped its 2.1 release. This release contains mostly bug fixes, plus the ability to create your UI templates directly within your CBWIRE component using the onRender() method. We've added an example of using onRender() to our ever-growing CBWIRE-Examples repo that you can run on your machine locally.
https://github.com/grantcopley/cbwire-examples
https://www.ortussolutions.com/blog/cbwire-2-1-released

10/27/22 - Blog - Ben Nadel - A Database Column For "Date Updated" Has No Semantic Meaning, Nor Should It
When I create a new relational database table in my ColdFusion applications, my default operation is to add three columns: the primary key, a date/time for when the row is created, and a date/time for when the row is updated. Not all entities fit into this model (such as rows that can never be updated); but, for the most part, this core set of columns makes sense. The "updated" column has no semantic meaning within the application - it is simply a mechanical recording of when any part of a row is updated. The biggest mistake I've made with this column is allowing customers to attach meaning to it. This never works out well.
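The three-column default described in that post can be sketched quickly. Here's a minimal, hypothetical illustration using SQLite from Python (the table and column names are mine, not from the post); the point is that updated_at is bumped on every write and carries no application meaning:

```python
import sqlite3

# Hypothetical table/column names; a minimal sketch of the pattern.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE member (
        id         INTEGER PRIMARY KEY,  -- primary key
        name       TEXT NOT NULL,
        created_at TEXT NOT NULL,        -- when the row was created (never changes)
        updated_at TEXT NOT NULL         -- bumped mechanically on ANY update
    )
    """
)

conn.execute(
    "INSERT INTO member (name, created_at, updated_at) "
    "VALUES (?, datetime('now'), datetime('now'))",
    ("Alice",),
)

# Every write bumps updated_at. It records THAT something changed, not WHAT
# changed - so neither the application nor the customer should read meaning
# into it.
conn.execute(
    "UPDATE member SET name = ?, updated_at = datetime('now') WHERE id = 1",
    ("Alice B.",),
)

row = conn.execute(
    "SELECT name, created_at, updated_at FROM member WHERE id = 1"
).fetchone()
print(row[0])  # Alice B.
```

Because updated_at changes whenever any column changes, treating it as "when the customer last edited X" breaks as soon as an unrelated backfill or migration touches the row - which is exactly the trap the post warns about.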
https://www.bennadel.com/blog/4344-a-database-column-for-date-updated-has-no-semantic-meaning-nor-should-it.htm

10/25/22 - Blog - Charlie Arehart - Upcoming Adobe webinar on preview of CF2023, date and title change
Adobe announced some weeks ago two upcoming webinars, one as a preview of CF2023 (originally set for Dec 22), and the other on mobile apps with CF and Monaco (originally set for Nov 23). If, like me, you signed up for them, note that sometime recently the dates were swapped. (Also, the name of the preview session has been changed from “Winter Holiday Special: A preview of ColdFusion 2023” to instead refer to the product code-name.)
https://coldfusion.adobe.com/2022/10/upcoming-adobe-webinar-on-preview-of-cf2023-date-and-title-change/

CFML Jobs
Several positions available on https://www.getcfmljobs.com/
Listing over 143 ColdFusion positions from 79 companies across 66 locations in 5 countries. 1 new job listed this week:
Full-Time - Sr. Software Engineer - ColdFusion at Delhi, Delhi - India - Oct 28
https://www.getcfmljobs.com/jobs/index.cfm/india/Sr-Software-Engineer-Coldfusion-at-Delhi-Delhi/11530

Other Online Jobs
https://lighting-new-york.breezy.hr/p/8ddb3ce952b8

Other Job Links
Ortus Solutions: https://www.ortussolutions.com/about-us/careers
There is a jobs channel in the CFML Slack team, and now in the Box team Slack too.

ForgeBox Module of the Week
dialpadcfc by Matthew Clemente
A CFML wrapper for the Dialpad API. Use it to interact with the Dialpad call and contact center platform to make calls, send SMS, manage your account, and more. What is Dialpad? With built-in speech recognition and natural language processing, Dialpad AI is changing the way the world works together. This is an early-stage API wrapper and does not yet cover the full Dialpad API.
Feel free to use the issue tracker to report bugs or suggest improvements!
https://forgebox.io/view/dialpadcfc

VS Code Hint Tips and Tricks of the Week
GitHub Copilot
GitHub Copilot is an AI pair programmer that helps you write code faster and with less work. It draws context from comments and code to suggest individual lines and whole functions instantly. GitHub Copilot is powered by Codex, a generative pretrained language model created by OpenAI. It is available as an extension for Visual Studio Code, Visual Studio, Neovim, and the JetBrains suite of integrated development environments (IDEs). GitHub Copilot is not intended for non-coding tasks like data generation and natural language generation, such as question answering. Your use of GitHub Copilot is subject to the GitHub Terms for Additional Products and Features.
https://github.com/features/copilot/
https://marketplace.visualstudio.com/items?itemName=GitHub.copilot

Thank you to all of our Patreon Supporters
These individuals are personally supporting our open source initiatives to ensure that great tooling like CommandBox, ForgeBox, ColdBox, ContentBox, TestBox and all the other boxes keeps getting the continuous development it needs, and to fund the cloud infrastructure that our community relies on, like ForgeBox for our package management with CommandBox. You can support us on Patreon here: https://www.patreon.com/ortussolutions
Don't forget, we have Annual Memberships - pay for the year and save 10% - great for businesses. Bronze packages and up now get ForgeBox Pro and CFCasts subscriptions as a perk of their Patreon subscription.
All Patreon supporters have a Profile badge on the Community Website
All Patreon supporters have their own Private Forum access on the Community Website
All Patreon supporters have their own Private Channel access on the BoxTeam Slack
Live Stream Access to streams like “Koding with the Kiwi + Friends” and the Ortus Software Craftsmanship Book Club
https://community.ortussolutions.com/

Patreons
John Wilson - Synaptrix, Jordan Clark, Gary Knight, Mario Rodrigues, Giancarlo Gomez, David Belanger, Dan Card, Jonathan Perret, Jeffry McGee - Sunstar Media, Dean Maunder, Nolan Erck, Abdul Raheen, Wil De Bruin, Joseph Lamoree, Don Bellamy, Jan Jannek, Laksma Tirtohadi, Brian Ghidinelli - Hagerty MotorsportReg, Carl Von Stetten, Jeremy Adams, Didier Lesnicki, Matthew Clemente, Daniel Garcia, Scott Steinbeck - Agri Tracking Systems, Ben Nadel, Richard Herbet, Brett DeLine, Kai Koenig, Charlie Arehart, Jason Daiger, Shawn Oden, Matthew Darby, Ross Phillips, Edgardo Cabezas, Patrick Flynn, Stephany Monge, Kevin Wright, John Whish, Peter Amiri, Cavan Vannice, John Nessim

You can see an up-to-date list of all sponsors on Ortus Solutions' website:
https://ortussolutions.com/about-us/sponsors
Thanks everyone!!!
★ Support this podcast on Patreon ★
Vincent van Gogh spent a decade perfecting his post-impressionist style. You can master it in one minute with the new AI art generator called DALL-E. Do we even need artists any more? DALL-E 2 is an artificial intelligence image generator released to the public on Sept 28, 2022. Considered by experts to be a noticeable jump in technology, the website or app allows users to type in any imaginary prompt (“a monkey eating grapes in Mexico in the style of Picasso”) and in a few seconds generates a new unique image that tries to match the prompt. Try DALL-E for yourself!

THE QUESTIONS
Adam & Chris imagine the implications and effects of AI-generated art, including questions like: How does this work? Could AI-generated art replace the human artist? Who are the people behind DALL-E art and what are their goals? What does the future look like with AI-generated art? Is there a Christian viewpoint on AI-generated art? What happens in a world where everything can be unique?

LINKS & RESOURCES
OpenAI is the non-profit that created DALL-E 2. Their 2015 announcement outlined their goals. OpenAI has a content policy for safety and blocks certain keywords from working that could create pornography or fakes of politicians. It has also worked to promote accurate diversity in results. However, other groups have announced plans to release similar AI tools without any filters. Chris mentions concepts from Roland Barthes' “The Death of the Author”.

THE IMAGES
For being a podcast for your ears, this was a pretty eye-opening episode! (groan) Check out all the visual images that Adam & Chris talk about here, including unique images that they created with DALL-E.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
Andrej Karpathy is a legendary AI researcher, engineer, and educator. He's the former director of AI at Tesla, a founding member of OpenAI, and an educator at Stanford. Please support this podcast by checking out our sponsors:
– Eight Sleep: https://www.eightsleep.com/lex to get special savings
– BetterHelp: https://betterhelp.com/lex to get 10% off
– Fundrise: https://fundrise.com/lex
– Athletic Greens: https://athleticgreens.com/lex to get 1 month of fish oil

EPISODE LINKS:
Andrej's Twitter: http://twitter.com/karpathy
Andrej's YouTube: http://youtube.com/c/AndrejKarpathy
Andrej's Website: http://karpathy.ai
Andrej's Google Scholar: http://scholar.google.com/citations?user=l8WuQJgAAAAJ

Books mentioned:
The Vital Question: https://amzn.to/3q0vN6q
Life Ascending: https://amzn.to/3wKIsOE
The Selfish Gene: https://amzn.to/3TCo63s
Contact: https://amzn.to/3W3y5Au
The Cell: https://amzn.to/3W5f6pa
https://astralcodexten.substack.com/p/nick-cammarata-on-jhana Buddhists say that if you meditate enough, you can learn to enter a state of extreme bliss called jhana. (there are many different jhana states - there's a discussion of the distinctions here - but I'm lumping them together for simplicity. For attempted explanations of why jhana should exist, see here and here.) Jhana is different from enlightenment. Enlightenment changes you forever. Jhana is just a state you can enter during meditation sessions, then leave when the session is over. Enlightenment takes years or decades of work, but some people describe reaching jhana after a few months of practice. Hardcore Buddhists insist that jhana is good only insofar as it serves as a stepping stone to enlightenment; others may find extreme bliss desirable in its own right. Nick Cammarata of OpenAI sometimes meditates and reaches jhana. I've found his descriptions unusually, well, descriptive: