OpenAI has released research in collaboration with the MIT Media Lab that explores the emotional impact of using ChatGPT. The study indicates that while over 400 million people engage with ChatGPT weekly, only a small number form emotional connections with the chatbot, which is primarily marketed as a productivity tool. Notably, female users reported a decrease in socialization after four weeks of use, and those who interacted with the chatbot in a voice different from their own experienced heightened feelings of loneliness. The findings suggest that users who bond with ChatGPT may face increased loneliness and emotional dependency, prompting OpenAI to submit these studies for peer review.

In the competitive landscape of generative artificial intelligence, OpenAI is reportedly facing significant financial challenges, with annual operating costs estimated between $7 billion and $8 billion. AI scholar Kai-Fu Lee points out that as foundational models become more commoditized, OpenAI may struggle to compete with cheaper alternatives like DeepSeek, which operates at just 2% of OpenAI's costs. Lee emphasizes that the economics of the AI industry are shifting toward open-source models, which are cheaper to produce and operate, suggesting that while OpenAI is not on the brink of collapse, the market may soon be dominated by a few key players.

The podcast also discusses the evolving capabilities of AI models, highlighting the latest version of ChatGPT, which can now blend text and image generation and respond to voice commands. Additionally, DeepSeek has upgraded its AI model, showing improved performance in coding and reasoning tasks, while Google has introduced its Gemini 2.5 Pro model, which boasts enhanced reasoning capabilities and a large token context window. These advancements indicate a trend where AI models are becoming more versatile and capable of handling complex tasks, emphasizing the importance of deployment flexibility and cost efficiency in the evolving AI landscape.

Finally, the episode addresses ongoing privacy concerns surrounding AI technologies, including a new complaint against OpenAI in Europe for generating false information and a settlement reached by Clearview AI regarding privacy violations. The discussion highlights the legal implications of using generative AI tools, particularly in relation to GDPR compliance. Additionally, the podcast examines the lack of diversity in IT leadership, revealing that despite efforts toward diversity, equity, and inclusion, the demographic makeup of IT leadership remains largely unchanged, underscoring the need for continued focus on inclusive leadership in the tech industry.

Four things to know today:
00:00 Talking to ChatGPT Might Hurt Your Mood—And OpenAI's Bottom Line
04:15 Who's Winning the AI Arms Race? Depends If You Want Comics, Code, or Context
08:10 From Encrypted Chats to AI Slip-Ups—More in “What Could Possibly Go Wrong?”
12:00 All Talk, No Change? IT Leadership Still Looks the Same in 2025

Supported by: https://syncromsp.com/
Event: https://www.nerdiocon.com/
All our Sponsors: https://businessof.tech/sponsors/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories?
The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/

Support the show on Patreon: https://patreon.com/mspradio/

Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech

Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech
Listen to the full audiobook AI 2041: Ten Visions for Our Future on the Fonos app: https://fonos.link/podcast-tvsn

About Fonos: Fonos is a digital audio app with more than 13,000 titles, including licensed audiobooks, PodCourses, podcasts, ebooks, book summaries, guided meditations, sleep stories, themed music, and children's stories. You can listen to chapter one of every audiobook on Fonos for free. Download the app to try it now!

Long before ChatGPT arrived, Kai-Fu Lee and Chen Qiufan understood the enormous potential of artificial intelligence to transform our everyday lives. But even as the world wakes up to the power of AI, many of us still miss the bigger picture. Chatbots and large language models are only the beginning. In this inspired collaboration, Kai-Fu Lee and Chen Qiufan join forces to imagine our world in 2041 and how AI will shape it. The ten visions of 2041 are ten compelling stories told through Chen Qiufan's science fiction, each followed by Kai-Fu Lee's in-depth technical analysis. The stories cover every facet of AI, ordered from foundational to advanced technologies. They offer a well-grounded picture of the world twenty years from now, in which AI may well be present all around us. Yet this is not merely a warning about existential risks to humanity; it is also a preparation for a future full of challenges, opportunities, and promise, all of which depend on the actions and decisions each of us takes today.

Learn more about Fonos: https://fonos.vn/
Follow Fonos on Facebook: https://www.facebook.com/fonosvietnam/
Bio

Bala has rich experience in retail technology and process transformation. Most recently, he worked as a Principal Architect for Intelligent Automation, Innovation & Supply Chain in a global Fortune 100 retail corporation. Currently he works for a luxury brand as Principal Architect for Intelligent Automation, providing technology advice on the responsible use of technology (Low Code, RPA, Chatbots, and AI). He is passionate about technology and spends his free time reading, writing technical blogs, and co-chairing a special interest group with The OR Society.

Interview Highlights

02:00 Mentors and peers
04:00 Community bus
07:10 Defining AI
08:20 Contextual awareness
11:45 GenAI
14:30 The human loop
17:30 Natural Language Processing
20:45 Sentiment analysis
24:00 Implementing AI solutions
26:30 Ethics and AI
27:30 Biased algorithms
32:00 EU AI Act
33:00 Responsible use of technology

Connect

Bala Madhusoodhanan on LinkedIn

Books and references

· https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html - NLP
· https://www.theregister.com/2021/05/27/clearview_europe/ - facial recognition technology issues
· https://www.designnews.com/electronics-test/apple-card-most-high-profile-case-ai-bias-yet - Apple Card story
· https://www.ft.com/content/2d6fc319-2165-42fb-8de1-0edf1d765be3 - data centre growth
· https://www.technologyreview.com/2024/02/06/1087793/what-babies-can-teach-ai/ - what babies can teach AI
· Independent Audit of AI Systems
· The Alan Turing Institute
· Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World, Marco Iansiti & Karim R. Lakhani
· AI Superpowers: China, Silicon Valley, and the New World Order, Kai-Fu Lee
· The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You, Mike Walsh
· Human + Machine: Reimagining Work in the Age of AI, Paul R. Daugherty & H. James Wilson
· Superintelligence: Paths, Dangers, Strategies, Nick Bostrom
· The Alignment Problem: Machine Learning and Human Values, Brian Christian
· Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, Reid Blackman
· Wanted: Human-AI Translators: Artificial Intelligence Demystified, Geertrui Mieke De Ketelaere
· The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality, and Our Destiny Beyond, Michio Kaku, Feodor Chin et al

Episode Transcript

Intro: Hello and welcome to the Agile Innovation Leaders podcast. I'm Ula Ojiaku. On this podcast I speak with world-class leaders and doers about themselves and a variety of topics spanning Agile, Lean Innovation, Business, Leadership and much more, with actionable takeaways for you the listener.

Ula Ojiaku
So I have with me here Bala Madhusoodhanan, who is a principal architect with a global luxury brand, and he looks after their RPA and AI transformation. It's a pleasure to have you on the Agile Innovation Leaders podcast, Bala. Thank you for making the time.

Bala Madhusoodhanan
It's a pleasure to have a conversation with the podcast and the podcast audience, Ula. I follow the podcast and there have been fantastic speakers in the past, so I feel privileged to join you for this conversation.

Ula Ojiaku
Well, the privilege is mine. Could you start off by telling us about yourself, Bala? What have been the key points or highlights of your life that have led to you being the Bala we know now?

Bala Madhusoodhanan
It's putting myself into uncharted territory.
So my background is mechanical engineering, and when I got my first job, the choice was either to go into the mechanical engineering manufacturing side or the software side, which was slightly booming at that point in time and obviously paying more, so I decided to take the software route, but eventually the paths overlapped. From a mainframe background I started working on supply chain, and then came back to optimisation, tied back to the manufacturing industry, so somewhere there is an overlap. That was the first decision that probably got me here. The second decision was to work in a UK geography rather than a US geography, which again seemed very strange to a lot of my peers; they generally go to Silicon Valley or the East Coast, but I chose to stay here for personal reasons. And the third was the mindset. Over the last 15, 20 years I have had really good mentors and really good peers, so I always had their help to soundboard my crazy ideas, and I always try to keep those relationships ongoing.

Ula Ojiaku
What I'm hearing, based on what you said, is that lots of relationships have been key to getting you to where you are today, both mentors and peers. Could you expand on that? In what way?

Bala Madhusoodhanan
The technology has changed quite a lot, at least in the last 10 years. If you look at pre-2010, there was no machine learning as such; it was statistics. People were just saying everything is statistics, and access to information was limited. But post 2010, 2011, people started getting access. Then there was a data buzz, big data came in, so there were a lot of opportunities where I could have taken a different career path. But every time I was in a dilemma about which route to take, I had someone with whom I had worked, or who was my team lead or manager, to guide me: to tell me to take the emotion out of the decision-making and think with a calm mind, because you might jump into something and you might like it or you might not, and you should not regret it. Over the course of so many such decisions, my own cognitive mind has started thinking that way too, so those conversations really help. And again, it is collective experience. If you look at the decision-making, it's not just my decision; I'm drawing on conversations with people who applied their own experience, so it's not just me or just one situation, and understanding the why behind their advice actually helps. In short, it's a collection of conversations that I had with peers. A few of them are visionary leaders and good readers, so they always had good insight into where I should focus and where I shouldn't. And of late there has been a community bus: a lot of things are moving to open source, there is a lot of community exchange of conversation, and blogging has picked up a lot. Connecting to those parts also gives you a different dimension to think about.

Ula Ojiaku
You said community bus; some of the listeners or people watching the video might not understand what you mean by that. Are you talking about meetups or communities that come together to discuss shared interests?

Bala Madhusoodhanan
If you are specifically interested in AI, or in Power Platform or a low-code platform, there are a lot of content creators on those topics. You can go to YouTube or LinkedIn and you get a lot of information about what's happening.
They do a lot of hackathons; again, you need to invest time in all these things. If you don't, then you are basically missing the boat. But there are various channels, like hackathons or meetup groups, or it could even be a virtual conversation like you and me: we both have topics we are passionate about, that's why we resonate and we are talking about them. So it's all about taking the initiative and finding time for it, and then you have tons and tons of information available through communities, conferences, or meetup groups.

Ula Ojiaku
Thanks for clarifying. You also said you had a collection of conversations that helped you whenever you were at a crossroads: some new technology emerges or there's a decision you have to make, and you check in with your mentors and peers, almost a personal board of directors, who give you guidance. Now, looking back, would you say there were some turns you took that, knowing what you know now, you would have done differently?

Bala Madhusoodhanan
I would have liked to study more. That is the only thing, because sometimes an educational degree, even without practical knowledge, carries a bigger advantage in certain conversations; otherwise your experience and your content should speak for you, and it takes a bit of effort and time to earn that trust among leaders or peers, for them to say, okay, this person knows what he's talking about, I should probably trust him, rather than trusting someone simply because they have done a PhD. It's about finding the right balance of when I should have invested time in continuing my education. If there were a magic wand, I would have done everything I have done, just offset two years earlier; it would have given me different pathways. But again, it's all constraints. I did the best I could at that point in time with whatever constraints I had, so I don't have any regrets per se.

Ula Ojiaku
So, you are a LinkedIn Top Voice for AI. How would you define AI, artificial intelligence?

Bala Madhusoodhanan
I am a bit reluctant to use the term artificial intelligence. In my mind it is artificial narrow intelligence, which is slightly different. Let me start with a building block, which is machine learning. Machine learning is like a data labeller. You go to a Tesco store, you read the label, and you know it is a can of soup because you have read the label; but your brain is not only processing that image, it understands the surroundings, it does a lot of things when you pick up that can of soup. You can't expect all of that by just feeding one model to a robot. That's why I'm saying AI is a bit over-glorified in my mind. It is artificial narrow intelligence. Automating certain specific tasks, using a data set that is legal and ethical and drives business value, is what I would call machine learning. AI is just an overhyped and heavily used term.

Ula Ojiaku
You said there's a hype around artificial intelligence. What do you mean by that? And where do you see it going?

Bala Madhusoodhanan
Going back to the machine learning definition I gave, it's basically predicting an output based on some input. That's machine learning at its simplest. The word algorithm basically means something like a pattern finder.
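To make the "pattern finder" idea concrete, here is a minimal sketch (an illustrative aside, not part of the conversation; the features, labels, and data are all invented): a model is shown labelled examples and learns a mapping it can apply to new inputs, and nothing more.

```python
# Minimal "pattern finder": learn to predict a label from labelled examples.
# Illustrative only -- features, labels, and data are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [weight_grams, is_cylindrical]; the label names the product type.
X = [[400, 1], [410, 1], [395, 1], [500, 0], [480, 0], [520, 0]]
y = ["soup_can", "soup_can", "soup_can", "cereal_box", "cereal_box", "cereal_box"]

model = DecisionTreeClassifier().fit(X, y)  # the algorithm finds the pattern
print(model.predict([[405, 1]]))            # -> ['soup_can']
```

The model has learned only this one narrow mapping; unlike the shopper in Bala's Tesco example, it knows nothing about the context surrounding the can.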
What you're doing is giving it a lot of data, properly labelled, with proper diversity of information, and there are multiple algorithms that can find patterns. The cleverness, the engineering mind you bring in, is selecting which algorithm to use for your use case. Now you're channelling the whole of machine learning into one use case; that's why I go with the term narrow intelligence. Computers can do brilliant jobs. Ask a computer to solve a Rubik's cube and it will do it very quickly, because the task is simple and it is just doing a lot of calculation. Give a Rubik's cube to a kid and the brain is not trained enough, so it has to learn cognitively. So anything that is pure calculation, pure computing, where the data is labelled properly and you want to predict an outcome, yes, you can use computers. One of the interesting videos that I showed in one of my previous talks, from 2018 or 2019, was of a robot trying to walk across a street. There were vehicles coming across, and the robot just took a knock and fell over. Then a four-year-old kid was asked to cross. The kid knew to press the button at the signal, so it went to the crossing; it knew it could only walk when the light was green, and then it looked around and walked. So you can see the difference: a four-year-old kid has contextual awareness of what is happening, whereas the robot, which is supposed to be called artificial intelligence, couldn't see that. Our human brains have evolved over millions of years; there are billions of neurons, and it is highly optimised. When I sleep, a different set of neurons is running. When I speak to you, my eyes and ears are engaged and my motion-sensing neurons are running, but all of this is highly optimised: the master control knows how much energy should be spent on which neurons. Whereas with all these large language models, there is only one task: you ask it, and it's just going to do that. It doesn't have that intelligence to optimise. When I sleep, maybe 90 percent of my neurons are resting and getting recharged, with only the dream neurons working; whereas once you put a model live, it doesn't matter, all of it runs at full tilt. So yes, it's in a very infant state. Maybe with quantum computing, more power, and better chips things might change, but I don't see that happening in the next five to ten years.

Ula Ojiaku
Now, what do you say about GenAI? Would you also classify generative AI as purely artificial narrow intelligence?

Bala Madhusoodhanan
The thing with generative AI is that you're trying to generalise a lot of use cases. Say ChatGPT: you can throw in a PDF and ask something, or you can say, hey, can you create content for my blog, things like that. Again, all it is doing is using some historical content to come up with a response. What I would say is that humans are really good at creativity. If a problem is thrown at a person, he will find creative ways to solve it. The tool with which we solve it might be a GenAI tool, I don't know, because I don't know the problem; but because GenAI is in a hype cycle, every problem doesn't need GenAI. That's my view.
There was an interesting piece of research done at a university in Montreal. It looked at ten basic tasks, like converting text to text or text to speech, run with generative AI models (multiple models, because a lot of vendors provide different GenAI models) and then with task-specific models, and what it found was that the task-specific models were cheap to run, very scalable, robust, and highly accurate. Whereas with GenAI, when it goes into production or enterprise use and is used by customers or third parties who are not part of your ecosystem, you are putting yourself in a risk category. There could be a risk of copyright issues, a risk of IP issues, a risk of not getting the right consent from someone. I can say, can you create an image of a podcaster named Ula? You never know, because you don't remember that one of your photos on Google or Twitter or somewhere is not set to private, and no one has come and asked you saying, I'm using this image. It's about finding the right balance. So even before picking the technology, I think people should think about what problem they are trying to solve. In my mind, AI, or narrow intelligence, can serve two buckets. The first bucket is about optimising existing processes: there are a lot of things I'm doing; is there a better way to do it, an efficient way? Can I save time, can I save money, things like that? That is the optimisation or efficiency lever. The other could be: I know what to do, I have a lot of data, but I don't have the infrastructure or the people to do it, which is workforce augmentation. Say I have ten graduate-level data entry people whose only job is to review receipts or invoices, say in FCA: someone has to manually look at each one, approve it, and file it. It is a very tedious job. So you augment the whole process with an OCR engine. OCR is Optical Character Recognition, which is, again, a beautiful term for what our eyes do. When we travel somewhere and get an invoice, we know exactly where to look: what is the total amount, what currency did I pay in, have they taken the correct credit card, is my address right? Your brain does all of that unconsciously. The models supplied by different software vendors have been trained to capture those specific entities; you just pass the image in, and the model picks out and maps that information. But as part of your process design, the machine does the heavy lifting of identifying the data points, and then you give the result to someone because you want a human to validate it; a human approves it at the end. So you basically put a human in the loop, bringing human-centric design to a problem-solving situation. That's your efficiency lever. Then you have the innovation lever: I need to do something radical, I have never done this product or service before. That's a space where you can use AI, again for small proofs of concept. One example could be: I'm opening a new store in a new country and I don't know what the store layout should look like. These are my products, this is the store square footage; can you recommend the best layout so that I can sell the most?
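The augmentation pattern described above, machine extraction with a human validating the output, looks roughly like the sketch below. This is a hedged illustration only: `run_ocr` is an invented stand-in rather than any real vendor API, and the field names and confidence threshold are likewise made up for the example.

```python
# Human-in-the-loop document processing (illustrative sketch).
CONFIDENCE_THRESHOLD = 0.90  # arbitrary; tuned per use case in practice

def run_ocr(image_path: str) -> dict:
    """Stand-in for a vendor OCR engine: field -> (value, confidence)."""
    return {"total": ("42.50", 0.97),
            "currency": ("GBP", 0.99),
            "address": ("1 High St", 0.62)}

def send_to_human_reviewer(fields: dict) -> None:
    """Stand-in for a real review queue; a person validates these fields."""
    print(f"Routing to human validation: {fields}")

def process_invoice(image_path: str) -> dict:
    """The machine does the heavy lifting; low-confidence fields go to a human."""
    approved, needs_review = {}, {}
    for name, (value, confidence) in run_ocr(image_path).items():
        target = approved if confidence >= CONFIDENCE_THRESHOLD else needs_review
        target[name] = value
    if needs_review:
        send_to_human_reviewer(needs_review)
    return approved

print(process_invoice("invoice-0001.png"))  # the address falls below the threshold
```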
Now, a visual merchandising team will have some ideas on where things should go, and they might give that as a prompt; the text can be converted into an image. Once you get the base image, then it's humans, it's us. So it will be a starting point, rather than something implementing everything. But can you trust it? I don't know.

Ula Ojiaku
And that's why you said the importance of having a human in the loop.

Bala Madhusoodhanan
Yeah. The human loop, again, is because we humans bring contextual awareness to the situation, which the machine doesn't have. Let me tie this back to NLP. Natural Language Processing has two components: natural language understanding and natural language generation. When you create a machine learning model, all it is doing is understanding the structure of language, what is called form. Say I'm giving it 10,000 PDFs, or you're reading a Harry Potter book. There is a difference between you reading a Harry Potter book and the machine interpreting that Harry Potter book. You have imagination; you carry the context of, oh, in the last chapter we were in a hilly region or in a valley, and words like mist, cold, wood already have you forming images and visualising things. The machine doesn't do that. The machine works on: this is the word, this is a pronoun, this is the noun, this is the structure of the language, so the next one should be this. So, coming back to natural language understanding, that is where context and form come into play. Think of some letters put in front of you: you recognise the A, you recognise the B, you recognise the word, but you don't understand the context. One example: "I'm swimming against the current." Current here is the motion of water. "My current code base is version 01." I'm using the same word, current, but the context is different. So interpreting the structure of language is one thing; in natural language understanding, we try to understand the context. NLG, natural language generation, is about responding in a way that gives you an answer to your query. Combined, that is NLP. It's a big field. There was a piece of research by Professor Emily Bender, one of the leading professors in the NLP space, and the thought experiment was very funny. It was about a parrot on an island talking to someone, with a shark, or some sea creature, in between, which tapped the connection, listened to what the person was saying, and mimicked it. Again, this is the problem with NLP: there is no understanding of the context. It doesn't bring empathy, it doesn't pick up voice modulation. When I'm talking to you, you can judge my emotional cues, you can bring empathy, you can tailor the conversation; if I'm feeling sad, you can put a different spin on it, whereas a robot will just give a standard response. So again, you have to be very careful about the situations in which you use it, whether it is for a small team or going out in public, things like that.

Ula Ojiaku
That's interesting, because sometimes I join the Masters of Scale strategy sessions, and at the last one there was someone whose startup was featured, and apparently what their startup does is build AI solutions that can do sentiment analysis.
And I think some of these are, again, in their early stages, but some of these tools are already available that try to understand the tone of voice and the words people say, match them with their expression, and can actually transcribe virtual meetings and say, okay, this person said this, they looked perplexed, or they looked slightly happy. So what do you think about that? I understand you're saying that machines can't do that, but it seems like there are already organisations trying to push the envelope in that direction.

Bala Madhusoodhanan
In the example you gave, the sentiment of the conversation, again, it is going by the structure and the words being used. "I am feeling good": good maps to positive sentiment. For me the capability is slightly overhyped. The reason is that it might do 20 or 30 percent of what a human might do, but the human is better any day at that particular task. Sentiment analysis typically works on a sentiment data set, which says: these are certain phrases, these are certain types of words that generally indicate positive sentiment, a good sentiment, a feel-good factor. But the model is only as good as the data, and no one is going in and constantly updating that dictionary. No one is thinking about the fact that Gen Z has a different lingo and millennials had a different lingo. So, again, you have to treat it use case by use case, Ula.

Ula Ojiaku
At the end of the day, the way things currently are, machines aren't at the place where they are as good as humans. Humans are still good at doing what humans do, and that's the key thing.

Bala Madhusoodhanan
An interesting use case I read about recently, probably after COVID, was immersive reading for people with dyslexia. AI is used for good as well; I'm not saying it is completely bad. AI is used for good, like teaching kids who are dyslexic. Text-to-speech can read a paragraph aloud so the kid can hear it, and on the screen, I think OneNote has an Immersive Reader, it actually highlights each word as it is spoken. A research study showed that kids in the study group using this immersive reading audio textbook had a better grasp of the context, performed well, and were able to manage dyslexia better. Again, we are using the technology, but kudos to the research team: they identified a real problem, they formulated how the problem could be solved, and they were successful. In cancer research, there is heavy investment in image clustering for brain tumours; there are a lot of use cases where it's used for good. But when you're using it, you need to think about biases and understand the risk. Everything is risk and reward; if the reward outweighs the risk you're taking, then it's acceptable.

Ula Ojiaku
What would you advise leaders of organisations who are considering implementing AI solutions? What are the things they need to consider?

Bala Madhusoodhanan
Okay. Going back to business strategy and growth, which is what enterprises and big organisations have in mind: always have your AI goals aligned to what the business wants. As I said, there are two buckets. One is your efficiency driver, the operational efficiency bucket; the other is your innovation bucket. Have a sense check of where the business wants to invest.
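Bala's description of sentiment analysis as dictionary-driven, and only as good as its data, can be illustrated with a minimal lexicon-based scorer; the tiny lexicon below is invented, and the last line shows precisely his Gen-Z-lingo problem: slang the dictionary has never seen scores as neutral.

```python
# Minimal lexicon-based sentiment scorer (illustrative; the lexicon is invented).
LEXICON = {"good": 1, "great": 2, "happy": 1, "bad": -1, "awful": -2, "sad": -1}

def sentiment(text: str) -> int:
    """Sum word polarities; unknown words contribute nothing."""
    return sum(LEXICON.get(word, 0) for word in text.lower().split())

print(sentiment("I am feeling good"))      #  1 -> positive
print(sentiment("this is awful and sad"))  # -3 -> negative
print(sentiment("that was fire no cap"))   #  0 -> unseen slang reads as neutral
```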
Just because AI is there doesn't mean you have to use it, right? Look for opportunities where you can drive more value. That would be my first line of thought. The second is educating leaders on AI literacy: what the models are, what they do, what the pitfalls are, the ethical awareness around the use of AI; data privacy is big. That education can stay high level, with some examples from the same business domain of where it has been successful, where it has been less successful, and what challenges were faced. That's something I would urge everyone to invest time in. I think I also mentioned security. Over the years, the practice has been that security is always left until last. I was fortunate enough to work in organisations where a security-first mindset was in place, because once you have a proof of value and show it to people, people get excited, and it's about messaging it and making sure it is well secured, protecting the end users. So the third one is having security-first design policies and principles. Machine learning or AI is no good if your data quality is not there, so having a data strategy is something I would definitely recommend. And start small. Just like agile: you take a value, you start small, you test whether your hypothesis was correct, you monitor how you performed, and then you think about scale. Getting to "hello world" doesn't mean you have mastered it. So have that mindset: start small, monitor, get constant feedback, and then think about scaling.

Ula Ojiaku
What are the key things about ethics and AI that you think leaders should be aware of at this point in time?

Bala Madhusoodhanan
Ethics is very subjective. It's about having different stakeholders give their honest opinion on whether your solution is the right thing to do against the values of the enterprise. It's not your view or my view, it's a consensus view, and for certain things where people are involved you might need HR, legal, or the brand reputation team to come and assist, because you may not understand the why behind certain policies being put in place. So the first sense check is: is the solution, is the AI, ethical against the core values of the enterprise? If you pass that sense check, then come a lot of other threats. Did the model I'm using have fair representation across its data set? There's a classic case study of a big cloud computing giant using an AI algorithm to filter résumés; they had to stop it immediately because the data set was dominated by Ivy League, white, male candidates, so it didn't have the right representation. If over ten years I have been hiring only a certain type of person, my data is inherently biased, no matter how good my algorithm is. The other example is Clearview AI, which got into trouble for using very biased data to inform decision-making related to immigration, which has bigger ramifications. Then you talk about fairness, whether the AI system is fair in the outputs it gives. There was a funny story about a man and a woman in California living together; I think the woman wasn't given a credit card even though the postcode is the same and both of them work at the same company, and I think it has to do with Apple Pay.
Apple wanted to bring in a credit card, the Apple Card or whatever it is, and it was so unfair that the woman, who was equally qualified, was not given the right credit limit, and the bank simply said the algorithm decided so. Then you have the privacy concern. All these generic models that are available, even ChatGPT for that matter: you can chat with ChatGPT and talk about someone like Trevor Noah and say, hey, can you create a joke? It has been trained on the jokes he has done, which may be publicly available, but has the creator of the model got consent, saying, hey Trevor, I'm going to use your content so that I can give better answers? And how much such consent is there? Even Wikipedia: about 80 percent of the information is public, but it is not diversified. What I mean by that is, you can find a lot of information if the person is from America, the UK, or Europe, maybe from India to some extent, but what is the quality of the data for countries in Africa, or for South America? It does not represent the total diversity of data, and these large language models have been trained on exactly that data. So there is a bias, and because of that bias your outcome might not be fair. Those two are the main things, and of course the privacy concern: if someone comes and says, hey, you used my data and you didn't even ask me, then you're into a lawsuit, because you didn't get proper consent. It's a fast-moving world, and people, including me, don't read every term and condition; we just scroll down, tick, confirm. But those are the things where education should come into play. Think about it, because people don't understand what could go wrong, not for them, but for someone like them. Then there is the big fear of job displacement: if I put this AI system in, what will I do with my workforce? Say I have ten people; you need to reimagine your workplace. These are the ten jobs my ten people are doing; if I augment six of those jobs, how can I use my ten people effectively to do something different? That piece of the puzzle always goes back to the core values of the company and what it thinks about its people, and it needs input from multiple stakeholders.

Ula Ojiaku
It ties back to the enterprise strategy and its values, but as technology has evolved over the years, some things are made obsolete while new opportunities are created. Moving from when people travelled with horses and buggies to when the automobile came up: yes, there wasn't as much demand for horseshoes, horses, and buggies, but there was a new industry of mechanics, garages, and things like that. So I think it's really about that, going back to what you're saying: how can you redeploy people? That might involve training, reskilling, and investing in the education of the workforce so that they can harness AI and do those creative things you've emphasised throughout this conversation about human beings: that creative aspect, that ability to understand context and nuance and apply it to the situation.

Bala Madhusoodhanan
I was fortunate to work with ForHumanity, an NGO which is essentially trying to certify people to audit AI systems.
The EU AI Act is now in place and will be enforced soon, so you need people to put controls around all these AI systems; it's done to protect people and to protect the enterprise. I was fortunate enough to be part of that community, and I'm still working closely with the Operational Research Society. Again, you have to be passionate enough to find time for it, and if you do, the universe will find a way to give you something interesting to work on. Through The OR Society, The Alan Turing Institute, and the ForHumanity community, I attended a few ICO workshops, which were quite interesting, because when you hear perspectives from people from different facets of life, like lawyers and solicitors, you think, ah, I wouldn't have interpreted this statement that way. It was a good learning experience, and if I have time I will continue to invest it in ethical AI. And it's not only AI, it's the ethical use of technology; sustainability is also part of the ethics bucket if you look into it. There was an interesting paper on how many data centres were opened between 2018 and 2024, about six years, and the power consumption has gone from X to two or three times X, so we have opened a lot. We have already caused damage to the environment with all this technology, and just because the technology is there doesn't mean you have to use it. But again, it's that education bit: what is the right thing to do? And even ESG awareness; people are not aware. If you look at the current TikTok trendsetters, they know to look for a certified B Corp when they are buying something. The reason is that they know, and they're passionate about saving the world. Maybe we are not, I don't know, but once you start educating and telling those stories, humans are really good; you will see a change of heart.

Ula Ojiaku
What I'm hearing you say is that education is key to helping us make informed choices. There is a time and place where you would need to use AI, but not everything requires it. If we're more thoughtful in how we approach these, because they are tools at the end of the day, then we can at least try to balance the opportunities against the risks, and against the impact these decisions and tools have on the environment. Now, what books have you found yourself recommending most to people, and why?

Bala Madhusoodhanan
Because we have been talking about AI: AI Superpowers, written by Kai-Fu Lee, is one. There is the book by Brian Christian, The Alignment Problem: Machine Learning and Human Values; it talks about what human values are, where you want to use machine learning, and how you come to a decision. That's a really interesting read. Then there is a book called Ethical Machines by Reid Blackman, which covers the ethical facets of AI, like bias, fairness, data privacy, transparency, and explainability, with quite detailed examples and walkthroughs of what those mean. Another interesting book was Wanted: Human-AI Translators: Artificial Intelligence Demystified by a Dutch professor; again, a really lovely narration of what algorithms are, what AI is, and what controls you should think about.
Harvard professor Karim Lakhani wrote Competing in the Age of AI; that's a good book. The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You by Mike Walsh is another good one, which I finished a couple of months back.

Ula Ojiaku
And if the audience wants to find you, how can they reach out to you?

Bala Madhusoodhanan
They can always reach me on LinkedIn; I'd be happy to touch base there.

Ula Ojiaku
Awesome. And do you have any final words, or asks of the audience?

Bala Madhusoodhanan
The final word is, again, the responsible use of technology. Think about not just the use case, but the environmental impact; think about future generations, because I think some damage is already done. Maybe not in this lifetime, but three or four lifetimes down the line, it might not be the beautiful Earth that we have.

Ula Ojiaku
It's been a pleasure, as always, speaking with you, Bala. Thank you so much for sharing your insights and wisdom, and thank you for being a guest on the Agile Innovation Leaders podcast.

Bala Madhusoodhanan
Thank you, lovely conversation. Looking forward to connecting with more like-minded LinkedIn colleagues.

Ula Ojiaku
That's all we have for now. Thanks for listening. If you liked this show, do subscribe at www.agileinnovationleaders.com or on your favourite podcast provider. Also share with friends and do leave a review on iTunes; this will help others find the show. I'd also love to hear from you, so please drop me an email at ula@agileinnovationleaders.com. Take care and God bless!
China has become a power in advanced technology, no longer merely trailing the United States. American corporations count on the strength of innovation and money to develop artificial intelligence; DeepSeek is proving that this is not the only road to the finish line. The prospect of Beijing swamping the world with low-cost goods, including new technology, puts Washington on edge. Washington had just announced the $500 billion "Stargate" plan, a colossal investment project to develop artificial intelligence technology, when the blue whale logo of China's DeepSeek sent tremors from Silicon Valley in California all the way to the Wall Street trading floor in New York. The reason: a Chinese version of ChatGPT said to be smarter, leaner, and more efficient than the American tools.

Beijing challenges Silicon Valley

The big names of Silicon Valley have had to admit that with the DeepSeek app, Beijing has landed a formidable blow in the AI battle. Many questions still surround this Chinese "intelligent assistant," a free search and support tool, but according to Daniel Andler, a philosopher and American researcher in mathematics and science, history has shown that each time a production tool or a service ceases to be a luxury and becomes a mass-market commodity, an upheaval follows. In this case, DeepSeek has brought several upheavals at once, starting with a great leap forward for China in both technology and price. The Hangzhou startup, founded only two years ago by the young entrepreneur Liang Wenfeng, has unveiled a tool that is as effective as, indeed smarter than, its competitors in the United States. DeepSeek was also developed on cruder hardware than what American engineers have had at their disposal to "nurture" artificial intelligence. To get this new product, according to DeepSeek's own official figures, the app needed less than $6 million to develop, whereas OpenAI's ChatGPT, run by billionaire Sam Altman, required an investment of more than $100 million.

Cheap Chinese high tech will flood the world

With DeepSeek's "smart" app, China is in a position to become the world's workshop even in the most advanced technology sectors. Guillaume Graillet, the technology correspondent for the French weekly Le Point, speaks of Beijing's "revenge": "This is a revenge that Kai-Fu Lee, a leading entrepreneur in China's high-tech sector, predicted in a book he published quite some time ago. On the AI battle between the United States and China, Lee foresaw that one day China would come out on top, thanks to the open-source model. Beyond that, DeepSeek's success is also a failure of the policies of containing China's technological development that the Biden administration pursued, and which President Trump will certainly push even harder. The third point here is that China, in principle, developed its smart app on chips that are not of the latest generation, as much as two generations behind its American rivals, and yet it still delivers very good products. This reminds me of the case of Huawei.
The United States also tried every means to keep this telecoms equipment supplier from growing, yet it still operates very well in Africa and many other parts of the world."

Has the American companies' strategy stopped working?

Gilles Babinet, co-president of France's National Digital Council, stresses that while the United States relies on massive data-processing centers, most emblematically the White House's announcement of $500 billion to build giant data centers, China has focused instead on a lean, light model that consumes far less energy.

"China is the new factor upending the development strategy of the big high-tech corporations in the United States. America relies on the power of innovation and of money. The high-tech bosses in the US set out to invest vast sums of money and use that as a weapon to wipe out all competitors. Their plan is to open gigantic data centers so that nobody else has the capital or the capacity to open bigger ones. The United States is currently in this data-center-building phase. But it has to be said that these investments have yet to return a profit. And while the United States was pouring untold sums into opening these huge, powerful data 'factories', DeepSeek made everyone jump: this startup does not need data centers as enormous as America's, yet it can still provide assistance services as effective as those of its US rivals. This shows that money and the ability to raise vast amounts of capital are not the only deciding factors for development."

The American factor in China's success

Denied access to state-of-the-art equipment like the Americans, Chinese companies have found many workarounds. Again, Gilles Babinet of France's National Digital Council argues that, unintentionally, America's sanctions policy has given a boost to Chinese companies like DeepSeek:

"The US applied a policy of crushing others by sheer weight, putting enormous amounts of money on the table to intimidate everyone. Nobody has the means to invest like America, so everyone else can only follow behind and depend on American firms. Sam Altman, the owner of OpenAI, has been announcing new products for a year and a half, yet a generation-5 ChatGPT has still not appeared on the market. Meanwhile, the Chinese startup is delivering effective intelligent applications. This is a scientific success, because China used a different method to develop AI. They were also forced to find a different path. The reason is that, under the sanctions Washington imposed, Chinese firms cannot access the most advanced chips. Nvidia itself, the American chip supplier, has admitted that banning exports to China is futile, because China is capable of developing effective applications from chips that are not of the top-tier class.
DeepSeek proves that even while restricted to chips two generations behind its rivals in the United States, China's artificial intelligence remains formidable."

AI and the geopolitical dimension

Is Chinese artificial intelligence reshuffling the world's geopolitical chessboard, when even in a spearhead technology like AI the closest challenger to the United States is not Europe, Japan, or South Korea, but a little-known Chinese startup? Charles Thibout, a researcher on the geopolitics of the digital industry and an associate fellow at the French Institute for International and Strategic Affairs (IRIS), believes it is too early to conclude that China is changing the course of the AI battle:

"The emergence of a Chinese startup in the field of artificial intelligence is encouraging in terms of creativity and engineering. However, I would not venture to say that DeepSeek changes the board in the AI match between the United States and China. With this intelligent tool for assisting users, China has made a 'great leap forward' to claim a place on a par with America in a still very young field. But as a next step, we have to see how this tool is used and for what purposes... Everything is only beginning; we cannot yet know how useful DeepSeek will be."

The end of preconceptions about the Party's role and innovation?

The same IRIS expert, Charles Thibout, cautiously notes that China's official communiqués about the outstanding achievements of the DeepSeek app also contain an element of propaganda, meant to prove that nothing can contain Beijing's determination to become a technological power of the digital age. That does not change the fact that DeepSeek's rise is evidence that the Chinese Communist Party's censorship has not entirely smothered the creativity and capacity for innovation of the country's high-tech community. On the American side, the Pentagon has long been closely monitoring China's high-tech development, ever since the Asian nation set itself the goal of mastering artificial intelligence:

"The United States has long followed China's technological development closely, especially since Beijing announced its AI development plan. In 2014 the Pentagon adopted its own technology development strategy to avoid being overtaken by China militarily and in many other fields. In 2017 Beijing launched a large-scale plan to develop artificial intelligence technology. Over the 2017-2025 period China has been investing $70 billion a year in AI, and from that point AI became a geopolitical issue. Also in 2017, in Moscow, Vladimir Putin explained that power in the world would belong to whichever nation mastered AI technology. Since then, everyone has been paying attention to AI."

China and the US divide up the world market

In other words, no one is surprised by China's rise in artificial intelligence in particular, or in high tech in general, given that for more than a decade Beijing has had a full-fledged strategy for developing advanced technology. In AI alone, mainland China has more than 4,500 companies focused solely on developing intelligent assistance tools, while France currently has only 600 to 800 startups venturing into this still-new field. Another striking point: while industry insiders watch the advances and investment plans of China's big corporations, such as Baidu, the breakthroughs are coming from small companies that the media rarely mention.
It is precisely because they are less famous that businesses like DeepSeek find it easier to maneuver and slip through the net of American sanctions. Finally, as Rebecca Arcesati of Merics, the German research center specializing in China, observes, the DeepSeek phenomenon has shaken the political establishment in Washington, because Beijing is closing the gap with the United States even in fields where America has until now considered itself the only player in the market. That mood will surely add fuel for American companies, and above all the Pentagon, to accelerate their "intelligent technology" programs so as to keep dominating the globe.
Kai-Fu Lee joins me to discuss AI in 2025. Kai-Fu is a storied AI researcher, investor, inventor, and entrepreneur based in Taiwan. As one of the leading AI experts based in Asia, I wanted to get his take on this particular market.

Key insights:

- Kai-Fu noted that unlike the singular "ChatGPT moment" that stunned Western audiences, the Chinese market encountered generative AI in a more "incremental and distributed" fashion.
- A particularly fascinating shift is how Chinese enterprises are adopting generative AI. Without the entrenched SaaS layers common in the US, Chinese companies are "rolling their own" solutions. This deep integration might be tougher and messier, but it encourages thorough, domain-specific implementations.
- We reflected on a structural shift in how we think about productivity software. With AI "conceptualizing" the document and the user providing strategic nudges, it's akin to reversing the traditional creative process.
- We're moving from a training-centric world to an inference-centric one. Models need to be cheaper, faster, and less resource-intensive to run, not just to train. For instance, his team at 01.AI managed to train a top-tier model on "just" 2,000 H100 GPUs and bring inference costs down to 10 cents per million tokens—a fraction of GPT-4's early costs.
- In 2025, Kai-Fu predicts, we'll see fewer "demos" and more "AI-first" applications deploying text, image, and video generation tools into real-world workflows.

Connect with us:
Exponential View
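For a sense of scale on those inference numbers, here is a back-of-the-envelope comparison. The $0.10 per million tokens figure is the one quoted above; the early GPT-4 price of roughly $30 per million input tokens is an assumption made for this illustration, as is the daily token volume.

```python
# Back-of-the-envelope inference cost comparison (illustrative assumptions).
gpt4_early_usd_per_m = 30.00  # assumed early GPT-4 list price per 1M input tokens
cheap_usd_per_m = 0.10        # figure quoted in the episode

print(f"Ratio: {gpt4_early_usd_per_m / cheap_usd_per_m:.0f}x cheaper")  # -> 300x

tokens_per_day = 1_000_000_000  # a hypothetical 1B-token/day workload
for name, price in [("GPT-4 (early)", gpt4_early_usd_per_m), ("10c model", cheap_usd_per_m)]:
    print(f"{name}: ${tokens_per_day / 1_000_000 * price:,.2f}/day")
# -> $30,000.00/day vs. $100.00/day at that volume
```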
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all our LS supporters who helped fund the venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage, interviewing selected papers (as we have since also done for ICLR and ICML); however, we felt we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

Since Nathan Lambert (Interconnects) joined us for the hit RLHF 201 episode at the start of this year, it is hard to overstate how much open models have exploded this past year. In 2023 only five names were playing in the top LLM ranks: Mistral, Mosaic's MPT, TII UAE's Falcon, Yi from Kai-Fu Lee's 01.ai, and of course Meta's Llama 1 and 2. This year a whole cast of new open models has burst onto the scene, from Google's Gemma and Cohere's Command R, to Alibaba's Qwen and DeepSeek models, to LLM360 and DCLM, and of course to the Allen Institute's OLMo, OLMoE, Pixmo, Molmo, and OLMo 2 models. We were honored to host Luca Soldaini, one of the research leads on the OLMo series of models at AI2.

Pursuing open model research comes with a lot of challenges beyond just funding and access to GPUs and datasets, particularly the regulatory debates this year across Europe, California, and the White House. We were also honored to hear from Sophia Yang, head of devrel at Mistral, who also presented a great session at the AI Engineer World's Fair Open Models track!

Full Talk on YouTube. Please like and subscribe!

Timestamps
* 00:00 Welcome to Latent Space Live
* 00:12 Recap of 2024: Best Moments and Keynotes
* 01:22 Explosive Growth of Open Models in 2024
* 02:04 Challenges in Open Model Research
* 02:38 Keynote by Luca Soldaini: State of Open Models
* 07:23 Significance of Open Source AI Licenses
* 11:31 Research Constraints and Compute Challenges
* 13:46 Fully Open Models: A New Trend
* 27:46 Mistral's Journey and Innovations
* 32:57 Interactive Demo: Le Chat Capabilities
* 36:50 Closing Remarks and Networking

Transcript

[00:00:00] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. As a special treat this week, we're recapping the best of 2024, going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space network to cover each field.

[00:00:28] AI Charlie: 200 of you joined us in person throughout the day, with over 2,200 watching live online. Our next keynote covers the state of open models in 2024, with Luca Soldaini and Nathan Lambert of the Allen Institute for AI, with a special appearance from Dr. Sophia Yang of Mistral. Our first hit episode of 2024 was with Nathan Lambert on RLHF 201 back in January.

[00:00:57] AI Charlie: There he discussed both reinforcement learning for language [00:01:00] models and the growing post-training and mid-training stack, with hot takes on everything from constitutional AI to DPO to rejection sampling, and also previewed the sea change coming to the Allen Institute.
And to Interconnects, his incredible Substack on the technical aspects of state-of-the-art AI training.[00:01:18] AI Charlie: We highly recommend subscribing to get access to his Discord as well. It is hard to overstate how much open models have exploded this past year. In 2023, only five names were playing in the top LLM ranks: Mistral, Mosaic's MPT, TII UAE's Falcon, Yi from Kai-Fu Lee's 01.ai, and of course, Meta's Llama 1 and 2.[00:01:43] AI Charlie: This year, a whole cast of new open models have burst on the scene, from Google's Gemma and Cohere's Command R, to Alibaba's Qwen and DeepSeek models, to LLM360 and DCLM, and of course, to the Allen Institute's OLMo, OLMoE, Pixmo, Molmo, and OLMo 2 models. Pursuing open model research comes with a lot of challenges beyond just funding and access to GPUs and datasets, particularly the regulatory debates this year across Europe,[00:02:14] California and the White House. We also were honored to hear from Mistral, who also presented a great session at the AI Engineer World's Fair Open Models track. As always, don't forget to check the show notes for the YouTube link to their talk, as well as their slides. Watch out and take care.[00:02:35] Luca Intro[00:02:35] Luca Soldaini: Cool. Yeah, thanks for having me over. I'm Luca. I'm a research scientist at the Allen Institute for AI. I threw together a few slides as a recap of interesting themes in open models for 2024. I have about 20 to 25 minutes of slides, and then we can chat if there are any questions.[00:02:57] Luca Soldaini: If I can advance to the next slide. Okay, cool. So I did a quick check to get a sense of how much 2024 was different from 2023. I went on Hugging Face and tried to get a picture of what kind of models were released in 2023 and what we got in 2024.[00:03:16] Luca Soldaini: In 2023 we got things like both Llama 1 and 2, we got Mistral, we got MPT, Falcon models, and I think the Yi model came in at the tail end of the year. It was a pretty good year. But then I did the same for 2024, and it's actually quite a stark difference. You have models that are rivaling the frontier-level[00:03:38] Luca Soldaini: performance of what you can get from closed models, from Qwen, from DeepSeek. We got Llama 3. We got all sorts of different models. I added our own OLMo at the bottom. There's this growing group of fully open models that I'm going to touch on a little bit later. But just looking at the slides, it feels like 2024 was just smooth sailing, much better than the previous year.[00:04:04] Luca Soldaini: And you can pick your favorite benchmark, or least favorite, I don't know, depending on what point you're trying to make, and plot your closed model and your open model, and spin it in ways that show that open models are much closer to where closed models are today, versus last year, where the gap was fairly significant.[00:04:29] Luca Soldaini: So one thing that I don't know if I have to convince people in this room of, but usually when I give these talks about open models, there is always this background question in people's minds: why should we use open models? The API argument, you know: an API is
just an HTTP request to get output from one of the best models out there.[00:04:53] Luca Soldaini: Why do I have to set up infra and use local models? And there are really two answers. There is the more researchy answer, which is where my background lies: research. If you want to do research on language models, research thrives on open models. There is a large swath of research on modeling, on how these models behave, on evaluation and inference, on mechanistic interpretability that could not happen at all if you didn't have open models. But for AI builders, there are also[00:05:30] Luca Soldaini: good use cases for using local models. This is a very non-comprehensive slide, but there are some applications where local models just blow closed models out of the water; retrieval is a very clear example. We might have constraints, like edge AI applications, where it makes sense.[00:05:51] Luca Soldaini: But even just in terms of stability, being able to say this model is not changing under the hood: there are plenty of good cases for open models. And the community is not just models. I stole this slide from one of the Qwen2 announcement blog posts, but it's super cool to see how much tech exists around open models, around serving them, making them efficient, and hosting them.[00:06:18] Luca Soldaini: It's pretty cool. And so, if you think about where the term "open" comes from, it comes from open source: open models really meet the core tenets of open source, specifically when it comes to collaboration. There is truly a spirit where, through these open models, you can build on top of other people's[00:06:41] Luca Soldaini: innovation. We see a lot of this even in our own work: as we iterate on the various versions of OLMo, it's not like every time we collect all the data from scratch. No, the first step is, okay, what are the cool data sources and datasets people have put together for language model training?[00:07:01] Luca Soldaini: Or when it comes to our post-training pipeline, one of the steps is you want to do some DPO, and you use a lot of outputs of other models to improve your preference model. So having an open ecosystem really benefits and accelerates the development of open models.[00:07:23] The Definition of Open Models[00:07:23] Luca Soldaini: One thing that we got in 2024, which is not a specific model but I thought was really significant, is our first open source AI definition. This is from the Open Source Initiative; they've generally been the steward of a lot of the open source licenses when it comes to software, and so they embarked on this journey of trying to figure out: what does an open source license for a model look like?[00:07:52] Luca Soldaini: The majority of the work is very dry, because licenses are dry, so I'm not going to walk through the license step by step, but I'm just going to pick out one aspect that is very good and then one aspect that personally feels like it needs improvement. On the good side, this open source AI definition[00:08:13] Luca Soldaini: is actually very intuitive.
If you ever built open source software and you have some expectation of what open source looks like for software, for AI this definition sort of matches your intuition. So: the weights need to be freely available, the code must be released with an open source license, and there shouldn't be license clauses that block specific use cases.[00:08:39] Luca Soldaini: So under this definition, for example, Llama or some of the Qwen models are not open source, because the license says you can't use the model for certain things, or it says if you use this model you have to name the output this way, or a derivative needs to be named that way. Those clauses don't meet the open source[00:09:00] definition, and so they will not be covered. The Llama license will not be covered under the open source definition. It's not perfect. One of the things that, internally, in discussion with OSI, we were sort of disappointed by is the language around data. You might imagine that an open source AI model means a model where the data is freely available.[00:09:26] Luca Soldaini: There were discussions around that, but at the end of the day, they decided to go with a softened stance, where they say a model is open source if you provide sufficiently detailed information on how to replicate the data pipeline, so that you have an equivalent system.[00:09:46] Luca Soldaini: "Sufficiently detailed" is very, very fuzzy; I don't like that. "An equivalent system" is also very fuzzy. And this doesn't take into account the accessibility of the process, right? It might be that you provide enough information, but this process costs, I don't know, $10 million to do. Now, the open source definition,[00:10:00] like any open source license, has never been about accessibility; that's never a factor in open source software, how accessible the software is.[00:10:14] Luca Soldaini: I can make a piece of open source software, put it on my hard drive, and never access it. That software is still open source; the fact that it's not widely distributed doesn't change the license. But practically, there are expectations of what we want good open source to be. So it's kind of sad to see that the data component in this license is not as open as some of us would like it to be.[00:10:40] Challenges for Open Models[00:10:40] Luca Soldaini: And I linked a blog post that Nathan wrote on the topic that is less rambly and easier to follow. One thing that, in general, I think is fair to say about the state of open models in 2024 is that we know a lot more than what we knew in 2023, both on the pre-training data you curate and on how to do all the post-training, especially on the RL side.[00:11:10] Luca Soldaini: You know, 2023 was a lot of throwing random darts at the board. In 2024, we have clear recipes that, okay, don't get the same results as a closed lab, because there is a cost in actually matching what they do, but at least we have a good sense of the path to get a state-of-the-art language model.[00:11:31] Luca Soldaini: I think one downside of 2024 is that we are more research-constrained than in 2023. It feels that the barrier of compute that you need to move innovation along is just rising and rising.
So if you go back to this slide, there is now this cluster of models that are released by the[00:11:57] Luca Soldaini: compute-rich club. Membership is[00:12:00] hotly debated. You know, some people don't want to be called rich because it comes with expectations; some people want to be called rich; I don't know, there's debate. But these are players that have, you know, 10,000 to 50,000 GPUs at minimum, and so they can do a lot of work and a lot of exploration in improving models that is not very accessible.[00:12:21] Luca Soldaini: To give you a sense of how I personally think about the research budget for each part of the language model pipeline: on the pre-training side, you can maybe do something with a thousand GPUs; really, you want 10,000. And if you want the real state of the art, you know, your DeepSeek minimum is like 50,000, and you can scale to infinity.[00:12:44] Luca Soldaini: The more you have, the better it gets. Everyone on that side still complains that they don't have enough GPUs. Post-training is a super wide spectrum: you can do as little as eight GPUs; as long as you're able to[00:13:00] run, you know, a good version of, say, a Llama model, you can do a lot of work there.[00:13:05] Luca Soldaini: A lot of the methodology just scales with compute, right? If you're interested in, you know, an open replication of what OpenAI's o1 is, you're going to be on the 10K end of the GPU spectrum. Inference, you can do a lot with very few resources. Evaluation, you can do a lot with, well, I should say, at least one GPU if you want to evaluate[00:13:30] Luca Soldaini: open models. But in general, if you care a lot about interventions to do on these models, which is my preferred area of research, then the resources that you need are quite significant. Yeah. One other trend that has emerged in 2024 is this cluster of fully open models,[00:13:54] Luca Soldaini: OLMo, the model that we built at AI2, being one of them. And, you know, it's nice[00:14:00] that it's not just us; there's a cluster of other, mostly research, efforts who are working on this. And so it's good to give you a primer of what fully open means. The easy way to think about it is: instead of just releasing a model checkpoint that you run, you release a full recipe, so that other people[00:14:24] Luca Soldaini: working in that space can pick and choose whatever they want from your recipe and create their own model, or improve on top of your model. You're giving out the full pipeline and all the details there, instead of just the end output. So I pulled up the screenshot from our recent MoE model.[00:14:43] Luca Soldaini: And for this model, for example, we released the model itself, the data it was trained on, the code for both training and inference, all the logs that we got through the training run, as well as every intermediate checkpoint. And the fact that you release different parts of the pipeline[00:15:00] allows others to do really cool things.[00:15:02] Luca Soldaini: So for example, this tweet from early this year from folks at Nous Research: they used our pre-training data to do a replication of the BitNet paper in the open. So they took just the initial part of the pipeline and then built their own thing on top of it.
It goes both ways.[00:15:21] Luca Soldaini: So for example, for the OLMo 2 model, a lot of our pre-training data for the first stage of pre-training was from this DCLM initiative, which was led by folks at a variety of institutions. It was a really nice group effort. And, you know, it was nice to be able to say, okay, the state of the art in terms of what is done in the open has improved.[00:15:46] AI2 Models - Olmo, Molmo, Pixmo etc[00:15:46] Luca Soldaini: We don't have to do all this work from scratch to catch up to the state of the art. We can just take it directly, integrate it, and do our own improvements on top of that. I'm going to spend a few minutes doing a[00:16:00] shameless plug for some of our fully open recipes, so indulge me in this.[00:16:05] Luca Soldaini: A few things that we released this year: as I was mentioning, there's the OLMoE model, which I think is still the state-of-the-art MoE model in its size class, and it's also fully open, so every component of this model is available. We released a multimodal model called Molmo. Molmo is not just a model, it's a full recipe of how you go from a text-only model to a multimodal model, and we applied this recipe on top of Qwen checkpoints, on top of OLMo checkpoints, as well as on top of OLMoE.[00:16:37] Luca Soldaini: And I think there's been a replication doing that on top of Mistral as well. On the post-training side, we recently released Tülu 3. Same story: this is a recipe for how you go from a base model to a state-of-the-art post-trained model. We used the Tülu recipe on top of OLMo, on top of Llama, and then there's been an open replication effort[00:17:00] to do that on top of Qwen as well.[00:17:02] Luca Soldaini: It's really nice to see, you know, when your recipe is kind of turnkey: you can apply it to different models and it kind of just works. And finally, the last thing we released this year was OLMo 2, which so far is the best state-of-the-art fully open language model. It combines aspects from all three of these previous projects: what we learned on the data side from OLMoE, and what we learned about making models that are easy to adapt from the Molmo project and the Tülu project. I will close with a little bit of reflection on the ways this ecosystem of open models is not all roses. It's not all happy. It feels like, day to day, it's always in peril.[00:17:44] Luca Soldaini: And, you know, I talked a little bit about the compute issues that come with it, but it's really not just compute. One thing that is on top of my mind is that, due to the environment and, you know, growing feelings about how AI is treated,[00:18:00] it's actually harder to get access to a lot of the data that was used to train a lot of the models up to last year.[00:18:06] Luca Soldaini: So this is a screenshot from really fabulous work from Shane Longpre, who I think is in Europe, about diminishing access to data for language model pre-training. What they did is they went through every snapshot of Common Crawl. Common Crawl is this publicly available scrape of a subset of the internet.[00:18:29] Luca Soldaini: And they looked at, for any given website, whether a website that was accessible in, say, 2017 was still accessible or not in 2024.
And what they found is that, as a reaction to the existence of closed models like OpenAI's ChatGPT or Claude, a lot of content owners have blanket-blocked any type of crawling of their websites.[00:18:57] Luca Soldaini: And this is something that we see also internally at[00:19:00] AI2. One project that we started this year is that we wanted to understand: if you're a good citizen of the internet, and you crawl following the norms and policies that have been established in the last 25 years, what can you crawl?[00:19:17] Luca Soldaini: And we found that there are a lot of websites where the norms of how you express your preference about whether your data may be crawled or not are broken. A lot of people block a lot of crawling, but do not advertise that in robots.txt. You can only tell that they're blocking you from crawling when you try doing it. Sometimes you can't even crawl the robots.txt to check whether you're allowed or not.[00:19:37] Luca Soldaini: And then, for a lot of websites, there are all these technologies that have historically existed to make serving websites easier, such as Cloudflare or DNS. They're now being repurposed for blocking AI, or any type of crawling,[00:20:00] in a way that is very opaque to the content owners themselves.[00:20:04] Luca Soldaini: So, you know, you go to these websites, you try to access them, and they're not available, and you get the feeling, oh, something changed on the DNS side that is blocking this, and likely the content owner has no idea. They're just using Cloudflare for better, you know, load balancing, and this is something that was sprung on them with very little notice. And I think the problem is that this blocking really impacts people in different ways. It disproportionately helps companies that have a head start, which are usually the closed labs, and it hurts incoming newcomer players, who either have to do things in a sketchy way now, or are never going to get the content that the closed labs might have.[00:20:54] Luca Soldaini: There was a lot of coverage of this. I'm going to plug Nathan's blog post again;[00:21:00] I think the title of this one is very succinct: before thinking about running out of training data, we're actually running out of open training data. And so, if we want better open models, this should be on top of our minds.
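As a concrete illustration of the "good citizen" crawling norms Luca describes, here is a minimal sketch, using only Python's standard library, of checking a site's robots.txt before fetching a page. The bot name is hypothetical, and the conservative fallback on error reflects his point that sometimes you cannot even fetch robots.txt.

# A minimal sketch (an editorial illustration, not AI2's crawler) of the
# "good citizen" check described above: consult a site's robots.txt
# before fetching a page. Standard library only; "MyResearchBot" is a
# hypothetical user agent.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def may_crawl(page_url: str, user_agent: str = "MyResearchBot") -> bool:
    parts = urlsplit(page_url)
    # By convention, robots.txt lives at the root of the host.
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    rp = RobotFileParser()
    rp.set_url(robots_url)
    try:
        rp.read()  # fetch and parse robots.txt
    except OSError:
        # Sometimes you cannot even fetch robots.txt; a cautious
        # crawler treats that as "do not crawl".
        return False
    return rp.can_fetch(user_agent, page_url)

if __name__ == "__main__":
    print(may_crawl("https://example.com/some/page.html"))

Note that, as the talk points out, a permissive robots.txt is no guarantee: some sites block crawlers only at the network layer, which a check like this cannot see.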
[00:21:13] Regulation and Lobbying[00:21:13] Luca Soldaini: The other thing that has emerged is that there are strong lobbying efforts to define any kind of AI as new and extremely risky. And I want to be precise here: the problem is not considering the risks of this technology; every technology has risks that should always be considered.[00:21:37] Luca Soldaini: The thing that, to me, is disingenuous is just putting this AI on a pedestal and calling it an unknown alien technology with new and undiscovered potential to destroy humanity, when in reality all the dangers, I think, are rooted in[00:22:00] dangers that we know from the existing software industry, or existing issues that come up when using software in a lot of sensitive domains, like medical areas.[00:22:13] Luca Soldaini: And I also noticed a lot of efforts that have actually been going on to try to make these open models safe. I pasted one here from AI2, but there's actually a lot of work that has been going on on, okay, if you're distributing this model[00:22:31] Luca Soldaini: openly, how do you make it safe? What's the right balance between the accessibility of open models and safety? And then there's also an annoying brushing under the rug of concerns that are later proven to be unfounded. You know, the beginning of this year was all about the bio-risk of these open models.[00:22:48] Luca Soldaini: The whole thing fizzled because, finally, there's been rigorous research, not just this paper from the Cohere folks, but rigorous research showing[00:23:00] that this is really not a concern that we should be worried about. Again, there is a lot of dangerous use of AI applications, but this one was just a lobbying ploy to make things sound scarier than they actually are.[00:23:15] Luca Soldaini: I have to preface this part: this is my personal opinion, not my employer's. But I look at things like SB 1047 from California, and I think we kind of dodged a bullet on this legislation. The open source community, a lot of the community, came together at sort of the last minute and made a very good effort to explain all the negative impacts of this bill.[00:23:43] Luca Soldaini: There's a lot of excitement about building these open models, or researching these open models. And lobbying is not sexy, it's kind of boring, but it's sort of necessary to make sure that this ecosystem can really[00:24:00] thrive. At the end of the presentation I have some links and emails, the standard thing, in case anyone wants to reach out, and if folks have questions or anything they want to discuss. Is there an open floor? I think we have Sophia,[00:24:16] swyx: who wants to cover one very important open model that we haven't covered: Mistral. It's nice to have the Mistral person recap the year in Mistral. But while Sophia gets set up, does anyone have thoughts or questions about the progress in this space?[00:24:32] Questions - Incentive Alignment[00:24:32] swyx: You always have questions.[00:24:34] Question: I'm very curious how we should build incentives to build open models, things like François Chollet's ARC Prize, and other initiatives like that. What is your opinion on how we should better align incentives in the community so that open models stay open?[00:24:49] Luca Soldaini: The incentive bit is, like, really hard.[00:24:51] Luca Soldaini: It's something that we actually think a lot about internally, because building open models is risky.[00:25:00] It's very expensive, and so people don't want to take risky bets. I think challenges like the ARC Prize are definitely very valid approaches for it.[00:25:13] Luca Soldaini: And then, in general, for any kind of effort to participate in those challenges: if we can promote doing that on top of open models, and really lean into this multiplier effect, I think that is a good way to go. It would also help if there were more money for[00:25:35] Luca Soldaini: efforts like research efforts around open models.
There's a lot of investment in companies that at the moment are releasing their models in the open, which is really cool, but it's usually more because of commercial interest than wanting to support open models in the long term. It's a really hard problem, because everyone is operating[00:26:00] at their[00:26:01] Luca Soldaini: local maximum, right? In ways that really optimize their position in the market. The global maximum is harder to achieve.[00:26:11] Question2: Can I ask one question?[00:26:12] Luca Soldaini: Yeah.[00:26:13] Question2: So I think one of the gaps between the closed and open source models is multilinguality. The closed source models like ChatGPT work pretty well on low-resource languages, which is not the same for the open source models, right?[00:26:27] Question2: So is it in your plan to improve on that?[00:26:32] Luca Soldaini: I think in general,[00:26:32] Luca Soldaini: yes. I think we'll see a lot of improvements there in, like, 2025. There are groups, on the smaller side, that are already working on better crawl support, multilingual support. I think what I'm trying to say here is that you really want experts[00:26:54] Luca Soldaini: who are actually in those countries, who speak those languages, to[00:27:00] participate in the international community. To give you a very easy example: I'm originally from Italy, and I think I'm terribly equipped to build a model that works well in Italian, because one of the things you need to be able to do is have the knowledge of, okay, how do I access libraries, or content that is from this region, that covers this language?[00:27:23] Luca Soldaini: I've been in the US long enough that I no longer know. So I think that's the effort that folks in Central Europe, for example, are doing: okay, let's tap into regional communities to get access, you know, to bring in collaborators from those areas. I think that's going to be very crucial for making progress there.[00:27:46] Mistral intro[00:27:46] Sophia Yang: Hi everyone. Yeah, I'm super excited to be here to talk to you guys about Mistral: a really short and quick recap of what we have done, and what kind of models and products we have released in the[00:28:00] past year and a half. Most of you may already know that we are a small startup, founded about a year and a half ago in Paris, in May 2023, by our three co-founders, and in September 2023 we released our first open source model, Mistral 7B. Yeah, how many of you have used or heard about Mistral 7B?[00:28:24] Sophia Yang: Hey, pretty much everyone. Thank you. Yeah, it's pretty popular, and our community really loved this model. In December 2023 we released another popular model, with the MoE architecture: Mixtral 8x7B. And, going into this year, you can see we have released a lot of things this year.[00:28:46] Sophia Yang: First of all, in February 2024, we released Mistral Small, Mistral Large, and le Chat, which is our chat interface; I will show you in a little bit. We released an embedding model for, you[00:29:00] know, converting your text into embedding vectors, and all of our models are available on the big cloud platforms. So you can use our models on Google Cloud, AWS, Azure, Snowflake, IBM.[00:29:16] Sophia Yang: So, very useful for enterprises who want to use our models through the cloud.
And in April and May this year, we released another powerful open source MoE model, Mixtral 8x22B. We also released our first code model, Codestral, which is amazing at 80-plus programming languages. And then we provided a fine-tuning service for customization.[00:29:41] Sophia Yang: Because we know the community loves to fine-tune our models, we provide a very nice and easy option for you to fine-tune our models on our platform. And we also released our fine-tuning code base, called mistral-finetune. It's open source, so feel free to take a look.[00:29:58] Sophia Yang: More models.[00:30:00] From July to November this year, we released many, many other models. First of all, the two new best small models: we have Ministral 3B, great for deploying on edge devices, and we have Ministral 8B; if you used to use Mistral 7B, Ministral 8B is a great replacement with much stronger performance than Mistral 7B.[00:30:25] Sophia Yang: We also collaborated with NVIDIA and open sourced another model, Mistral NeMo 12B, another great model. And just a few weeks ago, we updated Mistral Large to version 2, with updated state-of-the-art features and really great function calling capabilities; it supports function calling natively.[00:30:45] Sophia Yang: And we released two multimodal models: Pixtral 12B, which is open source, and Pixtral Large, amazing models for not only understanding images but also great at text understanding. So, yeah: a[00:31:00] lot of image models are not so good at textual understanding, but Pixtral Large and Pixtral 12B are good at both image understanding and textual understanding.[00:31:09] Sophia Yang: And of course, we have models for research: Codestral Mamba, which is built on the Mamba architecture, and Mathstral, which is great for working with math problems. So yeah, that's another model.[00:31:29] Sophia Yang: Here's another view of our model lineup. We have several premier models, which means these models are mostly available through our API. I mean, all of the models are available through our API, except for Ministral 3B. But the premier models have a special license, the Mistral Research License: you can use them for free for exploration, but if you want to use them in an enterprise, for production use, you will need to purchase a license[00:32:00] from us.[00:32:00] Sophia Yang: So on the top row here, we have Ministral 3B and 8B as our premier models; Mistral Small, best for low-latency use cases; Mistral Large, great for your most sophisticated use cases; Pixtral Large, the frontier-class multimodal model; and we have Codestral, great for coding, and then again the Mistral Embed model.[00:32:22] Sophia Yang: And at the bottom of the slide here, we have several Apache 2.0 licensed open-weight models, free for the community to use, and also, if you want to fine-tune them, use them for customization or production, feel free to do so. The latest, we have Pixtral 12B; we also have Mistral NeMo, Codestral Mamba and Mathstral, as I mentioned; and we have three legacy models that we don't update anymore.[00:32:49] Sophia Yang: So we recommend you move to our newer models if you are still using them. And then, just a few weeks ago,[00:33:00] we made a lot of improvements to our chat interface, le Chat. How many of you have used le Chat? Oh, no. Only a few. Okay. I highly recommend le Chat. It's chat.mistral.ai. It's free to use.[00:33:16] Sophia Yang: It has all the amazing capabilities I'm going to show you right now.
But before that: le Chat in French means "the cat," so this is actually a cat logo; you can tell these are the cat's eyes. Yeah. So first of all, I want to show you something. Maybe let's take a look at image understanding.[00:33:36] Sophia Yang: So here I have a receipt, and I want to ask... just going to get the prompt. Cool. So basically I have a receipt, and I said: I ordered, I don't know, the coffee and the sausage. How much do I owe? Add an 18 percent tip. So hopefully it was able to get the cost of the coffee and the[00:34:00] sausage and ignore the other things.[00:34:03] Sophia Yang: And yeah, I don't really understand this, but I think this is the coffee; it's 9.8. And then the cost of the sausage, we have 22 here. And then it was able to add the costs, calculate the tip, and all that. Great. So, it's great at image understanding, it's great at OCR tasks. If you have OCR tasks, please use it.[00:34:28] Sophia Yang: It's free on the chat. It's also available through our API. And also, I want to show you a canvas example. A lot of you may have used canvas with other tools before. But with le Chat, it's completely free. Here, I'm asking it to create a canvas that uses PyScript to execute Python in my browser.[00:34:51] Sophia Yang: Let's see if it works. Import this. Okay, so, yeah, it's executing[00:35:00] Python here. Exactly what we wanted. And the other day, I was trying to ask le Chat to create a game for me. Let's see if we can make it work. Yeah, the Tetris game. Yep. Let's just get one row. Maybe. Oh no. Okay. All right. You get the idea. I failed my mission. Okay. Here we go. Yay! Cool. Yeah. So as you can see, le Chat can write code for a simple game pretty easily, and you can ask le Chat to explain the code and make updates however you like. Another example: there is a bar here I want to move.[00:35:48] Sophia Yang: Okay, great, okay. And let's go back to another one. Yeah, we also have web search capabilities: like, you can[00:36:00] ask what's the latest AI news. Image generation is pretty cool: generate an image about researchers. Okay, in Vancouver? Yeah, it's Black Forest Labs' FLUX Pro. Again, this is free, so... oh, cool.[00:36:19] Sophia Yang: I guess the researchers here are mostly from the University of British Columbia. That's smart. Yeah. So this is le Chat. Please feel free to use it, and let me know if you have any feedback. We're always looking for improvements, and we're going to release a lot more powerful features in the coming years.[00:36:37] Sophia Yang: Thank you. Get full access to Latent Space at www.latent.space/subscribe
Does data culture eat data strategy for breakfast, too? Or do the two perhaps play together? And what is hiding behind these buzzwords? Christian Krug, host of the podcast "Unf*ck Your Data", discusses this with Marco Geuer, Head of Global Data Strategy & AI Solutions at FIEGE.▬▬▬▬▬▬ Profiles: ▬▬▬▬Marco's LinkedIn profile: https://www.linkedin.com/in/marco-geuer-the-data-economist/Christian's LinkedIn profile: https://www.linkedin.com/in/christian-krug/Christian's Wonderlink: https://wonderl.ink/@christiankrugUnf*ck Your Data on LinkedIn: https://www.linkedin.com/company/unfck-your-data▬▬▬▬▬▬ Book recommendation: ▬▬▬▬Marco's book recommendation: AI 2041 - Kai-Fu Lee, Qiufan ChenAll recommendations in Melena's bookshop: https://gunzenhausen.buchhandlung.de/unfuckyourdata▬▬▬▬▬▬ Where to find Unf*ck Your Data: ▬▬▬▬The podcast on Spotify: https://open.spotify.com/show/6Ow7ySMbgnir27etMYkpxT?si=dc0fd2b3c6454bfaThe podcast on iTunes: https://podcasts.apple.com/de/podcast/unf-ck-your-data/id1673832019The podcast on Deezer: https://deezer.page.link/FnT5kRSjf2k54iib6The podcast on YouTube: https://www.youtube.com/@unfckyourdata▬▬▬▬▬▬ Merch: ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬https://unfckyourdata-shop.de/▬▬▬▬▬▬ Contact: ▬▬▬▬Email: christian@uyd-podcast.com▬▬▬▬▬▬ Timestamps: ▬▬▬▬▬▬▬▬▬▬▬▬▬00:00 Culture and Strategy: An Inseparable Duo03:08 Data Culture and Its Importance for Companies06:01 The Role of Leadership in Shaping Culture08:50 Transformation and Maturity: Culture in Focus12:08 Goal Systems and Their Effects on Culture14:55 The Influence of Power Structures on Corporate Culture18:12 Trust and Openness: Keys to Successful Transformation21:00 Strategic Decisions and Their Cultural Implications24:08 The Path to Data-Driven Innovation27:34 The Importance of Openness in Corporate Culture30:24 Leadership and Employee Retention32:45 Learning Culture vs. Blame Culture36:40 Humanity in Corporate Culture40:34 Culture and Strategy in the Company43:59 Decision Transparency and Data Culture
In this episode, Kai-Fu and Peter discuss 01.AI's growth, Chinese entrepreneurship, and how open-source AI can impact the world. Recorded on Oct 19th, 2024 Views are my own thoughts; not Financial, Medical, or Legal Advice. Kai-Fu Lee is the Chairman and CEO of Sinovation Ventures, a venture capital firm he founded in 2009 that manages over $2 billion in assets and focuses on fostering the next generation of Chinese high-tech companies. In 2023, Lee launched 01.AI, a startup that built AI applications tailored for China, including Wanzhi, a productivity assistant similar to Microsoft Office 365 Copilot. As a leading figure in artificial intelligence, Lee continues to shape the tech landscape in China, where he recently noted that Chinese AI models are only 6 to 9 months behind their U.S. counterparts. He has authored influential books such as AI Superpowers (2018) and AI 2041 (2021) and was named one of Time Magazine's 100 most influential people in 2013. Earlier in his career, Lee held prominent positions in tech, including Vice President at Google, President of Google China (2005-2009), and Corporate Vice President at Microsoft (2000-2005). He also founded and led Microsoft Research Asia from 1998 to 2000. Lee remains a highly respected thought leader in AI and continues to drive innovation in the field. Beago: https://www.beago.ai/ 01.AI: https://www.01.ai/ Kai-Fu's X: https://x.com/kaifulee Kai-Fu's LinkedIn: https://www.linkedin.com/in/kaifulee/ Pre-Order my Longevity Guidebook here: https://longevityguidebook.com/ ____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter Get 15% off OneSkin with the code PETER at https://www.oneskin.co/ #oneskinpod Get real-time feedback on how diet impacts your health with https://join.levelshealth.com/peter/ _____________ I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Blog _____________ Connect With Peter: Twitter Instagram Youtube Moonshots
Kai-Fu Lee is famous for being the author of the book AI Superpowers. Recently, he founded the company 01, which is dedicated to training foundation models. In recent statements he claims that they have trained a model called Yi-Lightning at the level of GPT-4 with just 3 million dollars. Today on the roundtable, investigative journalism: is Yi-Lightning a "cuento chino" (a tall tale)? Participating in the discussion: Josu Gorostegui, Víctor Goñi and Guillermo Barbadillo. Remember that you can send us questions, comments and suggestions at: https://twitter.com/TERTUL_ia More info at: https://ironbar.github.io/tertulia_inteligencia_artificial/
Kai-Fu Lee is the co-founder of Sinovation Ventures, a venture capital firm funding Chinese start-ups. Before this, he was the president of Google China, and founder of Microsoft Research Asia. He's also the inventor of a speaker-independent, continuous speech recognition system. In our conversation we discuss: [00:00] - Introductions[01:50] - Kai Fu's background[05:44] - What is the first AI[11:56] - AI's future[15:01] - China's strength in AI technology[20:09] - China's Super App[24:38] - 996 and 997[28:10] - Engineering background[30:30] - Opportunities from AI[38:30] - AI as a double-edged sword[44:03] - Contribution to society[46:08] - Four Quadrants[49:39] - Relevant skillsets[52:59] - Humans falling in love with technology[55:54] - Working smart vs working hard[58:09] - Taking risks[59:20] - Longevity Watch full episodes on: https://www.youtube.com/@seankim?sub_confirmation=1 Connect on IG: https://instagram.com/heyseankim
AI REVOLUTION AND THE FUTURE OF WORK BY 2030 Is your career AI-proof? We dove deep into the future of work with AI expert Marynês Pereira on the latest episode of the Disruptive Minds Podcast! Marynês breaks down the impact of AI on different professions, using Kai-Fu Lee's "Four Zones of Risk" framework. From the "safe zone" to the "danger zone," we explore which jobs are most likely to be impacted by automation and what skills will be essential to thrive. We even discuss the fascinating "human veneer" zone and what it means for human interaction in the workplace. This episode is powered by AI - even the voice actors!
AI REVOLUTION AND THE FUTURE OF WORK BY 2030 - PODCAST IN PORTUGUESE I had to use a PROMPT in this technology and "force" it into our language; however, it's still not good, just compare with the previous post, which has the English audio on the same topic. Is your career AI-proof? We dove deep into the future of work with AI expert Marynês Pereira in this episode of the Disruptive Minds Podcast. Marynês analyzes the impact of AI on different professions, using Kai-Fu Lee's "Four Zones of Risk" framework. From the "safe zone" to the "danger zone," we explore which jobs are most likely to be impacted by automation and which skills will be essential to thrive. We even discuss the fascinating "human veneer" zone and what it means for human interaction in the workplace. This episode is powered by AI - even the voice actors!
In this episode, Peter is joined by leaders in the "BEYOND GPT MODELS — WHAT IS THE DECADE AHEAD?" panel at the 8th FII Summit to discuss how AI will impact industries beyond large language models. This includes: Dr. Kai-Fu Lee, Chairman & CEO, Sinovation Ventures, CEO, 01.AI Richard Socher, CEO & Founder, you.com, Co-Founder & Managing Director, AIX Ventures Prem Akkaraju, CEO, Stability AI Recorded on Oct 30th, 2024 Views are my own thoughts; not Financial, Medical, or Legal Advice. Learn more about the Future Investment Initiative Institute (FII): https://fii-institute.org/ _____________ I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Blog Learn more about my executive summit, Abundance360: https://www.abundance360.com/ _____________ Connect With Peter: Twitter Instagram Youtube Moonshots
Send me a Text Message hereFULL SHOW NOTES https://podcast.nz365guy.com/606 Prepare yourself for an eye-opening exploration as we uncover why cybercrime could be the world's third-largest economy. Tune in to hear Ana, Andrew, William, and Chris's exhilaration over Trevor Noah's enlightening talk and Vasu Jakkal's captivating cybersecurity panel, where we unravel the staggering statistic of 4,000 cyberattacks per second. Discover the vital importance of zero trust policies and proactive security measures in a world where AI-driven threats are ever-evolving.Imagine a future where AI agents handle tasks with the finesse of seasoned travel experts. Our latest discussion takes you through the evolution of AI from basic automation to the sophisticated orchestration of agents capable of independent action. We dive into the importance of data accuracy to prevent chaos and highlight the potential of orchestrators and agent chaining as powerful tools for optimizing intelligent systems, transforming the way we work; a minimal sketch of that chaining pattern follows below.As we navigate the complex waters of Responsible AI, we focus on balancing technology with ethical practices and human elements. Learn about the implementation of Responsible AI waivers and tools like Copilot Studio that manage liability and ethical AI deployment. Join us as we contemplate the challenges of managing personal data, the rise of deepfakes, and the broader implications of AI through insights from Kai-Fu Lee's "AI 2041." Our conversation wraps up with a call for feedback and innovation to enhance software estate value, inviting you to be a part of this transformative journey.90 Day Mentoring Challenge 10% off code use MBAP at checkout https://ako.nz365guy.comSupport the showIf you want to get in touch with me, you can message me here on Linkedin.Thanks for listening
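The orchestrator-and-agent-chaining idea mentioned above can be sketched in a few lines. The agents below are hypothetical placeholder functions (in a real system each would wrap an LLM or tool call), so this shows only the shape of the pattern, not any particular product's API.

# A minimal sketch of agent chaining under an orchestrator: each agent
# consumes the previous agent's output. The agents are hypothetical
# placeholders; the verification step gestures at the data-accuracy
# concern discussed in the episode.
from typing import Callable, List

Agent = Callable[[str], str]

def plan_trip(request: str) -> str:
    # Drafting agent: turns a user request into a candidate itinerary.
    return f"itinerary drafted for: {request}"

def check_facts(draft: str) -> str:
    # Verification agent: guards output quality before it reaches the user.
    return f"verified({draft})"

def orchestrate(agents: List[Agent], user_input: str) -> str:
    result = user_input
    for agent in agents:  # chain: the output of one agent feeds the next
        result = agent(result)
    return result

print(orchestrate([plan_trip, check_facts], "3 days in Vancouver"))

The design point is that the orchestrator, not the agents, owns the control flow, which makes it the natural place to add logging, retries, or a human-in-the-loop check between steps.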
This week we talk about Taylor Swift, knife attacks, and immigration politics.We also discuss immigration rationales, riffraff, and terrorist plots.Recommended Book: AI 2041 by Kai-Fu Lee and Chen QiufanTranscriptAmerican musician, singer, and songwriter Taylor Swift, at age 34, recently became the world's first music industry billionaire whose primary source of income is their music—as opposed to side-businesses, work, and royalties in adjacent or completely disconnected industries.A lot of that wealth has stemmed from her incredibly successful, and ongoing—as of the day I'm recording this at least—Eras tour, which began in March of 2023 and which is her sixth tour, and by far the biggest in scope, scale, and success.The Eras Tour, by itself, has surpassed a billion dollars in revenue—the first tour to ever hit that milestone—and it's had all sorts of interesting direct ramifications and repercussions, like bolstering Swift's music sales and streams, but also indirect ones, like creating a sort of economic weather system wherever these tour stops are planned: it's been estimated, for instance, that the Eras Tour contributed something like $4.3 billion to the GDP of the United States, and the WSJ dubbed these economic impacts "Taylornomics," as the combination of travel, food, entertainment, and other spending surrounding her tour dates, folks coming from all around the world to visit the relevant cities, attend the concert, and spend on those sorts of things while in town, has all had a meaningful impact legible in even the huge-scale numbers of national income figures.Swift, then, has been having quite the moment, following the several decades of work in this industry leading up to this tour.And the swirl of activity—economic and cultural, especially—around her Eras Tour stops has made these events central to the collective consciousness, grabbing lots of airtime and watercooler talk wherever she shows up, because of how much of an event each of these stops is; and notably, they have been very well reviewed, in terms of the performance, the sets, the planning, everything—so it would seem that the attention being focused on these shows isn't superficial and reflexive, it's the result of having put together something pretty special for people who are willing to spend to attend that kind of event.It maybe shouldn't come as a surprise, then, that there may be people out there looking to garner attention for themselves and their causes who see these events as an opportunity to do exactly that.Three sold-out shows in Vienna, Austria were cancelled in early August due to a plot by what seems to have been several teenagers looking to kill as many people as possible outside the tour's local concert venue.An investigation into this plot is ongoing, so there's still a fair bit we don't know, but what's been divulged so far is that three people have been connected to the plot and detained; the main suspect is a 19-year-old who planned to use knives and/or explosives to kill as many of the 30,000 or so onlookers who gather outside the show venues each night as possible—and another 65,000 people would have been inside the venue, so that's a lot of people, and a lot of potential for stampede-related injuries and deaths, alongside those that could be caused with knives and bombs—and that he, alongside two other suspects, a 17-year-old and an 18-year-old, was inspired by the Islamic State group and al-Qaida—the 18-year-old, who is an Iraqi citizen, apparently having pledged himself to the Islamic
State.Propaganda materials from both terrorist organizations were found at the 17-year-old's home, alongside bomb-making materials, and he was hired just a few days before being caught by a company that provides some type of service to the concert venue; specifics about what said company provides haven't been officially divulged yet, but the theory is that this job was meant to give him and his accomplices some kind of access, allowing them to do what they intended to do more effectively.There were a lot of disappointed Swift fans in Vienna who in some cases spent thousands of dollars just getting and staying there for the concert, only to be told that it was cancelled; most of the response from those affected in this way seems to be relatively upbeat, though, considering the circumstances, pretty much everyone breathing a sigh of relief that this plot wasn't pulled off successfully, which could have resulted in something like what happened at Manchester Arena in 2017, when an Ariana Grande concert was attacked by an Islamic extremist with a bomb who killed 22 people and injured more than 1,000.Swift's representatives have said that her next concerts, scheduled for August 15 through the 20th, are still on the books and ready to go, at London's Wembley Stadium, which will close out the European leg of this record-setting tour.London's mayor has said that local authorities are prepared for whatever might happen, having learned a lot from that aforementioned Ariana Grande concert in 2017.What I'd like to talk about today is a bout of violent rioting that broke out in the UK recently, which is loosely connected to Swift and her music, though only adjacently, and is primarily focused on the roiling topic of immigration and its British discontents.—At the tail-end of July, 2024, there was a knife attack in Southport, a town in northwestern England, in which three young girls were killed and ten other people, eight of whom were also children, were injured—some very badly injured.This attack targeted a Taylor Swift-themed yoga and dance workshop that catered to children ages 6 to 11, twenty-five of whom were in attendance—hence that large number of young victims.
And the adults who tried to stop the attacker were all themselves injured, in some cases critically, and the assailant was only ultimately halted when a pair of police officers managed to subdue him.The person behind this attack, and those murders, is a 17-year-old British citizen who was arrested at the scene, and whose identity was initially concealed from the public because of how privacy laws work in the UK, related to minors; they tend not to divulge identifying details when crimes are committed by people who are legally children, though in this case they ultimately decided to do so, for reasons I'll get into in a moment.Thus far, there isn't a clear motive behind this attack—the attacker has been diagnosed with autism spectrum disorder and was apparently deep in the midst of some kind of self-imposed isolation leading up to his apparent decision to take a taxi to this workshop and kill a bunch of children.He's been charged with possession of a bladed article, ten counts of attempted murder, and three counts of murder, and his trial date is currently set for the end of January in 2025.This attack is currently not being treated as a terrorist incident, though again, no clear motive has been established, and there's a lot that's not known, and likely quite a bit that hasn't been publicly divulged yet.This knife attack, unto itself, led to a lot of headlines and attention because of how just brutal and horrible it was.But in the aftermath of the attack, possibly because the attacker wasn't named, again, because he was a minor, rumors and then outright misinformation began to spread around less-than-legitimate news entities in the UK, and across social media platforms and messaging apps like Telegram, many of them suggesting or directly alleging that the attacker was someone he was not—a false name was given to him by some of those spreading these rumors—and even in cases when a name wasn't misattributed or fabricated, he was alleged to be an immigrant seeking asylum—which is also incorrect; his parents are from Rwanda, but he was born in Cardiff, and is thus a British citizen.Within days these rumors and this mis- and dis-information, this accidental and purposeful spreading of mistruths, began to reach a fever-pitch, the zone flooded with patently untrue claims and narratives, which is why the police decided to release the attacker's name publicly on August 1; it was going to happen within a week or so, anyway, because he was turning 18 on the 7th, so the idea was to get ahead of that impending forced divulgence, and to try to temper some of that false information spread within facets of British society in the meantime.Most of the false stories, though, hung on, even after officials made this information public, and to understand why, it's important to understand what a political force anti-immigration sentiment has become in Britain over the past few decades.The British aren't alone in this, of course: especially in wealthier countries, mostly but not exclusively conservative politicians and parties have been making hay with claims about folks from other countries coming into their territory, taking their jobs, gobbling up their social services, and changing their culture into something those who came before feel they no longer recognize.Part of this is just the consequence of societies changing being reframed into something devilish and wrong, part of it is the reframing of stagnating economic conditions as something that's being done to their societies by outside forces, not by 
uncontrollable macro variables like pandemics, and controllable variables that are being mismanaged by those in power.Part of it, though, is related to real-deal demographic shifts, as folks flee from repression, violence, economic deterioration, and dangerous climatic happenings in less-wealthy parts of the world to those that are currently not suffering from these things, or not to the same degree.Thus, we see waves of people show up to the US's southern border, all hoping to get into the country, legally or otherwise, and the same is true of European nations with Mediterranean coasts, and, as is the case here, people arriving legally, by ship and plane and train into England, but also illegally, often in makeshift boats crossing the channel, the people who arrive in this way arrested and filtered into a system that often moves sluggishly and puts these new arrivals up in hotels or other housings for the duration of their processing at government expense.From the perspective of someone in a smaller British town, then, where the economic conditions are not much better than those in a similarly sized town in a much poorer country—since London is the only city in Britain doing really well in that regard, right now—this looks like a bunch of people from elsewhere, who don't belong, taking resources that should go to them, should be spent on their housing and healthcare, should be making jobs for them, and the ones that are allowed to stay continue to take those resources, leaving a lot less to go around, again, in circumstances in which it already feels like there's not anywhere near enough—no chance for growth, few opportunities, and diminishing value in the social services they've been promised.These are potent political topics, then, because in some cases these are real-deal issues already, and in others it can be useful and effective to stoke fears that this could happen in the future, if we allow these foreigners to keep flooding across our borders, legally or illegally.In the UK, the Conservatives, the Tories, have used this issue as a very effective lever, and at the height of fervor about this topic, they seemingly accidentally led the country to a successful Brexit referendum in 2016, leading to the UK leaving the European Union, in large part because this would allow them to set their own immigration policies separate from those of the EU, which are much more open in terms of movement between member nations.All of which, I think, helps explain what happened next, following that knife attack, and the torrent of false information that flooded the zone following the attack, which included all sorts of claims that the attacker was an illegal immigrant, that he was a Muslim, and that if nothing else, he was a black teenager who had brutally murdered several young British girls.On July 30, a crowd in Southport gathered outside a local mosque and started causing trouble and making threats.
The police showed up to keep the peace, and the protestors attacked them, set fire to a police van, and damaged the mosque—in the process injuring more than 50 police officers and three police dogs.This initial group of protestors was formed around a nucleus of people belonging to the English Defense League, which is a far-right, anti-Muslim organization, and members of Patriotic Alternative, which is a neo-Nazi group.Similar protests that became riots popped up in cities across the country in subsequent days, and amidst all the resulting tumult, a police station was set on fire in Sunderland, and more mosques, alongside businesses and homes owned by people who were purported to be, often incorrectly, immigrants, were also damaged or destroyed.Hotels housing asylum seekers were attacked, and something like 750 of these anti-immigration rioters have been arrested, thus far.The nature of these riots changed on August 7, when a protest, populated by the same sorts of people as were seen at the other ones, mostly anti-immigrant, neo-Nazi, and far-right folks, was met by a group of anti-racist counter-protestors that dramatically outnumbered them. The police helped support the peaceful anti-racist protestors, and since that day, most of these would-be riots have been quelled by oversized groups of counter-protestors augmented by a police presence.Before that tactic arose, though, several cities saw a handful of nights in a row of rioting by those far-right groups, many of them pillaging and burning shops, and attacking strangers and the police, and the government has gone out of its way to really throw the book at folks they've arrested, handing down significant punishments to some of the instigators of these riots, in particular, while also publishing their names and faces, in an attempt to embarrass and make examples of them.As of the second week of August, we're still seeing periodic attacks on mosques and attempted protests and riots by far-right activists pop up here and there, though they're happening a lot less frequently than in previous weeks, and peaceful anti-fascist, anti-racist protests have become a lot more common, in response to attempted riots, but also unto themselves.There are several ways of looking at what has happened here, in response to that attack, and in response to the riots that followed.One narrative of all this is that far-right politicians and ideologues in the country have attempted to convert a truly horrible event into something it wasn't for the purpose of regaining some of the power they lost with the last round of parliamentary elections.It's been alleged by the new British Prime Minister, Keir Starmer, that these riots were instigated by far-right troublemakers like Nigel Farage, who was one of the key proponents of Brexit, and who has recently reemerged in British politics as the leader of a vehemently anti-immigrant, further-right than the Conservatives party in the country.Farage and similar anti-immigrant leaders flooded the informational zone with disinformation and nudge-nudge-wink-wink innuendo that implied this was one of the consequences of allowing immigrants into the UK, and that, according to Starmer and other government leaders, led to some of this violence and destruction—they've even hinted that it might be prudent to clamp down on those sorts of posts and false claims, because of the real-world consequences that can follow; though that hint has been met with skepticism and worry from free-speech advocates.It's also been alleged that foreign
agencies, like those in Russia, have been helping amplify these false claims, as part of their larger effort to sow discord and to augment the potency of reactionary groups in countries they want to influence, and folks who have aligned themselves with global conservative movements, like Tesla CEO Elon Musk, have been accused of doing the same, Musk himself sharing a lot of misinformation related to this attack and the riots that followed on the social network he bought, formerly Twitter, now X, clashing with the new Labour government on the network while saying that he believes a civil war is inevitable in the UK.So we could look at all this through the political-leverage lens, as there's a lot of power to be gained by successfully attaching reins to this sort of movement, and amplifying trouble for those in power, if those in power are your political rivals.We might also look at this through the lens of actual on-the-ground issues, though.There was a piece in the Financial Times, recently, in which it was posited that how we perceive these riots, and the people sparking and perpetuating them, will tend to depend on whether we subscribe to the "rational actor" or "riffraff" models of riot interpretation.The rational actor model says people who riot are doing what they do because of real-deal grievances that they can't seem to get addressed in any other way, while the riffraff model says rioters are basically low-lifes who have nothing better to do, and/or who like to mindlessly give in to the animalistic urge to belittle, attack, and maybe even kill those who seem different from them and theirs, and all they're looking for is an excuse to do so.Most social scientists, today, support the rational actor model, suggesting that even people who lean toward violence will keep those behaviors tamped-down most of the time, and only allow them out at moments in which they feel like there's no other way to get themselves and their grievances heard and addressed; and that's true whether we're talking about people of color rioting because they feel like they're being unfairly and violently targeted by police, or when it's mostly white, British Christians who feel like they're being elbowed out of society by Muslim immigrants and various other people of color.That folks like Farage might step in and try to ride this kind of wave, then, might ultimately be less important than identifying a pressure-valve that'll allow these sorts of grievances to be worked out and addressed in socially positive, legal ways.Government healthcare infrastructure in many of these areas is stretched to the breaking point, social safety nets are unraveling, and years of Conservative-instigated austerity measures have left these and other social baselines way below where they were in recent memory—and the messaging from the Conservative Party has been that immigrants are to blame, not their good and logical and responsible monetary policies.Starmer, as the head of the new Labour government, which won the last election in a landslide, booting the Conservatives from their perch for the first time in a long time, has the opportunity to address these issues, then, but it's likely he'll have to do so in such a way that the actual problems are addressed—providing better funding for these services, helping stimulate more economic activity so there are enough, and high-quality enough jobs for everyone, and ensuring there's enough immigration so that systems that rely on folks coming in from elsewhere (which is especially true of the 
nation's healthcare system and its construction industries) can function properly—while also addressing some of the seeming issues, like cracking down on illegal immigration; which probably isn't the core problem it's been promoted as by the Conservatives and those to their political right, but is an issue, and is something Starmer has said he will crack down on.It currently seems like he might aim to grease the wheels of the immigration system, so that folks applying for citizenship are processed faster, which will mean fewer resources expended putting them up in the country while they're waiting to see if they can stay—which would possibly help free up government resources, while also representing a kinder model for those who are otherwise left in limbo for long periods of time.Whether this can be framed and communicated correctly by the Labour government so that it appeals to those who are worried about immigration is an open question, though, as it's possible that anything other than a hardcore lockdown of the borders and a denial of new immigration requests—which would cause even more chaos in the country's healthcare and other immigration-related systems—might seem like non-solutions, even if they technically solve some of the underlying problems the rioters have been complaining about.Show Noteshttps://en.wikipedia.org/wiki/2024_Southport_stabbinghttps://www.cnn.com/2024/07/30/uk/taylor-swift-southport-stabbing-reaction-gbr-intl/index.htmlhttps://www.thetimes.com/article/e87b09fb-b8fe-408d-a961-c89e6ae0f098?shareToken=620a021a38d86ed3af11587c36a52afdhttps://www.bbc.com/news/articles/c5y38gjp4ygohttps://apnews.com/article/britain-unrest-riot-southport-police-response-ec348340c7d223f0117ae8c62638dd6fhttps://newrepublic.com/article/184691/day-riots-stopped-ukhttps://www.reuters.com/world/uk/uk-examines-foreign-states-role-sowing-discord-leading-riots-2024-08-05/https://www.bbc.com/news/articles/cp35w0kj2y4ohttps://www.aljazeera.com/news/2024/8/8/why-is-elon-musk-clashing-with-the-uk-government-over-far-right-riotshttps://archive.ph/vKdeuhttps://www.bbc.com/news/articles/clyg7dzr8wkohttps://www.newstatesman.com/comment/2024/08/is-cocaine-driving-the-british-riotshttps://www.lrb.co.uk/blog/2024/august/this-time-it-s-worsehttps://www.bristolpost.co.uk/news/bristol-news/amid-chaos-far-right-protests-9459421https://www.cnn.com/2024/07/29/uk/northern-england-stabbing-intl/index.htmlhttps://apnews.com/article/uk-southport-children-stabbed-dance-class-8a9c7d7ed01441ce96332cd3d1250e43https://en.wikipedia.org/wiki/2024_United_Kingdom_riotshttps://en.wikipedia.org/wiki/English_Defence_Leaguehttps://en.wikipedia.org/wiki/Patriotic_Alternativehttps://www.independent.co.uk/news/uk/politics/southport-far-right-disinformation-russia-b2589041.htmlhttps://www.theguardian.com/uk-news/article/2024/jul/30/misinformation-southport-attack-suspect-social-media-conspiracy-theorieshttps://en.wikipedia.org/wiki/Brexithttps://www.ft.com/content/a0a4fb08-40cc-4627-a58f-b3a8d2d0e009?accessToken=zwAGH1UwRNrgkdOgpPsIQMxGJ9Olj7Oo0tDgCQ.MEYCIQChxhfA2SBamOb_Y_c0vQwPJmzXo0fHfucpW2v_dBGr2gIhANMcXEtBzZqY7R0Z9RkAZMkEoGMSy5P49MRnprFYWvAH&sharetype=gift&token=75895b79-b6c8-4e1f-a3ab-dc4d87161131https://www.nbcnews.com/news/world/taylor-swift-concert-terror-plot-austria-foiled-2-men-arrested-shows-w-rcna165591https://en.wikipedia.org/wiki/Manchester_Arena_bombinghttps://en.wikipedia.org/wiki/Taylor_Swifthttps://www.investopedia.com/swiftonomics-definition-8601178https://en.wikipedia.org/wiki/Impact_of_the_Eras_Tourhttps://apnew
s.com/article/austria-taylor-swift-concerts-canceled-extremism-arrests-17b494f1a164b205128d7faeb607e731https://www.billboard.com/music/music-news/taylor-swift-vienna-terror-plot-third-person-detained-1235750067/https://apnews.com/article/austria-taylor-swift-concerts-canceled-extremism-arrests-feff9108d0a14d9941c4bc416c0eb05f This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
In this new episode, Matthieu Stefani meets more of the subscribers to the OVNI Ventures fund, who now number more than 150. This time it's Caroline who gets the chance to talk with him about her professional and personal journey. She recounts the Via France adventure she shared with her first husband and business partner, and in particular the complications that arrangement created in their shared company. An adventurer, she first got involved in the business as an angel investor, with just 30,000 euros in savings. The company went on to become a reference among French, then European and global, databases of tourism and event information. Caroline describes the challenges she faced, notably managing suppliers and negotiating content rights. She explains how she optimized production processes to cut costs and increase the value delivered to her clients. At the time, Via France managed to sell its content to major players like Accor, Eurostar, and Air France. Caroline explains that while the duo she formed with her husband drove value creation in the company's early years, she was later brought to the edge of the precipice: at the moment of the sale, she had to overcome personal challenges entangled with professional ones, including a divorce and numerous complex legal proceedings. Despite these obstacles, she managed to sell the company and maintain relationships of trust with her partners, clients, and employees. Caroline also discusses her investment in the OVNI fund, to which she hopes to contribute her expertise in due diligence and/or as an operating partner. She stresses the importance of honesty and respect in professional relationships, and recommends two books: "À l'ère de l'intelligence artificielle" by Kai-Fu Lee and "Le Petit Prince".
Hat Tip to this week's creators: @leopoldasch, @JoeSlater87, @GaryMarcus, @ulonnaya, @alex, @ttunguz, @mmasnick, @dannyrimer, @imdavidpierce, @asafitch, @ylecun, @nxthompson, @kaifulee, @DaphneKoller, @AndrewYNg, @aidangomez, @Kyle_L_Wiggers, @waynema, @QianerLiu, @nicnewman, @nmasc_, @steph_palazzolo, @nofilmschool

Contents
* Editorial
* Essays of the Week
* Situational Awareness: The Decade Ahead
* ChatGPT is b******t
* AGI by 2027?
* Ilya Sutskever, OpenAI's former chief scientist, launches new AI company
* The Series A Crunch Is No Joke
* The Series A Crunch or the Seedpocalypse of 2024
* The Surgeon General Is Wrong. Social Media Doesn't Need Warning Labels
* Video of the Week
* Danny Rimer on 20VC - (Must See)
* AI of the Week
* Anthropic has a fast new AI model — and a clever new way to interact with chatbots
* Nvidia's Ascent to Most Valuable Company Has Echoes of Dot-Com Boom
* The Expanding Universe of Generative Models
* DeepMind's new AI generates soundtracks and dialogue for videos
* News Of the Week
* Apple Suspends Work on Next Vision Pro, Focused on Releasing Cheaper Model in Late 2025
* Is the news industry ready for another pivot to video?
* Cerebras, an Nvidia Challenger, Files for IPO Confidentially
* Startup of the Week
* Final Cut Camera and iPad Multicam are Truly Revolutionary
* X of the Week
* Leopold Aschenbrenner

Editorial

I had not heard of Leopold Aschenbrenner until yesterday. I was meeting with Faraj Aalaei (a SignalRank board member) and my colleague Rob Hodgkinson when they began to talk about "Situational Awareness," his essay on the future of AGI and its likely speed of emergence.

So I had to read it, and it is this week's essay of the week. He starts his 165-page epic with:

Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them.

So, Leopold is not humble. He finds himself "among" the few people with situational awareness.

As a person prone to bigging up myself, I am not one to prematurely judge somebody's view of self. So, I read all 165 pages.

He makes one point: the growth of AI capability is accelerating. More is being done at lower cost, and the trend, he argues, will continue until superintelligence arrives by 2027. At that point, billions of skilled bots will solve problems at a rate we cannot imagine. And they will work together, with little human input, to do so.

His case is developed using linear progression from current developments. According to Leopold, all you have to believe in is straight lines.

He also has a secondary narrative related to safety, particularly the safety of models and their weightings (how they achieve their results).

By safety, he does not mean the models will do bad things. He means that third parties, namely China, can steal the weightings and reproduce the results. He focuses on the poor security surrounding models as the problem, and he deems governments unaware of the dangers.

Although German-born, he argues in favor of a US-led effort to treat AGI as a weapon to defeat China, and warns of dire consequences if the US does not lead. He sees the "free world" as in danger unless it stops others from gaining the sophistication he predicts, in the time he predicts.

At that point, I felt I was reading a manifesto for World War Three.

But as I see it, the smartest people in the space have converged on a different perspective, a third way, one I will dub AGI Realism.
The core tenets are simple:

* Superintelligence is a matter of national security. We are rapidly building machines smarter than the smartest humans. This is not another cool Silicon Valley boom; this isn't some random community of coders writing an innocent open source software package; this isn't fun and games. Superintelligence is going to be wild; it will be the most powerful weapon mankind has ever built. And for any of us involved, it'll be the most important thing we ever do.

* America must lead. The torch of liberty will not survive Xi getting AGI first. (And, realistically, American leadership is the only path to safe AGI, too.) That means we can't simply "pause"; it means we need to rapidly scale up US power production to build the AGI clusters in the US. But it also means amateur startup security delivering the nuclear secrets to the CCP won't cut it anymore, and it means the core AGI infrastructure must be controlled by America, not some dictator in the Middle East. American AI labs must put the national interest first.

* We need to not screw it up. Recognizing the power of superintelligence also means recognizing its peril. There are very real safety risks; very real risks this all goes awry—whether it be because mankind uses the destructive power brought forth for our mutual annihilation, or because, yes, the alien species we're summoning is one we cannot yet fully control. These are manageable—but improvising won't cut it. Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered.

As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty.

I persisted in reading it, and I think you should, too—not for the war-mongering element but for the core acceleration thesis.

My two cents: Leopold underestimates AI's impact in the long run and overestimates it in the short term, but he is directionally correct.

Anthropic released v3.5 of Claude.ai today. It is far faster than the impressive 3.0 version (released a few months ago) and costs a fraction to train and run. It is also more capable. It accepts text and images, and has a new feature called 'Artifacts' that allows it to run code, edit documents, and preview designs.

Claude 3.5 Opus is probably not far away.

Situational Awareness projects trends like this into the near future, and his views are extrapolated from that perspective.

Contrast that paper with "ChatGPT is B******t," a paper coming out of Glasgow University in the UK. The three authors contest the accusation that ChatGPT hallucinates or lies. They claim that because it is a probabilistic word finder, it spouts b******t. It can be right, and it can be wrong, but it does not know the difference. It's a bullshitter.

Hilariously, they define three types of BS:

B******t (general): Any utterance produced where a speaker has indifference towards the truth of the utterance.

Hard b******t: B******t produced with the intention to mislead the audience about the utterer's agenda.

Soft b******t: B******t produced without the intention to mislead the hearer regarding the utterer's agenda.

They then conclude:

With this distinction in hand, we're now in a position to consider a worry of the following sort: Is ChatGPT hard b**********g, soft b**********g, or neither? We will argue, first, that ChatGPT, and other LLMs, are clearly soft b**********g.
However, the question of whether these chatbots are hard b**********g is a trickier one, and depends on a number of complex questions concerning whether ChatGPT can be ascribed intentions.

This is closer to Gary Marcus's point of view in his 'AGI by 2027?' response to Leopold. It is also below.

I think the reality is somewhere between Leopold and Marcus. AI is capable of surprising things, given that it is only a probabilistic word-finder, and its ability to do so is becoming cheaper and faster. The number of times it is useful easily outweighs, for me, the times it is not. Most importantly, AI agents will work together to improve each other and learn faster.

However, Gary Marcus is right that reasoning and other essential decision-making characteristics are not logically derived from an LLM approach to knowledge. So, without additional or perhaps different elements, there will be limits to where it can go. Gary probably underestimates what CAN be achieved with LLMs (indeed, who would have thought they could do what they already do), and Leopold probably overestimates the lack of a ceiling in what they will do, and how fast that will happen.

It will be fascinating to watch. I, for one, have no idea what to expect except the unexpected.

OpenAI co-founder Ilya Sutskever weighed in, too, with a new AI startup called Safe Superintelligence Inc. (SSI). The most important word here is superintelligence, the same word Leopold used. The next phase is focused on higher-than-human intelligence, which can be reproduced billions of times to create scaled superintelligence.

The Expanding Universe of Generative Models piece below places smart people in the room to discuss these developments: Yann LeCun, Nicholas Thompson, Kai-Fu Lee, Daphne Koller, Andrew Ng, and Aidan Gomez are participants.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thatwastheweek.com/subscribe
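To make the Glasgow paper's "probabilistic word finder" point concrete, here is a deliberately tiny Python sketch. It is our own toy illustration, not anything from the paper and nothing like how GPT models are actually built: a sampler that picks the next word from a fixed probability table is indifferent to truth by construction. The probability table and sentences are invented for the example.

```python
import random

# A toy "probabilistic word finder." Given a context, it samples the next
# word from a fixed probability table. It has no representation of truth,
# so whatever it emits is produced with indifference to truth, which is
# the paper's definition of b******t. (Table and sentences are invented.)
NEXT_WORD = {
    "the moon is made of": [("rock", 0.6), ("cheese", 0.3), ("plasma", 0.1)],
}

def sample_next(context: str) -> str:
    words, weights = zip(*NEXT_WORD[context])
    return random.choices(words, weights=weights, k=1)[0]

random.seed(1)
for _ in range(5):
    print("the moon is made of", sample_next("the moon is made of"))
# Some runs print "rock" (true), some "cheese" (false); the sampler cannot
# tell the difference, and nothing in it intends to mislead: soft b******t.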
[Futurism] Review of the book "AI 2041: Ten Visions for Our Future," by Kai-Fu Lee and Chen Qiufan. The written version of this review is at this link. What will our world look like in 2041? How will we be living in just under 20 years? The visionary Kai-Fu Lee, former president of Google China and a senior executive at Apple and Microsoft, has more than enough knowledge and experience to make that prediction. So he teamed up with Chen Qiufan to create ten short fiction stories set in 2041, illustrating his ideas, projections, and analyses across different areas of technology. And you know what? The result is excellent! Come listen! Here is the link to the review of his previous book, AI Superpowers. And here is the link to buy the book on Amazon Brazil.
AI expert Kai-Fu Lee predicts that by 2027, 50% of people who are currently employed will be out of jobs due to AI. This prediction echoes a government report that estimated 49% of people would be out of jobs by 2050, and the rapid pace of AI development suggests that the government report may be too conservative. To stay ahead, entrepreneurs should stay informed about AI, attend local meetups and industry events, and read about AI for at least an hour a day. By embracing AI, entrepreneurs have the opportunity to thrive in the changing landscape.

Takeaways
* AI expert Kai-Fu Lee predicts that 50% of people who are currently employed will be out of jobs by 2027 due to AI.
* Entrepreneurs should stay informed about AI to stay ahead in the changing job market.
* Attending local meetups and industry events can provide valuable networking opportunities and insights into AI.
* Spending at least an hour a day reading about AI can help entrepreneurs understand and leverage its potential.
* Embracing AI can present new opportunities for entrepreneurs to thrive in the evolving business landscape.

If you have any questions in general, you can reach our office at:
VastSolutionsGroup.com
Phone: (888) 808-8278
Email: info@vastsolutionsgroup.com
Monday-Thursday 8:00 AM – 5:00 PM (Pacific)

Thank you for listening!
Send us a Text Message.

At the 41st meeting of our Book Club, we discussed Kai-Fu Lee and Chen Qiufan's book Yapay Zeka 2041 (AI 2041).

The book has an unusual structure: it consists of ten fictional stories about what could happen by 2041 thanks to artificial intelligence, each followed by a section analyzing the current state of work in that field. The fiction was written by Chen Qiufan, an author who previously worked at Google and Baidu; the technical analyses were written by Kai-Fu Lee, a computer engineer, technology investor, and businessman.

The stories cover a remarkably broad range of topics: social relationships; realistic-looking AI-generated videos and the problems they will create; personalized education and learning; healthcare; games and entertainment; autonomous transportation; autonomous weapons; careers and the workforce; happiness; and the abundance AI is expected to bring, along with the economic order that would follow from it. The book was first published at the end of 2021 and was translated into Turkish last year. AI developments can shift monthly, even weekly, but I think the questions the book raises are still valid. In fact, it seems to me that only the AI technology is advancing; socially, legally, and politically, we hardly seem to be in any hurry at all. As with every major transformation, we will probably end up trailing behind and filling the gaps only after losses have already occurred. No one can foresee what will happen; we will live through it and position ourselves accordingly.

Frankly, I am optimistic in the long run: I think better planning can reduce waste, and that a deeper understanding of fundamental science can enable inventions in protecting nature and in energy and health. But in the short term, I believe innovations that displace people, coinciding with climate change, could create a kind of "perfect storm," and that humanity will once again face a great test. We live in interesting times, and more interesting times lie ahead.

Friends whose remarks I was able to include in this episode: (02:30) Yavuz Abut, (06:04) Müge İrfanoğlu, (09:18) Mustafa Pancarcı, (11:27) Müge İrfanoğlu, (12:26) Ömer Tural, (16:10) Olcay Çat, (18:55) Mete Yurtsever, (19:46) Suat Soy, (23:33) Yavuz Abut, (28:35) Halime Özben Hacı, (30:40) Mete Yurtsever, (31:43) Müge İrfanoğlu, (32:18) Suat Soy, (33:20) Mete Yurtsever, and (34:43) Özden Karakaş.

Support the Show.
The Cognitive Crucible is a forum that presents different perspectives and emerging thought leadership related to the information environment. The opinions expressed by guests are their own, and do not necessarily reflect the views of or endorsement by the Information Professionals Association.

During this episode, US Air Force Captain Jose Davis discusses his paper, entitled "Leveraging AI for Operations in the Information Environment: 3 Demonstrations in Disinformation, Social Media, and Entropy." The paper focuses on the application of Artificial Intelligence (AI) in the realm of Operations in the Information Environment (OIE), particularly for the Air Force. It presents three case studies demonstrating how AI can positively impact OIE and advocates for direct AI research in this area. Additionally, he recaps an information campaign that US Air Forces in Europe conducted to assure Baltic NATO Allies following the Russian invasion of Ukraine in 2022.

Recording Date: 24 April 2024

Research Questions: Jose Davis suggests interested students examine:
* What other common metrics can be developed or researched for assessing the Information Environment, tailor-made for national security needs and useful for IO and PA operators?
* What are the practical outcomes in the Information Environment (a complex system) when entropy is influenced, pushed either higher or lower? For example, hypothetically, when high entropy is present, humans deploy simplifying heuristics, so this should help improve Key Leader Engagements' (KLE) timing and improve KLE dossiers. Or, with PA/IO, high entropy hypothetically may demand a simplified messaging campaign. (A minimal sketch of the underlying entropy calculation appears at the end of this entry.)
* In what other ways can AI be leveraged to combat nefarious AI use for disinformation? The ideas of watermarking or safeguarding content from manipulation by nefarious generative AI are a form of immunization, advancing Inoculation Theory as a whole. What other preventative measures along the lines of immunization can be taken to combat disinformation?

Resources:
* Cognitive Crucible Podcast Episodes Mentioned: #46 Pat Ryder on Public Affairs and Strategic Communications; #174 Kara Masick on Assessment Insights from Program Evaluation; #183 Julie Janson on Air Force IO Talent and Strategy
* Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
* Universal Adversarial Perturbations by Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard
* Pre-trained Adversarial Perturbations by Yuanhao Ban and Yinpeng Dong
* Automating OIE with Large Language Models by Cpt Alexander Sferrella, Cpt Joseph Conger, and Maj Kara Masick
* Claude Shannon
* AI 2041: Ten Visions for Our Future by former Google technologist Kai-Fu Lee and science fiction writer Chen Qiufan. I'm a huge advocate of using story to educate. For those new to AI or wanting to understand AI's societal impact, this is my go-to book. Written as a series of fictional short stories with in-depth essay analysis at the end of each, the book teaches AI while making it entertaining.
* A Mind at Play: How Claude Shannon Invented the Information Age by Jimmy Soni and Rob Goodman. I think every Information Warfare professional should know who Claude Shannon is and read his "magna carta" of the Information Age, The Mathematical Theory of Communication. My intellectual hero; this exceptional biography brings the man to life — a polymath, a tinkerer, an innovator.
* "Attention Is All You Need" by Ashish Vaswani et al.
This is the seminal paper introducing the Transformer architecture, which made generative AI like ChatGPT possible. Liken this paper to Einstein's book on relativity or Newton's Principia.
* Information Theory: Structural Models for Qualitative Data by Klaus Krippendorff
* Link to full show notes and resources

Guest Bio: Jose is a Public Affairs Officer for Headquarters U.S. Air Forces in Europe - Air Forces Africa at Ramstein Air Base, Germany. He has over 14 years of experience in integrated communications, in both the public and private sectors. He is accredited in Public Relations and Military Communications through the Public Relations Society of America. In his current role, he ensures the strategic communication of accurate and timely information about the command's 104-country area of operations. He played a pivotal role at NATO in combating misinformation before, during, and after the 2022 Russian invasion of Ukraine. He just finished a rigorous six-month fellowship at a DoD-sponsored AI Accelerator at the Massachusetts Institute of Technology, gaining exposure to some of the brightest and best minds in AI and Machine Learning technologies. As part of his fellowship, Jose was required to apply his newly acquired AI foundational knowledge to produce a research paper addressing a problem or issue facing the U.S. Air Force. The paper is slated for publication in the U.S. Air Force's Air and Space Operations Review journal.

About: The Information Professionals Association (IPA) is a non-profit organization dedicated to exploring the role of information activities, such as influence and cognitive security, within the national security sector, and to helping bridge the divide between operations and research. Its goal is to increase interdisciplinary collaboration among scholars, practitioners, and policymakers with an interest in this domain. For more information, please contact us at communications@information-professionals.org. Or, connect directly with The Cognitive Crucible podcast host, John Bicknell, on LinkedIn.

Disclosure: As an Amazon Associate, 1) IPA earns from qualifying purchases, and 2) IPA gets commissions for purchases made through links in this post.
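Since the episode leans on Claude Shannon's entropy as a metric for the information environment, here is a minimal Python sketch of the underlying quantity. This is our own illustration of Shannon's formula, not the metrics from Davis's paper; the `shannon_entropy` helper and the toy theme lists are hypothetical.

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy H = -sum(p * log2 p) over the empirical
    distribution of `items`, measured in bits."""
    counts = Counter(items)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A messaging environment dominated by one theme is low-entropy and easy
# to summarize; a fragmented one is high-entropy and, per the hypothesis
# in the research questions above, may push audiences toward simplifying
# heuristics.
focused = ["theme_a"] * 9 + ["theme_b"]
fragmented = ["theme_a", "theme_b", "theme_c", "theme_d", "theme_e"] * 2
print(round(shannon_entropy(focused), 2))     # 0.47 bits
print(round(shannon_entropy(fragmented), 2))  # 2.32 bits
```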
In this episode, we explore the Chinese AI ecosystem with 'L-squared,' an anonymous tech worker based in Beijing. We discuss major players, model quality, public engagement, regulation, and the US 'chip ban.' Discover the similarities and differences between the US and Chinese AI landscapes, and gain a nuanced perspective on the current state of AI in China.

USEFUL RESOURCES:
* Testing Chinese models: Yi-34B-Chat (made by Kai-Fu Lee's team 01.AI) can be tried out via Replicate (https://replicate.com/01-ai/yi-34b-chat) or Hugging Face; a minimal sketch of calling it through Replicate's Python client appears after the timestamps below. You can also use the ChatGLM playground (https://open.bigmodel.cn/trialcenter) and Baidu's ERNIE (https://yiyan.baidu.com/) without a Chinese SIM card.
* Benchmarking models: SuperCLUE is one of the most prominent benchmarks; the latest results are on GitHub (https://github.com/CLUEbenchmark/SuperCLUE) and the paper explaining the methodology is here (https://arxiv.org/abs/2307.15020).
* Regulation: Explainer (https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117) from Matt Sheehan; piece (https://www.chinatalk.media/p/how-tight-ai-regs-hurt-chinese-firms) on how genAI regs are affecting Chinese companies.
* US-China competition: Jeff Ding's work (https://www.tandfonline.com/doi/full/10.1080/09692290.2023.2173633) on the diffusion deficit in S&T; Bloomberg piece (https://www.bloomberg.com/graphics/2023-china-huawei-semiconductor/) on Huawei's semiconductor development efforts.
* Staying up to date: Sign up for alerts from CSET's Scout tool (https://scout.eto.tech/); subscribe to Concordia AI's AI Safety in China newsletter (https://aisafetychina.substack.com/) (disclaimer: I used to work at Concordia!)
* A 2016 profile (https://chinai.substack.com/p/chinai-37-happy-20th-anniversary) of Microsoft Research Asia by Wang Jingjing, covered in Jeff Ding's ChinAI newsletter

SPONSORS:
* Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds and offers one consistent price. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive
* Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off: http://www.omneky.com/
* The Brave Search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API for free for up to 2,000 queries per month at https://bit.ly/BraveTCR
* Head to Squad to access global engineering without the headache and at a fraction of the cost: head to choosesquad.com and mention "Turpentine" to skip the waitlist.
* Plumb is a no-code AI app builder designed for product teams that care about quality and speed. What takes you weeks to hand-code today can be done confidently in hours. Check out https://bit.ly/PlumbTCR for early access.
TIMESTAMPS: (00:00) Introduction (07:24) China's AI Ecosystem (13:40) Public AI Engagement (17:33) Sponsors : OCI / Omneky (18:50) AI Tools Comparison (35:37) Sponsors : Brave / Squad / Plumb (39:14) AI Regulatory Maze (51:02) AI Performance, Censorship (55:28) Chinese AI Regulations (01:04:37) Tech, Research Role (01:12:11) Global AI Ecosystem (01:23:22) Cultural AI Perspectives (01:29:14) AI Safety, Cooperation
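Following up the "Testing Chinese models" link above, here is a minimal sketch of calling Yi-34B-Chat through Replicate's Python client. It assumes `REPLICATE_API_TOKEN` is set in your environment; the input field names are common Replicate conventions and may differ for this particular model, so treat it as a starting point rather than a verified recipe.

```python
# pip install replicate
import replicate  # the client reads REPLICATE_API_TOKEN from the environment

# Run 01.AI's Yi-34B-Chat (the model page linked above). For language
# models, replicate.run typically returns an iterator of text chunks.
output = replicate.run(
    "01-ai/yi-34b-chat",
    input={
        "prompt": "In two sentences, compare the US and Chinese AI ecosystems.",
        "temperature": 0.7,  # assumed parameter name; check the model's schema
    },
)
print("".join(output))
```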
Brendan McCord is the founder of Cosmos Institute — a non-profit dedicated to exploring the intersection of AI and philosophy. Brendan joins the show to discuss Cosmos' origins, the pursuit of philosophy as a technologist, the different schools of thought in AI, complex adaptive systems, and MUCH more!

Important Links:
* Brendan McCord's Reading List
* Cosmos Institute Substack
* Brendan's Twitter

Show Notes:
* The Genesis of the Cosmos Institute
* Philosophy as a Quixotic Pursuit
* The Man of the System Dilemma
* Existential Risk & Scenario Agnosticism
* The AI Schools of Thought
* The Religious Nature of the E/Acc Movement
* What Tocqueville Can Teach Us About AI
* The Philosophy-to-Code Pipeline
* "Cars Ignited the Sexual Revolution" and Other Unexpected Occurrences
* The Best Systems are Adaptive
* Heterogeneity & Resilient Systems
* Open Source and the US-China Situation
* Automation, Augmentation & Open-Ended Generation
* The Underrated Nuance of Russian Realism
* Cinematic Visions of the Future
* Great Talent & the Risk of the Tasmanian Devil
* Brendan as Emperor of the World
* MORE!

Books Mentioned:
* Murray Rothbard, "For a New Liberty"
* David R. Hawkins, "Power vs. Force"
* Jung Chang, "Wild Swans: Three Daughters of China"
* Jung Chang, "Mao: The Unknown Story"
* Aleksandr Solzhenitsyn, "The Gulag Archipelago"
* Arthur Koestler, "Darkness at Noon"
* Adam Smith, "The Theory of Moral Sentiments"
* Lewis Carroll, "Alice's Adventures in Wonderland"
* Lewis Carroll, "What the Tortoise Said to Achilles"
* Eliezer Yudkowsky, "Harry Potter and the Methods of Rationality"
* Marc Andreessen, "The Techno-Optimist Manifesto"
* Alexis de Tocqueville, "Democracy in America"
* Pericles's Funeral Oration, quoted in Thucydides' "History of the Peloponnesian War"
* Plato, "Theaetetus"
* Plato, "The Republic"
* Nietzsche, "The Gay Science"
* C.P. Snow, "The Two Cultures"
* Elinor Ostrom, "Governing the Commons: The Evolution of Institutions for Collective Action"
* James M. Buchanan, "Freedom in Constitutional Contract: Perspectives of a Political Economist"
* Iain M. Banks, "Consider Phlebas" (Culture Series #1)
* Chen Qiufan and Kai-Fu Lee, "AI 2041: Ten Visions for Our Future"
* Christopher Buckley, "Thank You for Smoking"
* John Stuart Mill, "On Liberty"
Our 159th episode, with a summary and discussion of last week's big AI news!

Check out our sponsor, the SuperDataScience podcast. You can listen to SDS across all major podcasting platforms (e.g., Spotify, Apple Podcasts, Google Podcasts), plus there's a video version on YouTube.

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

Timestamps + links:
Intro / Banter

Tools & Apps
(00:03:24) Inflection-2.5: meet the world's best personal AI
(00:06:37) Introducing Devin, the first AI software engineer
(00:11:00) DoorDash's new AI-powered 'SafeChat+' tool automatically detects verbal abuse
(00:12:44) Anthropic releases Claude 3 Haiku, an AI model built for speed and affordability
(00:13:30) Pika Labs just added sound effects to its generative AI videos — here's how it sounds
(00:15:33) Salesforce announces new AI tools for doctors

Applications & Business
(00:17:33) Sam Altman Rejoins OpenAI Board Along With Three New Directors
(00:21:15) Cohere releases powerful 'Command-R' language model for enterprise use
(00:23:16) Building Meta's GenAI Infrastructure
(00:25:53) Baidu Launches China's First 24/7 Robotaxi Service

Projects & Open Source
(00:26:54) Croissant: a metadata format for ML-ready datasets
(00:29:40) SaulLM-7B: A pioneering Large Language Model for Law
(00:31:45) Kai-Fu Lee's AI Company "01.AI" Announces the Open Source of the Yi-9B Model

Research & Advancements
(00:33:50) A generalist AI agent for 3D virtual environments
(00:39:16) Stealing Part of a Production Language Model
(00:42:01) Data Interpreter: An LLM Agent For Data Science
(00:43:54) ShortGPT: Layers in Large Language Models are More Redundant Than You Expect
(00:44:55) PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation

Policy & Safety
(00:46:24) World's first major act to regulate AI passed by European lawmakers
(00:48:57) US spearheads first UN resolution on artificial intelligence — aimed at ensuring equal access
(00:51:27) Google restricts election-related queries for its Gemini chatbot

Synthetic Media & Art
(00:52:43) Researchers tested leading AI models for copyright infringement using popular books, and GPT-4 performed worst
(00:55:27) Nvidia Says NeMo AI Platform Complies With Copyright After Authors' Complaint
(00:57:23) Five of this year's Pulitzer finalists are AI-powered

Fun!
(00:58:12) I made my Superman action figure talk with Pika Labs' new AI lip sync tool — watch this
Transcript

Eric Topol (00:06):
Well, hello, this is Eric Topol with Ground Truths, and I am absolutely thrilled to welcome Daphne Koller, the founder and CEO of insitro, and a person who I've been wanting to meet for some time. Finally, we converged, so welcome, Daphne.

Daphne Koller (00:21):
Thank you, Eric. And it's a pleasure to finally meet you as well.

Eric Topol (00:24):
Yeah, I mean, you have been rocking everybody over the years, with election to the National Academy of Engineering and Science, working right at the interface of life science and computer science, and in my view, there's hardly anyone I can imagine who's doing so much at that interface. I wanted to first start with your meeting in Davos last month, because I kind of figured we'd start broad with AI rather than getting right into what you're doing these days. You had a really interesting panel with Yann LeCun, Andrew Ng, Kai-Fu Lee, and others, and I wanted to get your impression about that and also kind of the general sense. I mean, AI is just moving at a speed that is just crazy. What were your thoughts about that panel just last month? Where are we?

Video link for the WEF Panel

Daphne Koller (01:25):
I think we've been living on an exponential curve for multiple decades, and the thing about exponential curves is that they are very misleading things. In the early stages, people basically take the line between where we were last year and this year and interpolate linearly, and they say, God, things are moving so slowly. Then, as the exponential curve starts to pick up, it becomes more and more evident that things are moving faster, but people still interpolate linearly, and it's only when things really hit that inflection point that people realize that even with linear interpolation, where we'll be next year is just mind-blowing. And if you realize that you're on that exponential curve, where we will be next year is just totally unanticipatable. I think what we started to discuss in that panel was: are we in fact on an exponential curve? What are the rate-limiting factors that may or may not enable that curve to continue, specifically the availability of data, and what would it take to make that curve available in areas outside of the speech and natural language large language models that exist today, and go far beyond that, which is what you would need for these to be applicable to areas such as biology and medicine?

Daphne Koller (02:47):
And so that was kind of the message, to my mind, from the panel.

Eric Topol (02:53):
And there were some differences of opinion, of course. Yann can be a little strong, and I think it was good to see you challenging him on some things, and how there is this "world view" of AI and, I guess, where we go from here. As you mentioned, in the area of life science there had already been, before large language models hit their stride, so much progress, particularly in imaging cells, subcellular structures, I mean rare cells, stuff done without any labeling, without fluorescein, just amazing stuff. And now it's gone to another level. So as we get into that, I want to ask you about this convergence story. Jensen Huang, I'm sure you heard his quote about biology as having the opportunity to be engineering, not science. What about this convergence? Because it is quite extraordinary to see two fields coming together, moving at such high velocity.

"Biology has the opportunity to be engineering not science.
When something becomes engineering not science it becomes...exponentially improving, it can compound on the benefits of previous years." -Jensen Huang, NVIDIA

Daphne Koller (04:08):
So, a quote that I will propose as a replacement for Jensen's, one that many people have articulated, is that math is to physics as machine learning is to biology. It is a mathematical foundation that allows you to take something that up until that point had been kind of mysterious and fuzzy and almost magical, and create a formal foundation for it. Now physics, especially Newtonian physics, is simple enough that math is the right foundation to capture what goes on in a lot of physics. Biology, as an evolved natural system, is so complex that you can't articulate a mathematical model for it de novo. You need to actually let the data speak, and then let machine learning find the patterns in those data and really help us create a predictability, if you will, for biological systems, so that you can start to ask what-if questions: what would happen if we perturb the system in this way?

The Convergence

Daphne Koller (05:17):
How would it react? We're nowhere close to being able to answer those questions reliably today, but as you feed a machine learning system more and more data, hopefully it'll become capable of making those predictions. And in order to do that, and this is where it comes to this convergence of the two disciplines, the fodder, the foundation for all of machine learning, is having enough data to feed the beast. The miracle of the convergence that we're seeing is that over the last 10, 15, maybe 20 years, biology has been on a similar, albeit somewhat slower, exponential curve of data generation, where we are turning it into a quantitative discipline: from something that was entirely observational and qualitative, which is where it started, to something that is much more quantitative and broad-based in how we measure biology. And so those measurements, the tools that life scientists and bioengineers have developed that allow us to measure biological systems, are what produce that fodder, that energy, that you can then feed into the machine learning models so that they can start making predictions.

Eric Topol (06:32):
Yeah, well, I think the number of layers of data, no less what's in these layers, is quite extraordinary. Some years ago, when all the single-cell sequencing started, I said, well, that's of academic interest, and now the field of spatial omics has exploded. And I wonder how you see the feeding of the beast here. It's at every level. It's not just the cell level: subcellular, single-cell nuclei sequencing, single-cell epigenomics, and then you go all the way to these other layers of data. I know you plug into the human patient side as well, whether it's images, path slides, or the outcomes and treatments, and on and on. So when you think about multimodal AI, has anybody really done that yet?

Daphne Koller (07:30):
I think that there are certainly beginnings of multimodal AI, and we have started to see some of the benefits of the convergence of, say, imaging and omics. I will give an example from some of the work that we've recently put out on a preprint server, work that we did at insitro, which took imaging data from standard histopathology slides, H&E slides, and aligned them with simple bulk RNA-Seq taken from those same tumor samples.
And what we find is that by training models that translate from one to the other, specifically from the imaging to the omics, you're able to make very accurate predictions of gene expression levels for a fairly large fraction of genes by looking at the histopathology images alone. And in fact, because many of the predictions are made at the tile level, not at the whole-slide level, even though the omics was captured in bulk, you're able to spatially resolve the signal and get a kind of pseudo-spatial biology just by making predictions from the H&E image into these omic modalities.

Multimodal A.I. and Life Science

Daphne Koller (08:44):
So there are, I think, beginnings of multimodality, but in order to get there, you really need to train on at least some data where the two modalities are captured simultaneously. And so at this point, I think the rate-limiting factor is more a matter of data acquisition for training the models than of building the models themselves. That's where I think things like spatial biology, which, like you, I'm very excited about, are one of the places where we can really start to capture these paired modalities and get to some of those multimodal capabilities.

Eric Topol (09:23):
Yeah, I wanted to ask you, because spatial-temporal is so perfect: it is two modes, as in the preprint you referred to, and you see things like electronic health records and genomics, or electronic health records and medical images. The most we've done is getting two modes of data together. And the question is, as this data starts to really accrue, do we need new models to work with it, or do you foresee that that is not a limiting step?

Daphne Koller (09:57):
So I think currently data availability is the most significant rate-limiting step. The nice thing about modern-day machine learning is that it really is structured as a set of building blocks that you can put together in different ways for different situations. And so, do we have the exact right models available to us today for these multimodal systems? Probably not. But do we have the right building blocks, if we creatively put them together from what has already been deployed in other settings? Probably, yes. So of course there's still model exploration to be done, and a lot of creativity in how these building blocks should be put together, but I think we have the tools available to solve these problems. What we really need, first, is a really significant data acquisition effort. And the other thing that we need, which has also been a priority for us at insitro, is the right mix of people put together. Because what happens is, if you take a bunch of even extremely talented and sophisticated machine learning scientists and say, solve a biological problem, here's a dataset, they don't know what questions to ask, and oftentimes they end up asking questions that might be kind of interesting from a machine learning perspective but don't really answer fundamental biology questions.

Daphne Koller (11:16):
And conversely, you can take biologists and say, hey, what would you have machine learning do? And they will tell you, well, in our work we go from A to B to C to D, and B to C is kind of painful, like counting nuclei is really painful, so can we have the machine do that for us? And it's kind of like, yeah, but that's boring.
So what you get if you put them in a room together, and actually get to the point where they communicate with each other effectively, is that not only do you get better solutions, you get better problems. I think that's really the crux of making progress here: besides data, it's the culture and the people.

A.I. and Drug Discovery

Eric Topol (11:54):
Well, I'm sure you've assembled that at insitro, knowing you, and people tend to forget it's about the people, not about the models or even the data, when you have all that. Now, you've been on the drug discovery path; there are at least 20 AI-driven drugs in the clinic, in phase one or two at some point. Obviously these are not only ones that you've been working on, but do you see this whole field now going into high gear because of this? Or is it just that there are all these AI companies partnering with big pharma, a lot of nice agreements drawn up with multimillion-dollar milestones? Or is this real?

Daphne Koller (12:47):
So there are a number of different layers to your question. First of all, let me start by saying that I find the notion of AI-driven drugs to be a bit of a weird concept, because over time most drugs will have some element of AI in them. I mean, even some of the earlier work used data science in many cases. So where do you draw the boundary? We're not going to be in a world anytime soon where AI starts out with "oh, I need to work on ALS" and at the end there is a clinical trial design ready to be submitted to the FDA, without any human intervention in the middle. So it's always going to be an interplay between a machine and a human, with more and more capabilities being taken on by the machine over time, but I think inevitably a partnership for a long time to come.

Daphne Koller (13:41):
But coming to the second part of your question: is this real? Every big pharma has gotten to the point today where they realize they need some of that AI thing that's going around. The level of sophistication with which they incorporate it, and their willingness to make some of the hard decisions (if we're going to be doing this with AI, we shouldn't be doing it the old way anymore, and we need to make a big, dramatic internal shift), depend very much on the specific company. Some companies have more willingness to take those very big steps than others. So, will some companies be able to make the adjustment? Probably. Will all of them? Probably not. I would say, however, that in this new world there is also room for companies to emerge that are, if you will, AI native.

Daphne Koller (14:39):
And we've seen in every technological revolution that the native companies, the ones born in the new age, move faster, incorporate the technology much more deeply into every aspect of their work, and end up being dominant players, if not the dominant player, in that new world. Look at the internet revolution: Google did not emerge from the Yellow Pages, Netflix did not emerge from Blockbuster, Amazon did not emerge from Walmart. Some of those incumbents did make the adjustment and are still around; some did not and are no longer around.
And I think the same thing will happen with drug discovery and development: there will be a new crop of leading companies, together, perhaps, with some of the incumbents that were able to make the adjustment.

Eric Topol (15:36):
Yeah, I think your point there is essential, and another part of this story is that a lot of people don't realize how many nodes there are where AI can facilitate this whole process, from the elemental data mining that identified baricitinib for Covid, now being used even for many other indications, to simulating clinical trials, and everything in between. Now, because of your incredible knack and this convergence (your middle name is practically convergence), you are working at what is, in my view, a unique level: bringing cells and all the other layers of data together to amp things up. Is that a fair assessment of where insitro and your efforts are directed?

Three Buckets

Daphne Koller (16:38):
So first of all, maybe it's useful to lay out the high-level map. The simplest version I've heard divides the process into three major buckets. One is what you think of as biology discovery, which is the discovery of new therapeutic hypotheses: basically, if you modulate this target in this group of humans, you will end up affecting this clinical outcome. That's the first third. The middle third is: okay, now we need to turn that hypothesis into an actual molecule that does that, so basically generating molecules. And then finally there's the enablement and acceleration of the clinical development process, which is the final third. Most companies in the AI space have really focused on that middle third, because it is well-defined: if someone gives you a target and what's called a target product profile (TPP), you know at the end of, whatever, two or three years whether you've been able to create a molecule that achieves the appropriate properties of selectivity and solubility and all those other things. The first third is where a lot of the mistakes currently happen in drug discovery and development. Most drugs that go into the clinic don't fail because we didn't have the right molecule. That happens, but it's not the most common failure mode. The most common failure mode is that the target was just the wrong target for this disease in this patient population.

Daphne Koller (18:09):
So the real focus for us, the core of who we are as a company, is on that early third: let's make sure we're going after the right clinical hypotheses. Now, with that, obviously we need to make molecules, and some of those molecules we make in-house, and obviously we use machine learning to do that as well. And then, for the last third, we've discovered that if you have the right therapeutic hypothesis, which includes knowing the right patient population, that can also accelerate and enable your clinical trials, so we end up doing some of that as well. But what we believe is the failure mode of drug discovery, and what it's going to take to move it to the next level, is the articulation of therapeutic hypotheses that actually translate into clinical outcomes.
And so, in order to do that, we've put together, to your point about convergence, two very distinct types of data.

Daphne Koller (19:04):
One is data that we print in our own internal data factory, where we have this incredible set of capabilities that uses stem cells and CRISPR and microscopy and single-cell measurements and spatial biology and all that to generate massive amounts of in-house data. And then, because ultimately you care not about curing cells but about curing people, you also need to bring in the clinical data. Here also we look at multiple high-content data modalities, imaging and omics, and of course human genetics, which is one of the few sources of ground truth for causality available in medicine, and we bring all those different data modalities, across these two different scales, together to come up with what we believe are truly high-quality therapeutic hypotheses that we then advance into the clinic.

AlphaFold2, the Exemplar

Eric Topol (19:56):
Yeah, no, I think that's an extraordinary approach. It's a bold, ambitious one, but at least it gets to the root of what is needed. One of the things you mentioned, of course, is coming up with molecules, and I wanted to get your comments about the AlphaFold2 world and the ability to design not just proteins, including proteins that don't exist in nature, but also antibodies, peptides, and small molecules. How much does that contribute to your perspective?

Daphne Koller (20:37):
So first of all, let me say that I consider the AlphaFold story, across its incarnations, to be one of the best examples of the hypothesis that we set out trying to prove, which is: if you feed a machine learning model enough data, it will learn to do amazing things. And the space of protein folding is one of those areas where there has been enough data in biology. That is, the sequence-to-structure mapping is so consistent across different cells, and even across different species, that over the years we have accumulated a lot of sequence-to-structure data, which is what enabled AlphaFold to be successful. Now since then, of course, they've taken it to a whole new level. I think what we are currently able to do with protein-based therapeutics is entirely a consequence of that line of development. Whether that same line of development is also going to unlock other therapeutic modalities, such as small molecules, where the amount of data is unfortunately much less abundant and often locked away in the bowels of big pharma companies that are not eager to share?

Daphne Koller (21:57):
I think that question remains. I have not yet seen that same level of performance in de novo design of small-molecule therapeutics, because of the data availability limitations. Now, people have a lot of creative ideas about that. We use DNA-encoded libraries as a way of generating data at scale for small molecules. Others have used other approaches, including active learning and pre-training and all sorts of approaches like that. We're still waiting, I think, for a truly convincing demonstration that you can get to the same level of de novo design in small molecules as you can in protein therapeutics. Now, as to how that affects us: I'm so excited about this development, because our focus, as I mentioned, is the discovery of novel therapeutic hypotheses. You then need to turn those therapeutic hypotheses into actual molecules that do the work.
We know we're not going to be the expert in every single therapeutic modality from small molecules to macro cycles, to the proteins to mRNA, siRNA, there's so many of those that you need to have therapeutic modality experts in each of those modalities that can then as you discover a target that you want to modulate, you can basically go and ask what is the right partner to help turn this into an actual therapeutic intervention?Daphne Koller (23:28):And we've already had some conversations with some modality partners as we like to call them that help us take some of our hypotheses and turn it into molecules. They often are very hungry for new targets because they oftentimes kind of like, okay, here's the three or four or whatever, five low hanging fruits that our technology uniquely unlocks. But then once you get past those well validated targets like, okay, what's next? Am I just going to go read a bunch of papers and hope for the best? And so oftentimes they're looking for new hypotheses and we're looking for partners to make molecules. It's a great partnership.Can We Slow the Aging Process?Eric Topol (24:07):Oh yeah, no question about that. Now, we've seen in recent times some leaps in drugs that were worked on for decades, like the GLP-1s for obesity, which are having effects potentially well beyond obesity didn't require any AI, but just slogging away at it for decades. And you previously were at Calico, which is trying to deal with aging. Do you think that we're going to see drug interventions that are going to slow the aging process because of this unique time of this exponential point we are in where we're a computer and science and digital biology come together?Daphne Koller (24:52):So I think the GLP-1s are an incredible achievement. And I would point out, I know you said and incorrectly that it didn't use any AI, but they did actually use an understanding of human genetics. And I think human genetics and the genotype phenotype statistical associations that they revealed is in some ways the biological precursor to AI it is a way of leveraging very large amounts of data, admittedly using simpler statistical tools, but still to discover in a data-driven way, novel therapeutic hypothesis. So I consider the work that we do to be a progeny of the kind of work that statistical geneticists have done. And of course a lot of heavy lifting needed to be done after that in order to make a drug that actually worked and kudos to the leaders in that space. In terms of the modulation of aging, I mean aging is a process of decline over time, and the rate of that decline is definitely something that is modifiable.Daphne Koller (26:07):And we all know that external factors such as lifestyle, diet, exercise, even exposure to sun or smoking, accelerates the aging process. And you could easily imagine, as we've seen in the GLP-1s that a therapeutic intervention can change that trajectory. So will we be able to using therapeutic interventions, increase health span so that we live healthy longer? I think the answer to that is undoubtedly, yes. And we've seen that consistently with therapeutic interventions, not even just the GLP-1s, but going backwards, I mean even statins and earlier things. Will we be able to increase the maximum life span so that people habitually live past 120, 150? I don't know. I don't know that anybody knows the answer to that question. 
I personally would be quite happy with increasing my health span so that at the age of 80, I'm still able to actively go hiking and scuba diving at 90 and 100 and that would be a pretty good place to start.Eric Topol (27:25):Well, I'm with you on that, but I just want to ask though, because the drugs we have today that are highly effective, I mean statins is a good example. They work at a particular level of the body. They don't have across the board modulation of effect. And I guess what I was asking is, do you foresee we will have some way to do that across all systems? I mean, that is getting to, now that we have so many different ways to intervene on the process, is there a way that you envision in the future that we'll be able to here, I'm not talking about in expanding lifespan, I'm talking about promoting health, whether it's the immune system or whether it's through mitochondria and mTOR, caloric, I mean all these different things you think that's conceivable or is that just, I mean companies like Calico and others have been chasing this. What do you think?Daphne Koller (28:30):Again, I think it's a thing that is hard to predict. I mean, we know that different organ systems age at different rates, and is there a single bio even in a single individual, and it's been well established that you can test brain age versus muscle health versus cardiovascular, and they can be quite different in the same individual, so is there a single hub? No, that governs all forms of aging. I don't know if that's true. I think it's oftentimes different. We know protein folding has an effect, you know DNA damage has an effect. That's why our skin ages because it's exposed to sun. Is there going to be a single switch that reverts it all back? Certainly some companies are pursuing that single bullet approach. I personally would probably say that based on the biology that I've seen, there's at least as much potential in trying to find ways to slow the decline in a way that specific to say as we discussed the immune system or correcting protein, misfolding dysfunction or things like that. And I'm not dismissing there is a single magic switch, but let's just say I think we should be exploring multiple alternatives.Eric Topol (29:58):Yeah, no, I like your reasoning. I think it's actually like everything else you said here. It makes a lot of sense. The logic is hard to argue with. Well, I think what you're doing there at insitro is remarkable and it seems to be quite distinct from other strategies, and that's not at all surprising knowing your background and your aspiration.Daphne Koller (30:27):Never like to follow the crowd. It's boring.Eric Topol (30:30):Right, and I do know you left an aging directed company effort at Calico to do what you're doing. So that must have been an opening for you that you saw was much more diverse perhaps, or maybe I'm mistaken that Calico is not really age specific in its goals.Daphne Koller (30:49):So what inspired me to go found insitro was the realization that we are making medicines today in a way that is not that different from the way in which we were making medicines 20 or 30 years ago in terms of the process by which we go from a, here's what I want to work on to here's a drug is a very much an artisanal one-off each one of them is a snowflake. There is very little commonality and sharing of insights and infrastructure across those efforts except in relatively limited tool-based ways. And I wanted to change that. 
I wanted to take the tools of engineering and data and machine learning and build a very different approach of going from a problem definition to a therapeutic intervention. And it didn't make sense to build that within a company that's focused on any single biology, not just aging because it is such a broad-based foundation.Daphne Koller (31:58):And I will tell you that I think we are on the path to building the thing that I set out to build. And as one example of that, I will use the work that we've recently done in metabolic disease where based on the foundations that we've built using both the clinical machine learning work and the cellular machine learning work, we were able to go from a problem articulation of this is the indication that we want to work on to a proof of concept in a translatable animal model in one year. That is pretty unusual. Admittedly, this is with an SiRNA tool compound. Nice thing about things that are liver directed is that it's not that difficult of a path to go from an SiRNA tool compound to an actual SiRNA drug. And so hopefully that's a fairly linear journey from there even, which is great.Daphne Koller (32:51):But the fact that we were able to go from problem articulation to a proof of concept in a translatable animal model in one year, that is unusual. And we're starting to see that now across our other therapeutic areas. It takes a long time to build a platform because you're basically building a foundation. It's like, okay, where's the fruit of all of that? I mean, you're building and building and building and nothing comes out for a while because you're building so much of the infrastructure. But once you've built it, you turn the crank and stuff starts to come out, you turn the crank again, and it works faster and better than the previous time. And so the essence of what we've built and what has turned into the tagline for the company is what we call pipeline through platform, which is we're building a pipeline of therapeutic interventions that comes off of a platform. And that's rare in biopharma, the only platform companies that really have emerged by and larger therapeutic modality platforms, things like Moderna and Alnylam, which have gotten really good at a particular modality and that's awesome. We're building a discovery platform and that is a fairly unusual thing.Eric Topol (34:02):Right. Well, I have no doubt you'll be discovering a lot of important things. That one sounds like it could be a big impact on NASH.Daphne Koller (34:14):Yeah, we hope so.Eric Topol (34:14):A big unmet need that's not going to be fixed by what we have today. So Daphne, it's really a joy to talk with you and palpable enthusiasm for where the field is going as one of its real leaders and we'll be cheering for you. I hope we'll reconnect in the times ahead to get another progress report because you're definitely rocking it there and you've got a lot of great ideas for how to change the life science medical world of the future.Daphne Koller (34:48):Thank you so much. It's a pleasure to meet you, and it's a long and difficult journey, but I think we're on the right path, so looking forward to seeing that all that pan out.Eric Topol (34:58):You made a compelling case in a short visit, so thank you.Daphne Koller (35:02):Thank you so much.Thanks for your subscription and listening/reading these posts.All content on Ground Truths—newsletter analyses and podcasts—is free.Voluntary paid subscriptions all go to support Scripps Research. 
Get full access to Ground Truths at erictopol.substack.com/subscribe
Kai-Fu Lee, an AI expert and prominent investor who helped Google and Microsoft get established in China, says his new startup 01.AI will create the first “killer apps” of generative AI. Thanks for listening to WIRED. Talk to you next time for more stories from WIRED.com and read this story here. Learn more about your ad choices. Visit megaphone.fm/adchoices
Overview: Today we examine the complex relationship between technological change, particularly AI, and personal identity. We reflect on insights from change management expert Bree Goff and authors like James Clear and Kai-Fu Lee. The focus is on understanding how AI impacts our sense of self and how to adapt while preserving our unique human qualities.
Daily SPARK Prompt (edit, then cut and paste this into your favorite AI tool). Tip: If you don't pay for GPT-4, try Microsoft's Copilot (it's free!)
Hi! I'm reflecting on my identity in the rapidly evolving AI era and need your assistance. Here's my SPARK scenario:
Situation: In a world where AI is redefining traditional roles and tasks, I am exploring who I want to be. My current situation involves [describe your current interaction with AI and how it impacts your daily life].
Problem: I feel [describe any concerns or feelings of loss/displacement due to AI]. I'm struggling to understand how AI affects my sense of self and how to adapt without losing my unique identity.
Aspiration: I aspire to use AI as a tool to enhance, not replace, my identity. Inspired by James Clear's 'Atomic Habits,' I want to focus on who I wish to become, emphasizing my [mention specific human qualities like creativity, empathy, etc.].
Results: Ideally, by embracing AI in this way, I hope to [describe the positive changes you wish to see, such as becoming more adaptable, resilient, or creative].
Kismet: Can you provide 5 strategies or daily habits that would help me reinforce my unique identity in this AI-driven world? And, importantly, what might I be missing in this approach?
Key Points:
Change and Loss of Control: Bree Goff's perspective that people don't resist change but resist loss, especially relevant in the era of AI.
AI's Personal Impact: The infiltration of AI into aspects of life traditionally unique to humans, challenging our roles, tasks, and creativity.
The Serenity Prayer's Relevance: Using this prayer to accept what we can't change and to find the courage and wisdom to adapt.
Identity and Habits: Drawing from James Clear's 'Atomic Habits,' focusing on who we wish to become in the AI era and the importance of self-identity in times of change.
AI as a Mirror: Using AI to reflect on and enhance human qualities like empathy and creativity, rather than replacing them.
Let's Connect and Keep the Learning Going
Website: designingschools.org
LinkedIn: Dr. Sabba Quidwai
Twitter: @askMsQ
Instagram: @askMsQ
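The SPARK prompt above is designed for manual copy-and-paste, but if you'd rather script the exercise as a repeatable daily habit, here is a minimal sketch of sending a filled-in version to an LLM API. It assumes the official openai Python package and an OPENAI_API_KEY environment variable; the model name and the example text filled into the bracketed slots are illustrative assumptions, not part of the original prompt.

```python
# Minimal sketch: send a filled-in SPARK prompt to a chat-completion API.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

# The bracketed slots from the SPARK template are filled with example text here.
spark_prompt = """Hi! I'm reflecting on my identity in the rapidly evolving AI era and need your assistance. Here's my SPARK scenario:
Situation: AI is redefining traditional roles and tasks. My current situation involves drafting most of my writing with AI assistance.
Problem: I feel uneasy that tools now do work I used to take pride in.
Aspiration: I aspire to use AI as a tool to enhance, not replace, my identity, emphasizing my creativity and empathy.
Results: Ideally, I hope to become more adaptable, resilient, and creative.
Kismet: Can you provide 5 strategies or daily habits that would help me reinforce my unique identity in this AI-driven world? And, importantly, what might I be missing in this approach?"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute whatever your account offers
    messages=[{"role": "user", "content": spark_prompt}],
)
print(response.choices[0].message.content)
```

The same template works unchanged in a chat window; the script simply makes the reflection exercise repeatable.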
Join me on "AI Experience" as we delve into the book "AI 2041" by Kai-Fu Lee and Chen Qiufan. This episode offers a captivating glimpse into a future shaped by artificial intelligence. Discover how AI could revolutionize our lives in the next two decades, from transforming medicine and education to challenging our economic and social norms. Get ready for an insightful journey into a world where the boundaries between human and machine blur, promising both unparalleled advancements and profound ethical questions. Don't miss this exploration of a future that's closer than it seems!Hosted by Ausha. See ausha.co/privacy-policy for more information.
In this special episode of the Karma School of Business Podcast, we bring you a curated list of impactful book recommendations from our esteemed private equity industry guests, offering a wealth of knowledge to enhance your strategic thinking and personal growth.
1:34 – Author Nick Shaw's poignant and insightful "My Teacher, My Son" is a book that promises to transform your perspective on life and leadership. My Teacher, My Son: https://www.amazon.com/My-Teacher-Son-Lessons-Life/dp/B0CH7F5MGW
5:06 – Dive into the intricacies of technological advancements with Managing Partner Scott Estill's picks, "Chip War" by Chris Miller and "AI 2041" by Kai-Fu Lee and Chen Qiufan. Chip War: https://www.amazon.com/Chip-War-Dominate-Critical-Technology/dp/B09TX24J5Y/ AI 2041: https://www.amazon.com/AI-2041-Ten-Visions-Future/dp/B08SFL53HL/
9:50 – Gain historical business insights from private equity operating partner Joe DeLuca's favorite, "The Making of the Atomic Bomb" by Richard Rhodes. The Making of the Atomic Bomb: https://www.amazon.com/Making-Atomic-Bomb-Richard-Rhodes/dp/1451677618
13:00 – Entrepreneur and attorney Scott Becker emphasizes the significance of team building with "Good to Great" by Jim Collins and the importance of health and longevity with "Outlive" by Peter Attia. Good to Great: https://www.amazon.com/Good-to-Great-Jim-Collins-audiobook/dp/B003VXI5MS/ Outlive: https://www.amazon.com/Outlive-Longevity-Peter-Attia-MD/dp/0593236599/
16:25 – Private equity partner John Kirk reminds us of the power of collective success over individual correctness through "Us" by Terrence Real. Us: https://www.amazon.com/Us-Getting-Build-Loving-Relationship/dp/B09BBN9LGW/
19:48 – PE operating partner Brit Yonge explores the sovereignty of choice in Viktor Frankl's "Man's Search for Meaning." Man's Search for Meaning: https://www.amazon.com/Mans-Search-for-Meaning-audiobook/dp/B0006IU470/
23:35 – Private equity managing director Doug Horn provides a glimpse into the future of industry and geopolitics with "The End of the World is Just the Beginning" by Peter Zeihan and celebrates American entrepreneurial spirit in "Americana" by Bhu Srinivasan. The End of the World is Just the Beginning: https://www.amazon.com/End-World-Just-Beginning-Globalization/dp/B09CS8FRRD/ Americana: https://www.amazon.com/Americana-Bhu-Srinivasan-audiobook/dp/B075659K9K/
26:41 – PE managing director Mohit Kansal underscores the value of data over narrative with "Moneyball" by Michael Lewis. Moneyball: https://www.amazon.com/Moneyball-Michael-Lewis-audiobook/dp/B005FFRQYS/
28:23 – Private equity managing director Marshall Phelps draws leadership lessons from "Washington" by Ron Chernow and delves into the intrigue of "Conspiracy" by Ryan Holiday. Washington: https://www.amazon.com/Washington-Ron-Chernow-audiobook/dp/B0045XYQ12/ Conspiracy: https://www.amazon.com/Conspiracy-Ryan-Holiday-audiobook/dp/B0794CLD44/
32:35 – Private equity managing partner Doug McCormick offers a refreshing perspective on global progress with "Factfulness" by Hans Rosling. Factfulness: https://www.amazon.com/Factfulness-audiobook/dp/B07BFDCWZP/
Join us as we explore these diverse and thought-provoking works that have shaped the minds of business leaders and will undoubtedly influence your approach to business and life.
On this episode of the Energy Security Cubed Podcast, Kelly Ogle has a conversation with Daniel Yergin about the post-Ukraine global energy order, and reflections on the lessons from his book, "The New Map: Energy, Climate, and the Clash of Nations". You can find his book here: https://www.amazon.ca/New-Map-Energy-Climate-Nations/dp/1594206430
For the intro session, Kelly and Joe Calnan chat about the COP28 climate conference and the UAE's position in global affairs.
Guest Bio: - Daniel Yergin is a Pulitzer Prize-winning author and the Vice Chairman of S&P Global
Host Bio: - Kelly Ogle is the CEO of the Canadian Global Affairs Institute
Reading recommendations:
- "Edmund Burke: The First Conservative", by Jesse Norman: https://www.amazon.ca/Edmund-Burke-Jesse-Norman-author/dp/0465062938
- "The Last Ships from Hamburg: Business, Rivalry, and the Race to Save Russia's Jews on the Eve of World War I", by Steven Ujifusa: https://www.amazon.ca/Last-Ships-Hamburg-Business-Rivalry/dp/0062971875
- "AI Superpowers: China, Silicon Valley, and the New World Order", by Kai-Fu Lee: https://www.amazon.ca/AI-Superpowers-China-Silicon-Valley/dp/132854639X
- "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma", by Mustafa Suleyman and Michael Bhaskar: https://www.amazon.com/Coming-Wave-Technology-Twenty-first-Centurys/dp/0593593952
Interview recording date: December 6, 2023
Energy Security Cubed is part of the CGAI Podcast Network. Follow the Canadian Global Affairs Institute on Facebook, Twitter (@CAGlobalAffairs), or on LinkedIn. Head over to our website at www.cgai.ca for more commentary.
Produced by Joe Calnan. Music credits to Drew Phillips.
Our 143rd episode with a summary and discussion of last week's big AI news, now back with the usual hosts!
Read our text newsletter and comment on the podcast at https://lastweekin.ai/
Email us your questions and feedback at contact@lastweekin.ai
Check out our sponsor, the SuperDataScience podcast. You can listen to SDS across all major podcasting platforms (e.g., Spotify, Apple Podcasts, Google Podcasts) plus there's a video version on YouTube.
Timestamps + links:
Applications & Business
(02:53) Sam Altman fired as CEO of OpenAI
(12:00) Everything announced at OpenAI's first developer event
(16:05) GM's Cruise suspends supervised and manual car trips, expands probes
(18:30) Under Fire Over Robotaxi Safety, GM Halts Production Of Cruise Driverless Van
(20:55) Microsoft unveils first AI chip, Maia 100, and Cobalt CPU
(21:35) AV Startup May Mobility Locks Up Another $105M From NTT, Toyota
(22:38) Aleph Alpha raises $500m Series B in Europe's third-largest AI round ever
(23:22) AI startup Twelve Labs attracts US$10 mln from Nvidia, Intel, others
(24:16) Defense Tech Startup Shield AI Raises $200M At $2.7B Valuation
Tools & Apps
(25:00) Elon Musk debuts 'Grok' AI bot to rival ChatGPT, others
(27:38) Brave responds to Bing and ChatGPT with a new 'anonymous and secure' AI chatbot
(30:03) ChatGPT is combining its different abilities into a single 'Voltron-style' chat
(31:01) LinkedIn's new AI chatbot wants to help you get a job
(32:07) Meet Samsung's Answer to ChatGPT: A New AI Model Called Gauss
(33:33) Humane's AI Pin: all the news about the new AI-powered wearable
Projects & Open Source
(36:06) Valued at $1B, Kai-Fu Lee's LLM startup unveils open source model
Research & Advancements
(40:03) Google DeepMind wants to define what counts as artificial general intelligence
(42:35) Google DeepMind breaks new ground with 'Mirasol3B' for advanced video analysis
(45:15) MetNet-3: A state-of-the-art neural weather model available in Google products
(46:06) Instant3D: Instant Text-to-3D Generation
(47:54) One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion
(49:12) Holistic Evaluation of Text-To-Image Models
Policy & Safety
(51:37) Biden Issues Executive Order to Create A.I. Safeguards
(56:10) Amy Klobuchar and John Thune introduce legislation for creating generative AI framework
(57:55) Midjourney, Stability AI and DeviantArt win a victory in copyright case by artists — but the fight continues
(01:00:19) At UK's AI Summit developers and govts agree on testing to help manage risks
(01:02:15) White faces generated by AI are more convincing than photos, finds survey
Synthetic Media & Art
(01:04:12) Striking Actors and Hollywood Studios Agree to a Deal
(01:07:46) Google is embedding inaudible watermarks right into its AI generated music
(01:10:10) YouTube previews AI tool that clones famous singers — with their permission
(01:11:08) Microsoft is bringing AI characters to Xbox
(01:11:49) CBS News Launches New Venture To Identify AI Deepfakes And Misinformation
AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
In this episode, we discuss the latest move by Kai-Fu Lee's LLM startup, now valued at $1 billion, as it unveils its new open source model, diving into the potential impact this could have on the AI community and the broader technology landscape. We'll explore the implications of making such advanced AI technology openly accessible and what it signifies for the future of AI development and collaboration. Invest in AI Box: https://republic.com/ai-box Get on the AI Box Waitlist: https://AIBox.ai/ Facebook Community: https://www.facebook.com/groups/739308654562189 Follow me on X: https://twitter.com/jaeden_ai
We cannot predict the future, but we can explore it. /// Jorge Alor | @elpadrino Mario Valle | @bilbeny Jaime Limón | @mrlemon /// Learn more about your ad choices. Visit megaphone.fm/adchoices
(0:00) Bestie Intros: Sacks keeps receipts! (1:14) Jared Kushner joins the show: background, Trump's campaign validation (13:12) State of Israel vs Hamas, escalation risks (23:04) Historical context around Israel's relationship with the Arab world, understanding the modern Middle East (38:55) Failed solutions, Israel's response, paths to stability (1:04:54) GOP debate, establishment blind spots, pragmatic politics, tribal infighting (1:15:36) Improving macro picture, potential impact on 2024 election cycle (1:28:03) Russia-Ukraine (1:32:53) Big week in AI: OpenAI DevDay, xAI launches Grok, Kai-Fu Lee's announcement Follow the besties: https://twitter.com/chamath https://linktr.ee/calacanis https://twitter.com/DavidSacks https://twitter.com/friedberg Follow Jared: https://twitter.com/jaredkushner Follow the pod: https://twitter.com/theallinpod https://linktr.ee/allinpodcast Intro Music Credit: https://rb.gy/tppkzl https://twitter.com/yung_spielburg Intro Video Credit: https://twitter.com/TheZachEffect Referenced in the show: https://twitter.com/DavidSacks/status/1722342698110304752 https://twitter.com/DavidSacks/status/1722288226235498511 https://www.youtube.com/watch?v=In1MYEc0-10 https://fred.stlouisfed.org/series/WM2NS https://fred.stlouisfed.org/series/CORESTICKM159SFRBATL https://fred.stlouisfed.org/series/T10YIE https://fred.stlouisfed.org/series/MMMFFAQ027S https://www.google.com/finance/quote/ADYEY:OTCMKTS https://www.google.com/finance/quote/DASH:NASDAQ https://www.google.com/finance/quote/DDOG:NASDAQ https://twitter.com/sundeep/status/1722635065983897910 https://twitter.com/elonmusk/status/1721029443160772875 https://twitter.com/kaifulee/status/1721321096727994590 https://openai.com/blog/introducing-gpts https://openai.com/blog/new-models-and-developer-products-announced-at-devday https://twitter.com/chamath/status/1721234497482670438
ON THIS INSTALLMENT…DONATE: PAYPAL.ME/JAPANWUT … We discuss a robot worker killing a human worker in South Korea, economic inconsistencies in why Japan is doing so great and so badly at the same time, and how the Philippines is moving away from China's BRI and toward Japan's and the US' wide open arms.
Follow Matt: Twitter / Facebook Page / Instagram
Official Website: matthewpmbigelow.com
Show Notes for Episode 117
SOCIETY 5.0
Japan eyes $13bn in aid for chips, generative AI in stimulus budget
An industrial robot crushed a worker to death at a vegetable packing plant in South Korea
Japan
Japan's April-Sept current surplus triples to record ¥12.7 tril
Japan household spending down for 7th month in Sept. as wages slide
Germany set to overtake Japan as 3rd-largest economy
Kishida announces stimulus package of 17 trillion yen
Albert Edwards: "Will The BOJ Fiasco Be The Straw That Breaks The Back Of US Debt"
Japan's green transition bonds find 'more open-minded' investors
Massive trove of ancient coins dug up in Gunma Prefecture | The Asahi Shimbun: Breaking News, Japan News and Analysis
Japan whaling mother ship calls it a day amid industry stupor | The Asahi Shimbun: Breaking News, Japan News and Analysis
EAT THE BUGS
New "zazamushi" (aquatic insect) menu items served in Tokyo, created by Kamiina Agricultural High School students, Nagano Prefecture (東京で「ザザムシ」新メニュー提供 上農高生が創作 長野県)
CHINA
Why the Philippines is exiting the Belt and Road
Valued at $1B, Kai-Fu Lee's LLM startup unveils open source model
Asia's first Gay Games to kick off in Hong Kong, fostering hopes for wider LGBTQ+ inclusion | The Asahi Shimbun: Breaking News, Japan News and Analysis
China's AI Regulations and How They Get Made - Carnegie Endowment for International Peace
SGOTW
Vietnamese nationals now account for the most arrests of foreigners, surpassing Chinese nationals, against a background that includes the disappearance of technical trainees
The term Artificial Intelligence has been permeating our daily lives in an ever more forceful way (and a frightening one, for some). Some say it could steal jobs; some say it could create incredible opportunities. And how does it relate to personal finance? We invited Magno Maciel, a specialist and professor on the subject, to give us a lesson about it. He busted myths, brought examples and curiosities, and commented on current and future applications of this AI (or IA, in Portuguese). Don't miss it, and stay tuned. AI won't take your job, but someone who knows AI might try...
To learn more about Magno:
Instagram: https://www.instagram.com/magnomaciel.tech/
Linkedin: https://www.linkedin.com/in/magnomaciel/
Book tips:
- "Inteligência artificial" by Kai-Fu Lee
- "2041: Como a inteligência artificial vai mudar sua vida nas próximas décadas" by Kai-Fu Lee and Chen Qiufan
Episodes mentioned: Ep 27 "Ferramentas Digitais" with Magno Maciel
Subscribe to the channel, follow us on social media, and interact with us:
Caco Santos: @cacosantos_cfp linkedin.com/in/cacosantos-cfp/
Leandro Paiva: @planejador_leandro_paiva linkedin.com/in/leandro-leal-paiva-b329b32
What is the most important thing to you? Money? Fame? Material possessions? Impact? Maybe you don't have to pick one. Is it possible to have it all? The short answer is yes, BUT at what cost? Tune in to today's episode with one of the most successful artificial intelligence investors of all time as we follow his journey and what can happen as a result of having it all.
Source: What I learned being a workaholic | Dr. Kai-Fu Lee
Connect with Kai-Fu Lee: LinkedIn: Kai-Fu Lee Instagram: kaifu_lee Twitter: Kai-Fu Lee @kaifulee
Books: AI 2041: Ten Visions for Our Future (Most Recent) My Journey into AI: The Story Behind the Man Who Helped Launch 5 A.I. Companies Worth $25 Billion
Hosted by Malikee Josephs (Pronounced Muh leek Jo seffs)
Give Me A Shout: Follow Me On Instagram @DepressionDetoxShow. Email me: mj@depressiondetoxshow.com
Support The Show: Donate
This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time. ■Reference Source https://www.ted.com/talks/kai_fu_lee_how_ai_can_save_our_humanity ■Post on this topic (You can get FREE learning materials!) https://englist.me/167-academic-words-reference-from-kai-fu-lee-how-ai-can-save-our-humanity-ted-talk/ ■Youtube Video https://youtu.be/M8K0AHze5u0 (All Words) https://youtu.be/gB724wqoR48 (Advanced Words) https://youtu.be/Q6mr4EyYsAY (Quick Look) ■Top Page for Further Materials https://englist.me/ ■SNS (Please follow!)
Chen Qiufan (AKA Stanley Chan) is an award-winning science fiction writer, screenwriter, creative producer, and columnist. He is the president of the World Chinese Science Fiction Association and the founder of the content development studio Thema Mundi. Chen joins the show to discuss his latest book, AI 2041: Ten Visions for Our Future, which he co-wrote with former Google China president Kai-Fu Lee. Part science fiction, part science forecasting, AI 2041 imagines, across ten short stories, the different ways, good and bad, that AI will impact our society. The central thesis? AI will transform our lives, but we remain masters of our fate.
Important Links: Qiufan's Website Qiufan's Twitter
Show Notes:
Qiufan's sci-fi influences
When did the third wave of AI begin?
Why is modern sci-fi so dystopian?
How AI is going to impact education
Hidden biases & the objective function
Deep fakes & narrative collapse
Accelerationism, balance & Daoism
Do we need real jobs?
Happiness is a byproduct
Living in a post-scarcity society
What's next? MORE!
Books Mentioned:
AI 2041: Ten Visions for Our Future; by Kai-Fu Lee & Chen Qiufan
Bullshit Jobs: A Theory; by David Graeber
Tao Te Ching; by Lao Tzu
Trekonomics: The Economics of Star Trek; by Manu Saadia
Waste Tide; by Chen Qiufan
Imagine a world where your insurance company tries to interfere in your social relationships while Cinderella teaches your children's classes... No, these are not scenarios for future episodes of "Black Mirror" but the well-documented predictions of Taiwanese researcher Kai-Fu Lee, projected for the year... 2042! We put the work of this artificial intelligence (AI) specialist under the magnifying glass in La Loupe with Laetitia Strauch-Bonart, editor-in-chief of the Ideas desk at L'Express.
This episode was first broadcast on October 26, 2022.
Find all the details of the episode here and sign up for our newsletter.
The team:
Writing: Margaux Lannuzel
Presentation and editorial direction: Xavier Yvon
Editing: Charlotte Baris
Production: Jules Krot
Credits: Netflix
Music and sound design: Emmanuel Herschon / Studio Torrent
Logo: Anne-Laure Chapelain / Thibaut Zschiesche
Write to us: laloupe@lexpress.fr
Hosted by Acast. Visit acast.com/privacy for more information.
This week Paul and Mike talk about three AI stories in the news and add their take on these developments and what they mean for marketers and business leaders. On October 4, the White House released what it calls an "AI Bill of Rights," a document that offers a blueprint of "five principles that should guide the design, use, and deployment of automated systems." It is not binding in any way, legally or otherwise, but it is an important initial effort by the US government to draw attention to the impact of artificial intelligence on our daily lives. In an exclusive interview released in late September, Dr. Kai-Fu Lee, a world-leading AI expert, investor, and author, spoke with McKinsey about how business leaders can use AI in their operations and what it means to be a truly AI-driven company. Meta, the company formerly known as Facebook, just released Make-A-Video, an AI system that turns text prompts into machine-generated video clips. This means you can type in a prompt, like "a horse drinking water," and Make-A-Video will understand the prompt and create a video clip of it in a specific style. This type of generative AI does for video what a tool like DALL-E 2 does for images: it creates unique visuals from a simple text prompt in seconds. The tool isn't yet publicly available the way DALL-E 2 is, but it has major implications for businesses and creators.
In the latest episode of Looking Outside, we explore Exploration itself, and what it means to be truly curious about people, places and the future, with Zak Dychtwald, Founder of Young China Group and author of Young China. Zak shares how he approaches exploration: from visiting countries that are 'misunderstood', to immersing himself in stories about the far-off possible future, to building an 'intimacy' with foreign cultures, places and people that you wouldn't get from descriptions by outsiders. Zak and Jo also discuss their love of science fiction. It was Zak's love of sci-fi that drove him to study, live and start a company in China, a place he found more unique and exciting than any other part of the world, more indicative of the future, and greatly misunderstood by foreigners. Jo and Zak also speak to the responsibility of science fiction to craft more positive futures, vs the dystopian ones we're used to fearing, because those visions are, as Zak says, 'seeds into the consciousness of the world'; they help others imagine what could be and ultimately influence the futures that are created.
To look outside, Zak likes to read (a lot). When Zak is about to travel, he likes to read stories, narrative non-fiction or modern fiction, written by authors from the places he's about to visit. Zak believes in the power of stories to impact people.
On the show we mentioned:
AI 2041 by Kai-Fu Lee & Chen Qiufan: https://www.amazon.com/AI-2041-Ten-Visions-Future/dp/B08SFL53HL/
Ray Bradbury: https://www.amazon.com/s?k=Ray+Bradbury&i=audible
Abundance by Peter Diamandis (co-founder of Singularity University): https://www.amazon.com/Abundance-Future-Better-Than-Think/dp/145161683X/
The Three-Body Problem by Cixin Liu: https://www.amazon.com/The-Three-Body-Problem-audiobook/dp/B00P00QPPY/
Waste Tide by Chen Qiufan: https://www.amazon.com/Waste-Tide-audiobook/dp/B07R7RPG34/
Young China by Zak himself: https://www.amazon.com/Young-China-Restless-Generation-Country/dp/1250078814
Great sci-fi including Minority Report, Solaris, Star Trek, Interstellar.
Zak Dychtwald is the author of the critically acclaimed Young China: How the Restless Generation Will Change Their Country and the World and founder of market insights firm Young China Group. Zak's work has been featured in Harvard Business Review, Wall Street Journal, Bloomberg, The Atlantic, and many other platforms. A fluent Mandarin speaker with over a decade on the ground in China, Zak is an expert in dissecting China's consumer trends and cultural norms and making China approachable and actionable for global thinkers, companies, and teams.
Zak has been invited to share his expertise with audiences on six continents at events such as the Wall Street Journal CEO Council Summit, Aspen Ideas Festival, Google Talks, Cannes Lions International Festival of Creativity, TEDx and more. Find out more about Young China Group at https://youngchinagroup.com/ and about Zak at https://zakdychtwald.com.
Narratives of the future play an important role in shaping our reality. Depending on the point of view from which they are crafted, they can describe the hopes and fears of citizens, the political project of the ruling classes, or offer alternatives to the status quo. In this episode of The Sound of Economics, Giuseppe Porcaro and Alicia García-Herrero are joined by LYU Guangzhao, who helped navigate some of China's science fiction works and the visions of the future they present.
Artwork and science fiction mentioned:
• China 2098 artwork by FAN Wennan
• Waste Tide by CHEN Qiufan
• AI 2041 by Kai-Fu Lee and CHEN Qiufan
• My Country Does Not Dream by HAN Song
• Ether by ZHANG Ran
• Silent City by MA Boyong
This episode is part of the ZhōngHuá Mundus series of The Sound of Economics. ZhōngHuá Mundus is a newsletter by Bruegel, bringing you monthly analysis of China in the world, as seen from Europe. Sign up now to receive it in your mailbox!
In 2017, leading AI expert Kai-Fu Lee shared a dire prediction: half of all jobs – both blue collar and white collar – could be automated within ten years, replacing the workforce with solutions built on artificial intelligence. Brad and Kai-Fu discuss what this coming change means for national economies and for people who care about their work. Kai-Fu lays out practical steps policy makers can take today to prepare, the three areas he believes human intelligence will continue to lead, and why he remains an AI optimist.Dr. Kai-Fu Lee has driven innovation in AI research and development for over three decades. He is the Chairman and CEO of Sinovation Ventures and President of Sinovation Venture's Artificial Intelligence Institute. Prior to founding Sinovation in 2009, Dr. Lee was the President of Google China, and a senior executive at Microsoft, SGI, and Apple. In the field of AI, Dr. Lee built one of the first game playing programs to defeat a world champion, as well as the world's first large-vocabulary, speaker-independent continuous speech recognition system. His bestselling book AI Superpowers: China, Silicon Valley, and the New World Order discusses US-China co-leadership in the age of AI, as well as the greater societal impacts wrought by the AI technology revolution. His new co-authored book AI 2041 explores how AI will change our world over the next 20 years.Click here for the episode transcript.
IN THIS EPISODE, YOU'LL LEARN:
01:06 - What is the secret to the success of Silicon Valley?
03:40 - How to manage a company that is growing at 2.5% – per day!
19:54 - How to use your network to find new job opportunities.
26:11 - Why financing and distribution are just as important as the product you are selling.
40:17 - Ask The Investor's Podcast: How do I value fast-growing unlisted companies?
*Disclaimer: Slight timestamp discrepancies may occur due to podcast platform differences.
BOOKS AND RESOURCES
The Investor's Podcast's episode about Reid Hoffman's book, The Start-up of You.
Kai-Fu Lee's book, AI Superpowers – Read reviews of this book.
One of Steve Jobs' favorite books, Inside the Tornado – Read reviews of this book.
John Carreyrou's book, Bad Blood – Read reviews of this book.
Our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
Check out our favorite Apps and Services.
New to the show? Check out our We Study Billionaires Starter Packs.
Find Pros & Fair Pricing for Any Home Project for Free with Angi.
Protect your online activity TODAY with ExpressVPN, the VPN rated #1 by CNET and Wired, and get an extra 3 months FREE on a one-year package.
Take the next step in your working life or get ready for a change, by being a Snooze franchise partner. To learn more, head to Snooze.com.au and scroll down the page for "franchising".
Confidently take control of your online world without worrying about viruses, phishing attacks, ransomware, hacking attempts, and other cybercrimes with Avast One.
Personalize your plans for improving your metabolism, reducing stress, improving sleep, and optimizing your health with InsideTracker. Use discount code TIP to get 20% off the entire InsideTracker store.
Every 28 seconds an entrepreneur makes their first sale on Shopify. Access powerful tools to help you find customers, drive sales, and manage your day-to-day. Start a FREE fourteen-day trial right now!
Invest in high quality, cash flowing real estate without all of the hassle with Passive Investing.
Use Keeper Security's enterprise password management platform to enforce strong passwords and to make it easy for your teams to securely share credentials.
Connect all your apps, automate routine tasks, and streamline your processes with Zapier.
Browse through all our episodes (complete with transcripts) here.
Support our free podcast by supporting our sponsors.
HELP US OUT!
What do you love about our podcast? Here's our guide on how you can leave a rating and review for the show. We always enjoy reading your comments and feedback!
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Noel Gallagher said, "I don't live to work; I work to live." I first heard this expression when I was living in Spain, and a Spanish friend told me that "you Americans live to work, while we Spaniards work to live." As I look back, I should have listened more closely to my friend's wise words, because they are the keys to living a rewarding life. In this episode, I discuss the story of Kai-Fu Lee and the experiences of Australian author Bronnie Ware. I use these stories to discuss why experiences, not the pursuit of possessions, are the keys to enjoying a rewarding life. I describe the scientific benefits of experiences and how you can relate this to your own journey of intentionality. You work to live, not live to work. Work on what brings you happiness. The tools from today's episode ought to be helpful for anyone looking to understand how to create a rewarding life.
Our Patreon Page: https://www.patreon.com/passionstruck.
Thank you to our sponsors:
MAGIC MIND. Get 20% off https://magicmind.co/passionstruck; use code passionstruck.
SURF SHARK: Get 83% off Surf Shark VPN and three extra months free https://surfshark.deals/passionstruck; use the promo code passionstruck.
CODA: get started having your team all working together on the same page for FREE. Head over to Coda.io/PASSIONSTRUCK.
COINBASE: For a limited time, new users can get $10 in free Bitcoin when you sign up today at Coinbase.com/PASSIONSTRUCK
Thank you for watching this video. I hope you keep up with the weekly videos I post on the channel, subscribe, and share your learnings with those who need to hear them. Your comments are my oxygen, so please take a second and say 'Hey' ;).
-- ► Subscribe to My Channel Here: https://www.youtube.com/c/JohnRMiles
Links:
Top Five Regrets of the Dying: A Life Transformed by the Dearly Departing by Bronnie Ware
Best Self by Mike Bayer
My Journey into AI: The Story Behind the Man Who Helped Launch 5 A.I. Companies Worth $25 Billion by Kai-Fu Lee
AI 2041: Ten Visions for Our Future by Kai-Fu Lee and Chen Qiufan
The Happiness Track by Emma Seppala
The Happiness Project by Gretchen Rubin
The Art of Making Memories by Meik Wiking
Socials:
* Twitter: https://twitter.com/Milesjohnr
* Facebook: https://www.facebook.com/johnrmiles.c0m
* Medium: https://medium.com/@JohnRMiles
* Instagram: https://www.instagram.com/john_r_miles
* LinkedIn: https://www.linkedin.com/in/milesjohn/
* Blog: https://passionstruck.com/blog/
* Instagram: https://www.instagram.com/passion_struck_podcast/
* Gear: https://www.zazzle.com/store/passion_struck/
Timestamps:
0:00 Keys to a Rewarding Life Introduction
2:57 Surfshark and Coda
5:37 The Story of Kai-Fu Lee
8:42 Valuing possessions over shared experiences
11:48 Coinbase and Magic Mind
14:03 How Bronnie Ware chronicled top deathbed remorse
15:11 Why we are far more than the sum of our possessions
16:23 Experiences improve our interpersonal relationships
17:09 The difference between hedonistic vs eudaimonistic happiness
18:19 Having less clutter promotes being happy
19:20 Experiences reduce the comparison of oneself with others
20:03 Experiences provide better memories
20:51 How to experience the most rewarding life possible
23:40 Conclusion and Synthesis
-- John R Miles is a serial entrepreneur and the CEO and founder of Passion Struck. This full-service media company helps people live intentionally by creating best-in-class educational and entertainment content. John is also a prolific public speaker, venture capitalist, and author named to the ComputerWorld Top 100 IT Leaders.
John is the host of the Passion Struck Podcast, a show focused on exploring the mindset and philosophy of the world's highest achievers to learn their lessons for living intentionally. Passion Struck aspires to speak to the humanity of people in a way that makes them want to live better, be better, and make an impact. Stay tuned for John's latest project, his upcoming book, which will be published in summer 2022. Learn more about me: https://johnrmiles.com. New to this channel and the Passion Struck podcast? Check out our starter packs, which are our favorite episodes grouped by topic, to give you a sense of all the podcast has to offer. Go to Spotify or https://passionstruck.com/starter-packs/. Like this? Please join me on my new platform for peak performance, life coaching, self-improvement, intentional living, and personal growth: https://passionstruck.com/ and sign up for our email list.
How will AI evolve in the future? Can AI replace humans in the future? How much do we really understand AI? In this episode (Part 1 of 2), my long-time friend Kai-Fu Lee, computer scientist, businessman, and writer, came on The James Altucher Show to talk about his new book, AI 2041: Ten Visions for Our Future. It's an amazing book about the future of AI: unlike all the other AI books out there, he forecasts the future of AI through imaginative storytelling! Listen to this Part 1 of 2; Part 2 comes out the same day too! My new book Skip The Line is out! Make sure you get a copy wherever books are sold! Join the You Should Run For President 2.0 Facebook Group, where we discuss why you should run for president. I write about all my podcasts! Check out the full post and learn what I learned at jamesaltucher.com/podcast. Thanks so much for listening! If you like this episode, please subscribe to "The James Altucher Show" and rate and review wherever you get your podcasts: Apple Podcasts, Stitcher, iHeart Radio, Spotify. Follow me on Social Media: YouTube, Twitter, Facebook.
------------
What do YOU think of the show? Head to JamesAltucherShow.com/listeners and fill out a short survey that will help us better tailor the podcast to our audience!
Are you interested in getting direct answers from James about your question on a podcast? Go to JamesAltucherShow.com/AskAltucher and send in your questions to be answered on the air!
------------
Visit Notepd.com to read our idea lists & sign up to create your own!
My new book, Skip the Line, is out! Make sure you get a copy wherever books are sold!
Join the You Should Run for President 2.0 Facebook Group, where we discuss why you should run for President.
I write about all my podcasts! Check out the full post and learn what I learned at jamesaltuchershow.com
------------
Thank you so much for listening! If you like this episode, please rate, review, and subscribe to "The James Altucher Show" wherever you get your podcasts: Apple Podcasts, iHeart Radio, Spotify.
Follow me on social media: YouTube, Twitter, Facebook, LinkedIn.
In this Part 2 (of 2), Kai-Fu Lee, computer scientist, businessman, and writer, explains The Good, The Bad, and the Ugly of AI's future: how a good intention could lead to a bad or undesirable outcome. How does privacy come into play with AI? Are people comfortable with sharing their personal data with AI? Can we actually have the "Good" AI? Listen to this Part 2 of 2, and make sure you check out Part 1 if you missed it! My new book Skip The Line is out! Make sure you get a copy wherever books are sold! Join the You Should Run For President 2.0 Facebook Group, where we discuss why you should run for president. I write about all my podcasts! Check out the full post and learn what I learned at jamesaltucher.com/podcast. Thanks so much for listening! If you like this episode, please subscribe to "The James Altucher Show" and rate and review wherever you get your podcasts: Apple Podcasts, Stitcher, iHeart Radio, Spotify. Follow me on Social Media: YouTube, Twitter, Facebook.
------------
What do YOU think of the show? Head to JamesAltucherShow.com/listeners and fill out a short survey that will help us better tailor the podcast to our audience!
Are you interested in getting direct answers from James about your question on a podcast? Go to JamesAltucherShow.com/AskAltucher and send in your questions to be answered on the air!
------------
Visit Notepd.com to read our idea lists & sign up to create your own!
My new book, Skip the Line, is out! Make sure you get a copy wherever books are sold!
Join the You Should Run for President 2.0 Facebook Group, where we discuss why you should run for President.
I write about all my podcasts! Check out the full post and learn what I learned at jamesaltuchershow.com
------------
Thank you so much for listening! If you like this episode, please rate, review, and subscribe to "The James Altucher Show" wherever you get your podcasts: Apple Podcasts, iHeart Radio, Spotify.
Follow me on social media: YouTube, Twitter, Facebook, LinkedIn.
As the Taliban crack down on women protesting in Kabul and shocking new segregation measures are imposed in Afghan universities, Shkula Zadran, who was Afghan Youth Representative to the United Nations in 2020, joins Christiane Amanpour to explain why she refuses to be silent despite the grave risks. Turning to the United States, Carol Moseley Braun, the first African American woman elected to the Senate, assesses Texas' new restrictive abortion law and what it means for the erosion of women's rights in the country. Then journalists Peter Baker and Susan Glasser reflect on the impact of Afghanistan on Biden's presidency and discuss their recent biography of James Baker. And our Hari Sreenivasan speaks to tech expert and best-selling author Kai-Fu Lee about his new collection of short stories, "AI 2041," in which he imagines how artificial intelligence will impact the way we live and work in the future. To learn more about how CNN protects listener privacy, visit cnn.com/privacy