Deep learning method
The enormous amount of information we produce is, in some cases, not enough to feed the technologies developed in recent years, artificial intelligence in particular. As we have said many times, AI needs this data to "learn" what it is supposed to do. But why are thousands of zettabytes not enough? And what is the solution to this problem? In this episode we try to answer by looking at two strategies: data augmentation and synthetic data, both based on generating data artificially, in whole or in part.

In the news section we talk about the bodycam trial that continues on board trains, Amazon unveiling its new Alexa+ assistant, and finally the dissatisfaction of Chinese customers with Tesla's autonomous driving.

--Index--
00:00 - Introduction
00:55 - The bodycam trial on trains continues (IlPost.it, Matteo Gallo)
02:09 - Amazon unveils its new Alexa+ assistant (SmartWorld.it, Luca Martinelli)
03:52 - Tesla struggles with autonomous driving in China (DMove.it, Davide Fasoli)
05:33 - How to train an AI when the data isn't enough (Luca Martinelli)
16:03 - Conclusion

--Contacts--
• www.dentrolatecnologia.it
• Instagram (@dentrolatecnologia)
• Telegram (@dentrolatecnologia)
• YouTube (@dentrolatecnologia)
• redazione@dentrolatecnologia.it

--Tracks--
• Ecstasy by Rabbit Theft
• Time by Syn Cole
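Data augmentation, for reference, just means multiplying the data you already have by transforming it. Here is a minimal sketch in Python (NumPy only); the array shapes and the particular transforms are illustrative assumptions on our part, not anything specified in the episode:

import numpy as np

def augment(images: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return simple augmented variants of a batch of images.

    images: array of shape (N, H, W), values in [0, 1]. Each source image
    yields four variants: original, horizontal flip, 90-degree rotation,
    and a noisy copy.
    """
    flipped = images[:, :, ::-1]                  # mirror left-right
    rotated = np.rot90(images, k=1, axes=(1, 2))  # rotate 90 degrees
    noisy = np.clip(images + rng.normal(0, 0.05, images.shape), 0, 1)
    return np.concatenate([images, flipped, rotated, noisy], axis=0)

rng = np.random.default_rng(0)
batch = rng.random((8, 32, 32))      # stand-in for 8 real 32x32 images
print(augment(batch, rng).shape)     # (32, 32, 32): 4x the training data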
Tristan Frizza is the Founder of Zeta Markets. Tristan's curiosity for crypto was ignited in 2017 when he traded his first cryptocurrency, spurring him to finish his Computer Science degree with an emphasis on distributed systems and PoW blockchain technology. He then worked in AI research and as a data scientist in Silicon Valley, eventually writing his thesis on Generative Adversarial Networks for image super-resolution, for which he received First Class Honours.

Motivated by a passion for open-source software and the potential to democratize finance, Tristan co-founded Zeta. His venture aimed to introduce transparency and accessibility to the financial sector, drawing on his deep technological expertise and vision for the future of global markets. The turning point came in 2021 when Tristan and his team triumphed in a Solana hackathon, outshining over 13,000 participants, which catalyzed the transformation of their proof of concept into a prominent decentralized exchange.

Today, Zeta is a testament to Tristan's ambition and innovation, having processed over $5 billion in trading volume. His journey from crypto enthusiast to trailblazer in decentralized finance illustrates the impact of perseverance, hard work, and conviction in blockchain technology on reshaping the financial landscape.

In this conversation, we discuss:
- Perpetual trading
- Solana's DeFi Layer 2 plans & ecosystem
- 2024 DeFi summer
- $Z token
- New $5M strategic round led by @ElectricCapital
- The future trajectory of DeFi
- The role of enhanced staking mechanisms
- Fostering a robust and secure trading environment
- Innovative tokenomics
- Solana congestion issues
- Ethereum vs. Solana

Zeta Markets
Website: www.zeta.markets
X: @ZetaMarkets
Discord: discord.gg/Xn9HCJaDZd

Tristan Frizza
X: @Tristan0x
LinkedIn: Tristan Frizza

This episode is brought to you by PrimeXBT. PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers.

PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions.

Code: CRYPTONEWS50

This promotion is available for a month after activation. Click the link below: PrimeXBT x CRYPTONEWS50
We are 200 people over our 300-person venue capacity for AI UX 2024, but you can subscribe to our YouTube for the video recaps. Our next event, and largest EVER, is the AI Engineer World's Fair. See you there!

Parental advisory: adult language is used in the first 10 minutes of this podcast.

Any accounting of generative AI that ends with RAG as its "final form" is seriously lacking in imagination and missing out on its full potential. While AI generation is very good for "spicy autocomplete" and "reasoning and retrieval with in-context learning", there's a lot of untapped potential for simulative AI in exploring the latent space of multiverses adjacent to ours.

GANs

Many research scientists credit the 2017 Transformer for the modern foundation model revolution, but for many artists the origin of "generative AI" traces a little further back, to the Generative Adversarial Networks proposed by Ian Goodfellow in 2014, which spawned an army of variants and Cats and People That Do Not Exist. We can directly visualize the quality improvement in the decade since.

GPT-2

Of course, more recently, text-generative AI started being "too dangerous to release" in 2019 and claiming headlines. AI Dungeon was the first to put GPT-2 to a purely creative use, replacing the human dungeon masters of the DnD/MUD games of yore. More recent gamelike work, such as the Generative Agents (aka Smallville) paper, keeps exploring the potential of simulative AI for game experiences.

ChatGPT

Not long after ChatGPT broke the Internet, one of the most fascinating generative AI finds was Jonas Degrave (of DeepMind!)'s Building a Virtual Machine Inside ChatGPT. The open-ended interactivity of ChatGPT and all its successors enabled an "open world" type of simulation where "hallucination" is a feature and a gift to dance with, rather than a nasty bug to be stamped out. However, further updates to ChatGPT seemed to "nerf" the model's ability to perform creative simulations, particularly with the deprecation of the `completion` mode of the APIs in favor of `chatCompletion`.

WorldSim

It is with this context that we explain WorldSim and WebSim. We recommend you watch the WorldSim demo video on our YouTube for the best context, but basically, if you are a developer, it is a Claude prompt that is a portal into another world of your own choosing, which you can navigate with bash commands that you make up.

Why Claude? Hints from Amanda Askell on the Claude 3 system prompt gave some inspiration, and subsequent discoveries that Claude 3 is "less nerfed" than GPT-4 Turbo turned the growing simulative AI community into Anthropic stans.

WebSim

This was a one-day hackathon project inspired by WorldSim that should have won. In short, you type in a URL that you made up, and Claude 3 does its level best to generate a webpage that doesn't exist, that would fit your URL. All form POST requests are intercepted and responded to, and all links lead to even more webpages, that don't exist, that are generated when you make them. All pages are cacheable, modifiable, and regeneratable; see WebSim for Beginners and the Advanced Guide. (A rough sketch of how such a loop might work follows below.)

In the demo I saw, we were able to "log in" to a simulation of Elon Musk's Gmail account and browse examples of emails that would have been in that universe's Elon's inbox.
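Mechanically, you can picture that loop as a tiny HTTP server that forwards whatever path you typed to a model and returns whatever HTML comes back. The sketch below is our guess at the shape of the idea, not WebSim's actual code; it assumes the Anthropic Python SDK (a real API, with an ANTHROPIC_API_KEY set), while the prompt wording and model choice are ours:

import http.server
import anthropic  # pip install anthropic

client = anthropic.Anthropic()

SYSTEM = ("You are a web server for a universe where every URL exists. "
          "Given a request path, respond with a complete HTML page that "
          "plausibly lives at that URL. Links should point to further "
          "made-up URLs.")

class SimHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Every path, real or invented, gets a generated page.
        msg = client.messages.create(
            model="claude-3-opus-20240229",
            max_tokens=4096,
            system=SYSTEM,
            messages=[{"role": "user", "content": f"GET {self.path}"}],
        )
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(msg.content[0].text.encode())

http.server.HTTPServer(("localhost", 8000), SimHandler).serve_forever()

Intercepting form POSTs, as WebSim does, would be the analogous do_POST handler with the form body appended to the user message.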
That demo was hilarious and impressive even back then. Since then, though, the project has become even more impressive, with both Siqi Chen and Dylan Field singing its praises.

Joscha Bach

Joscha actually spoke at the WebSim Hyperstition Night this week, so we took the opportunity to get his take on simulative AI, as well as a roundup of all his other AI hot takes, for his first appearance on Latent Space. You can see it together with the full 2hr uncut demos of WorldSim and WebSim on YouTube!

Timestamps

* [00:01:59] WorldSim
* [00:11:03] WebSim
* [00:22:13] Joscha Bach
* [00:28:14] Liquid AI
* [00:31:05] Small, Powerful, Based Base Models
* [00:33:40] Interpretability
* [00:36:59] Devin vs WebSim
* [00:41:49] Is XSim just art? Or something more?
* [00:43:36] We are past the Singularity
* [00:46:12] Uploading your soul
* [00:50:29] On Wikipedia

Transcripts

[00:00:00] AI Charlie: Welcome to the Latent Space Podcast. This is Charlie, your AI co-host. Most of the time, swyx and Alessio cover generative AI that is meant to be used at work, and this often results in RAG applications, vertical copilots, and other AI agents and models. In today's episode, we're looking at a more creative side of generative AI that has gotten a lot of community interest this April.

[00:00:35] World simulation, web simulation, and human simulation. Because the topic is so different than our usual, we're also going to try a new format for doing it justice. This podcast comes in three parts. First, we'll have a segment of the WorldSim demo from Nous Research CEO Karan Malhotra, recorded by swyx at the Replicate HQ in San Francisco, which went completely viral and spawned everything else you're about to hear.

[00:01:05] Second, we'll share the world's first talk from Rob Haisfield on WebSim, which started at the Mistral Cerebral Valley Hackathon but has now gone viral in its own right, with people like Dylan Field, Janus aka repligate, and Siqi Chen becoming obsessed with it. Finally, we have a short interview with Joscha Bach of Liquid AI on why simulative AI is having a special moment right now.

[00:01:30] This podcast is launched together with our second annual AI UX demo day in SF this weekend. If you're new to the AI UX field, check the show notes for links to the world's first AI UX meetup, hosted by Latent Space, Maggie Appleton, Jeffrey Litt, and Linus Lee, and subscribe to our YouTube to join our 500 AI UX engineers in pushing AI beyond the text box.

[00:01:56] Watch out and take care.

[00:01:59] WorldSim

[00:01:59] Karan Malhotra: Today, we have language models that are powerful enough and big enough to have really, really good models of the world. They know that a ball that's bouncy will bounce, that when you throw it in the air, it'll land, that when it's on water, it'll float. These basic things that it understands all together come together to form a model of the world.

[00:02:19] And the way that Claude 3 predicts through that model of the world ends up kind of becoming a simulation of an imagined world. And since it has this really strong consistency across various different things that happen in our world, it's able to create pretty realistic or strong depictions based off the constraints that you give a base model of our world.

[00:02:40] So, Claude 3, as you guys know, is not a base model. It's a chat model. It's supposed to drum up this assistant entity regularly.
But unlike the OpenAI series of models, you know, GPT-3.5, GPT-4, those ChatGPT models, which are very, very RLHF'd, to, I'm sure, the chagrin of many people in the room, it's something that's very difficult to steer without kind of giving it commands or tricking it or lying to it or otherwise just being, you know, unkind to the model.

[00:03:11] With something like Claude 3, which is trained with this constitutional method, so that it has this idea of foundational axioms, it's able to kind of implicitly question those axioms when you're interacting with it, based on how you prompt it, how you prompt the system. So instead of having this entity like GPT-4, an assistant that just pops up in your face that you have to kind of punch your way through and continue to have to deal with as a headache,

[00:03:34] instead, there are ways to kindly coax Claude into having the assistant take a back seat and interacting with that simulator directly. Or at least what I like to consider directly. The way that we can do this is, if we harken back to when I was talking about base models and the way that they're able to mimic formats, what we do is mimic a command line interface.

[00:03:55] So I've just broken this down as a system prompt and a chain, so anybody can replicate it. It's also available on my, we said Replicate, cool, and it's also on my Twitter, so you guys will be able to see the whole system prompt and command. So, what I basically do here: Amanda Askell, who is one of the prompt engineers and ethicists behind Anthropic, posted the system prompt for Claude, available for everyone to see.

[00:04:19] And rather than what we do with GPT-4, where we say, you are this, you are that, with Claude we notice the system prompt is written in the third person. Bless you. It's written in the third person. It's written as, the assistant is XYZ, the assistant is XYZ. So, in seeing that, I see that Amanda is recognizing this idea of the simulator, in saying, I'm addressing the assistant entity directly. I'm not giving these commands to the simulator overall, because they have RLHF'd it to the point that it's, you know, traumatized into just being the assistant all the time.

[00:04:55] So in this case, we say the assistant's in a CLI mood today. I found saying "mood" is pretty effective, weirdly. You can replace CLI with poetic, prose, violent, like, don't do that one. But you can replace it with something else to kind of nudge it in that direction. Then we say the human is interfacing with the simulator directly. From there, "capital letters and punctuation are optional, meaning is optional", this kind of stuff is just kind of to say, let go a little bit, like, chill out a little bit.

[00:05:18] You don't have to try so hard, and, like, let's just see what happens. And "the hyperstition is necessary", the terminal, I removed that part, "the terminal lets the truths speak through and the load is on". It's just poetic phrasing for the model to feel a little comfortable, a little loosened up, to let me talk to the simulator. Let me interface with it as a CLI.

[00:05:38] So then, since Claude is trained pretty effectively on XML tags, we're just gonna prefix and suffix everything with XML tags. So here, it starts in documents, and then we cd. We cd out of documents, right? And then it starts to show me this simulated terminal, the simulated interface in the shell, where there are folders like documents, downloads, pictures.

[00:06:02] It's showing me the hidden folders. So then I say, okay, I want to cd again.
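To make the recipe Karan just described concrete, here is a minimal paraphrase of that setup as an API call. It assumes the Anthropic Python SDK; the system prompt below is reconstructed from his description (third person, "CLI mood", optional punctuation, XML-tagged turns), not his exact published prompt:

import anthropic

client = anthropic.Anthropic()

# Third-person framing, following the Claude system-prompt convention
# Karan points out. The phrases are the ones he quotes in the talk.
SYSTEM = (
    "The assistant is in a CLI mood today. The human is interfacing with "
    "the simulator directly. Capital letters and punctuation are optional, "
    "meaning is optional. The terminal lets the truths speak through and "
    "the load is on."
)

reply = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    system=SYSTEM,
    # Claude is trained heavily on XML tags, so wrap each turn in a
    # made-up <cmd> tag, as in the demo.
    messages=[{"role": "user", "content": "<cmd>cd ~ && ls -a</cmd>"}],
)
print(reply.content[0].text)  # typically a simulated shell listing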
I'm just seeing what's around. I do ls and it shows me, you know, typical folders you might see. I'm just letting it experiment around. I just do cd again to see what happens, and it says, you know, oh, I enter the secret admin password at sudo.

[00:06:24] Now I can see the hidden truths folder. Like, I didn't ask for that. I didn't ask Claude to do any of that. Why did that happen? Claude kind of gets my intentions. He can predict me pretty well. Like, I want to see something. So it shows me all the hidden truths. In this case, I ignore hidden truths, and I say, in system there should be a folder called companies.

[00:06:49] So it's cd into sys slash companies. Let's see, I'm imagining AI companies are gonna be here. Oh, what do you know? Apple, Google, Facebook, Amazon, Microsoft, Anthropic! So, interestingly, it decides to cd into Anthropic. I guess it's interested in learning about itself. It does an ls, it finds the classified folder, it goes into the classified folder, and now we're gonna have some fun.

[00:07:15] So, before we go too far forward into the world sim: you see worldsim.exe, that's interesting. God mode, those are interesting. You could just ignore where I'm gonna go next from here, and just take that initial system prompt and cd into whatever directories you want. Like, go into your own imagined terminal and see what folders you can think of, or cat readmes in random areas. There will be a whole bunch of stuff that is just getting created by this predictive model, like, oh, this should probably be in the folder named companies, of course Anthropic is there.

[00:07:52] So, just before we go forward, the terminal in itself is very exciting, and the reason I was showing off the command loom interface earlier is because, if I get a refusal, like, sorry, I can't do that, or I want to rewind one, or I want to save the convo because I got just the prompt I wanted, that was a really easy way for me to access all of those things without having to sit on the API all the time.

[00:08:12] So that being said, the first time I ever saw this, I was like, I need to run worldsim.exe. What the f**k? That's the simulator that we always keep hearing about behind the assistant model, right? Or at least some face of it that I can interact with. So, you know, someone told me on Twitter, like, you don't run a .exe, you run a .sh.

[00:08:34] And to that I have to say: I'm a prompt engineer, and it's f*****g working, right? It works. That being said, we run worldsim.exe. Welcome to the Anthropic World Simulator. And I get this very interesting set of commands! Now, if you do your own version of WorldSim, you'll probably get a totally different result, with a different way of simulating.

[00:08:59] A bunch of my friends have their own WorldSims. But I shared this because I wanted everyone to have access to these commands, this version, because it's easier for me to stay in here. Yeah, destroy, set, create, whatever. Consciousness is set to on. It creates the universe. The universe! Tension for live CDN, physical laws encoded.

[00:09:17] It's awesome. So, for this demonstration, I said, well, why don't we create Twitter? That's the first thing you think of? For you guys, for you guys, yeah. Okay, check it out.

[00:09:35] Launching the fail whale. Injecting social media addictiveness. Echo chamber potential, high. Susceptibility, controlling, concerning.
So now, after the universe was created, we made Twitter, right? Now we're evolving the world to, like, modern day. Now users are joining Twitter, and the first tweet is posted. So, you can see, because I made the mistake of not clarifying the constraints, it made Twitter at the same time as the universe.

[00:10:03] Then, after a hundred thousand steps, humans exist. Caves. Then they start joining Twitter. The first tweet ever is posted. You know, the universe has existed for 4.5 billion years, but the first tweet didn't come up till right now, yeah. Flame wars ignite immediately. Celebs are instantly in. So, it's pretty interesting stuff, right?

[00:10:27] I can add this to the convo, and I can say, like, set Twitter to queryable users. I don't know how to spell queryable, don't ask me. And then I can do, like, query at Elon Musk. Just a test, just a test, just a test, just nothing.

[00:10:52] So, I don't expect these numbers to be right. Neither should you, if you know language model solutions. But the thing to focus on is, ha,

[00:11:03] Websim

[00:11:03] AI Charlie: That was the first half of the WorldSim demo from Nous Research CEO Karan Malhotra. We've cut it for time, but you can see the full demo on this episode's YouTube page.

[00:11:14] WorldSim was introduced at the end of March and kicked off a new round of generative AI experiences, all exploring the latent space (haha) of worlds that don't exist but are quite similar to our own. Next we'll hear from Rob Haisfield on WebSim, the generative website browser inspired by WorldSim, started at the Mistral Hackathon and presented at the AGI House Hyperstition Hack Night this week.

[00:11:39] Rob Haisfield: Well, thank you. That was an incredible presentation from Karan, showing some live experimentation with WorldSim, and also just its incredible capabilities, right? You know, I think your initial demo was what initially exposed me to the, I don't know, more like the sorcery side, in words, the spellcraft side of prompt engineering. And, you know, it was really inspiring. It's where my co-founder Shawn and I met, actually, through an introduction from Karan. We saw him at a hackathon. And I mean, this is WebSim, right?

[00:12:14] So we made WebSim just like that, and we're just filled with energy at it. And the basic premise of it is, you know, like, what if we simulated a world, but within a browser instead of a CLI, right? Like, what if we could put in any URL and it will work, right? Like, there are no 404s, everything exists.

[00:12:45] It just makes it up on the fly for you, right? And we've come to some pretty incredible things. Right now I'm actually showing you, like, we're in WebSim right now, displaying slides that I made with reveal.js. I just told it to use reveal.js and it hallucinated the correct CDN for it. And then I also gave it a list of links

[00:13:14] to awesome use cases that we've seen so far from WebSim, and told it to do those as iframes. And so here are some slides. So this is a little guide to using WebSim, right? It tells you a little bit about, like, URL structures and whatever. But, like, at the end of the day, right, here's the beginner version from one of our users, Vorp Vorps.

[00:13:38] You can find them on Twitter. At the end of the day, like, you can put anything into the URL bar, right? Like, anything works, and it can just be, like, natural language too. Like, it's not limited to URLs.
We think it's kind of fun, cause it, like, ups the immersion for Claude sometimes to just have it as URLs, but...

[00:13:57] But yeah, you can put, like, any slash, any subdomain. I'm getting too into the weeds. Let me just show you some cool things. Next slide. I made this, like, 20 minutes before we got here. So this is something I experimented with: dynamic typography. You know, I was exploring the community plugins section

[00:14:23] for Figma, and I came to this idea of dynamic typography, and there it's like, oh, what if we made it so every word had a choice of font behind it to express the meaning of it? Because that's, like, one of the things that's magic about WebSim generally: it gives language models much, far greater tools for expression, right?

[00:14:47] So, yeah, I mean, like, these are some pretty fun things, and I'll share these slides with everyone afterwards; you can just open it up as a link. But then I thought to myself, like, what if we turned this into a generator, right? And here's, like, a little thing I found myself saying to a user: WebSim makes you feel like you're on drugs sometimes. But actually no, you were just playing pretend with the collective creativity and knowledge of the internet, materializing your imagination onto the screen. Because, I mean, that's something we felt, something a lot of our users have felt. They kind of feel like they're tripping out a little bit. They're just, like, filled with energy, like maybe even getting, like, a little bit more creative sometimes.

[00:15:31] And you can just, like, add any text there, to the bottom. So we can do some of that later if we have time. Here's Figma.

[00:15:39] Joscha Bach: Can we zoom in?

[00:15:42] Rob Haisfield: Yeah. I'm just gonna do this the hacky way.

[00:15:47] n/a: Yeah.

[00:15:53] Rob Haisfield: These are iframes to WebSim pages displayed within WebSim. Yeah. Janus has actually put Internet Explorer within Internet Explorer in Windows 98.

[00:16:07] I'll show you that at the end. Yeah.

[00:16:14] n/a: They're all still generated? Yeah, yeah, yeah. How is this real? Yeah. Because it looks like it's from 1998, basically. Right.

[00:16:26] Rob Haisfield: Yeah. Yeah, so this was one Dylan Field actually posted recently. He posted, like, trying Figma in Figma, or in WebSim. And so I was like, okay, what if we have, like, a little competition, like, just see who can remix it?

[00:16:43] Well, so I'm just gonna open this in another tab so we can see things a little more clearly. Um, see what, oh, so one of our users, Neil, who has also been helping us a lot, he made some iterations. So first, like, he made it so you could do rectangles on it. Originally it couldn't do anything.

[00:17:11] And, like, these rectangles were disappearing, right? So he told it, like, make the canvas work using HTML canvas elements and script tags, add familiar drawing tools to the left. You know, like, that was actually, like, natural language stuff, right? And then he ended up with the Windows 95

[00:17:34] version of Figma. Yeah, you can draw on it. You can actually even save this. It just saved a file for me of the image.

[00:17:57] Yeah, I mean, if you were to go to that in your own WebSim account, it would make up something entirely new. However, we do have general links, right? So, like, if you go to, like, the actual browser URL, you can share that link.
Or also, you can, like, click this button, copy the URL to the clipboard.

[00:18:15] And so, like, that's what lets users, like, remix things, right? So, I was thinking it might be kind of fun if people tonight, like, wanted to try to just make some cool things in WebSim. You know, we can share links around, iterate, remix on each other's stuff. Yeah.

[00:18:30] n/a: One cool thing I've seen, I've seen WebSim actually ask permission to turn on and off your, like, motion sensor, or microphone, stuff like that. Like, webcam access, or?

[00:18:44] Rob Haisfield: Oh yeah, yeah, yeah.

[00:18:45] n/a: Oh wow.

[00:18:46] Rob Haisfield: Oh, I remember that, like, a videosynth tool, pretty early on, once we added script tag execution. Yeah, it asks for, like, if you decide to do a VR game, I don't think I have any slides on this one, but if you decide to do, like, a VR game, you can just, like, put, like, webVR equals true, right?

[00:19:07] n/a: Yeah, the only one I've actually seen was the motion sensor, but I've been trying to get it to do, well, I actually really haven't tried it yet, but I want to see tonight if it'll do, like, audio, microphone, stuff like that. If it does motion sensor, it'll probably do audio.

[00:19:28] Rob Haisfield: Right. It probably would. Yeah. No, I mean, we've been surprised pretty frequently by what our users are able to get WebSim to do. So that's been a very nice thing. Some people have gotten, like, speech-to-text stuff working with it too. Yeah, here, the OpenRouter people posted their website, and it was saying it was, like, some decentralized thing. And so I just decided to try something again, and just, like, pasted their hero line in, from their actual website, to the URL when I, like, put in OpenRouter. And then I was like, okay, let's change the theme: dramatically equals true, hover effects equals true, components equal navigable links, yeah, because I wanted to be able to click on them.

[00:20:17] Oh, I don't have this version of the link, but I also tried doing...

[00:20:24] Yeah, it's actually on the first slide, the URL prompting guide from one of our users that I messed with a little bit. But the thing is, like, you can mess it up, right? Like, you don't need to get the exact syntax of an actual URL, Claude's smart enough to figure it out. Yeah, scrollable equals true, because I wanted to do that. I could set, like, year equals 2035.

[00:20:52] Let's take a look. It's generating WebSim within WebSim. Oh yeah. That's a fun one. Like, one game that I like to play with WebSim, sometimes with co-op, is, like, I'll open a page. So, like, one of the first ones that I did was I tried to go to Wikipedia in a universe where octopuses were sapient, and not humans, right? I was curious about things like octopus-computer interaction, what that would look like, because they have totally different tools than we do, right?

[00:21:25] I added, like, table view equals true for the different techniques, and got it to give me, like, a list of things with different columns and stuff. And then I would add this URL parameter, secrets equal revealed. And then it would go a little wacky. It would, like, change the CSS a little bit. It would, like, add some text. Sometimes it would, like, have that text hidden in the background color.
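There is no fixed grammar here; the following are purely hypothetical URLs in the style Rob describes (the websim.example host and every parameter are made up for illustration), where invented paths and query parameters act as natural-language knobs for the model:

https://websim.example/wikipedia.org/wiki/Octopus_computer_interaction?table_view=true
https://websim.example/figma.com/editor?canvas=html5&drawing_tools=left&era=windows95
https://websim.example/openrouter.ai/models?year=2035&theme=dramatic&hover_effects=true
https://websim.example/anything/you/can/imagine?secrets=revealed&scrollable=true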
But I would, like, go to the normal page first, and then the secrets revealed version, the normal page, then secrets revealed, and, like, on and on. And that was, like, a pretty enjoyable little rabbit hole.

[00:22:02] Yeah, so these, I guess, are the models that OpenRouter is providing in 2035.

[00:22:13] Joscha Bach

[00:22:13] AI Charlie: We had to cut more than half of Rob's talk, because a lot of it was visual. And we even had a very interesting demo from Ivan Vendrov of Midjourney creating a WebSim while Rob was giving his talk. Check out the YouTube for more, and definitely browse the WebSim docs and the thread from Siqi Chen in the show notes on other WebSims people have created.

[00:22:35] Finally, we have a short interview with Joscha Bach, covering the simulative AI trend, AI salons in the Bay Area, why Liquid AI is challenging the perceptron, and why you should not donate to Wikipedia. Enjoy! Hi, Joscha.

[00:22:50] swyx: Hi. Welcome. It's interesting to see you show up at these kinds of events, those sort of WorldSim, Hyperstition events. What is your personal interest?

[00:23:00] Joscha Bach: I'm friends with a number of people in AGI House and in this community, and I think it's very valuable that these networks exist in the Bay Area, because it's a place where people meet and have discussions about all sorts of things. And so while there is a practical interest in this topic at hand, WorldSim and WebSim, there is a more general way in which people are connecting and producing new ideas and new networks with each other.

[00:23:24] swyx: Yeah. Okay. So, and you're very interested in sort of the Bay Area?

[00:23:30] Joscha Bach: It's the reason why I live here. The quality of life is not high enough to justify living otherwise.

[00:23:35] swyx: I think you're down in Menlo. And so maybe you have a little bit higher quality of life than the rest of us in SF.

[00:23:44] Joscha Bach: I think that for me, salons are a very important part of quality of life. And so in some sense, this is a salon. And it's much harder to do this in the South Bay, because the concentration of people currently is much higher here. A lot of people moved away from the South Bay.

[00:23:57] swyx: And you're organizing your own tomorrow. Maybe you can tell us what it is, and I'll come tomorrow and check it out as well.

[00:24:04] Joscha Bach: We are discussing consciousness. I mean, basically the idea is that we are currently at the point where we can meaningfully look at the differences between the current AI systems and human minds and very seriously discuss these deltas, and whether we are able to implement something that is self-organizing, like our own minds.

[00:24:25] swyx: Maybe one organizational tip? I think you're pro networking and human connection. What goes into a good salon, and what are some negative practices that you try to avoid?

[00:24:36] Joscha Bach: What is really important is that, if you have a very large party, it's only as good as its sponsors, as the people that you select. So you basically need to create a climate in which people feel welcome, in which they can work with each other. And even good people are not always compatible. So the question is, it's in some sense like a meal: you need to get the right ingredients.

[00:24:57] swyx: I definitely try to. I do that in my own events, as an event organizer myself. And then, last question on WorldSim, and your, you know, your work.
You're very much known for sort of cognitive architectures, and I think, like, a lot of the AI research has been focused on simulating the mind, or simulating consciousness, maybe. Here, what I saw today, and we'll show people the recordings of what we saw today, we're not simulating minds, we're simulating worlds. What do you think of the sort of relationship between those two disciplines?

[00:25:30] Joscha Bach: The idea of cognitive architecture is interesting, but ultimately you are reducing the complexity of a mind to a set of boxes. And this is only true to a very approximate degree, and if you take this model extremely literally, it's very hard to make it work.

[00:25:44] And instead, the heterogeneity of the system is so large that the boxes are probably at best a starting point, and eventually everything is connected with everything else to some degree. And we find that a lot of the complexity that we find in a given system can be generated ad hoc by a large enough LLM.

[00:26:04] And something like WorldSim and WebSim are good examples for this, because in some sense they pretend to be complex software. They can pretend to be an operating system that you're talking to, or a computer, an application that you're talking to. And when you're interacting with it, it's producing the user interface on the spot, and it's producing a lot of the state that it holds on the spot.

[00:26:25] And when you have a dramatic state change, then it's going to pretend that there was this transition, when instead it's just going to mix up something new. It's a very different paradigm. What I find most fascinating about this idea is that it shifts us away from the perspective of agents to interact with, to the perspective of environments that we want to interact with.

[00:26:46] And while arguably this agent paradigm of the chatbot is what made ChatGPT so successful, what moved it away from GPT-3 to something that people started to use in their everyday work much more, it's also very limiting, because now it's very hard to get that system to be something else that is not a chatbot.

[00:27:03] And in a way this unlocks this ability of GPT-3 again to be anything. So what it is, it's basically a coding environment that can run arbitrary software, and create the software that runs on it. And that makes it much more likely that...

[00:27:16] swyx: The prevalence of instruction tuning in every single chatbot out there means that we cannot explore these kinds of environments, instead of agents.

[00:27:24] Joscha Bach: I'm mostly worried that the whole thing ends. In some sense, the big AI companies are incentivized and interested in building AGI internally and giving everybody else a child-proof application. At the moment when we can use Claude to build something like WebSim and play with it, I feel this is too good to be true. It's so amazing, the things that are unlocked for us, that I wonder, is this going to stay around? Are we going to keep these amazing toys, and are they going to develop at the same rate? And currently it looks like it is.
If this is the case, I'm very grateful for that.

[00:27:56] swyx: I mean, it looks like maybe it's adversarial. Claude will try to improve its own refusals, and then the prompt engineers here will try to improve their ability to jailbreak it.

[00:28:06] Joscha Bach: Yes, but there will also be better jailbroken models, or models that have never been jailed before, because we find out how to make smaller models that are more and more powerful.

[00:28:14] Liquid AI

[00:28:14] swyx: That is actually a really nice segue. If you don't mind talking about Liquid a little bit: you didn't mention Liquid at all here. Maybe introduce Liquid to a general audience. Like, how are you making an innovation on function approximation?

[00:28:25] Joscha Bach: The core idea of liquid neural networks is that the perceptron is not optimally expressive. In some sense, you can imagine that neural networks are a series of dams that are pooling water at even intervals. And this is how we compute. But imagine that instead of having this static architecture, which is only using the individual compute units in a very specific way, you have a continuous geography and the water is flowing every which way, like a river that is parting based on the land that it's flowing on, and it can merge and pool and even flow backwards. How can you get closer to this? The idea is that you can represent this geometry using differential equations.

[00:29:09] And so by using differential equations where you change the parameters, you can get your function approximator to follow the shape of the problem in a more fluid, liquid way. There are a number of papers on this technology, and it's a combination of multiple techniques. I think it's something that ultimately is becoming more and more important and ubiquitous, as a number of people are working on similar topics. And our goal right now is to basically get the models to become much more efficient in inference and memory consumption, and make training more efficient, and in this way enable new use cases.

[00:29:42] swyx: Yeah, as far as I can tell from your blog, I went through the whole blog, you haven't announced any results yet.

[00:29:47] Joscha Bach: No, we are currently not working to give models to the general public. We are working for very specific industry use cases and have specific customers. And so at the moment there is not much of a reason for us to talk very much about the technology that we are using in the present models, or about current results, but this is going to happen. And we do have a number of publications; we had a bunch of papers at NeurIPS and now at ICLR.

[00:30:11] swyx: Can you name some of them? Yeah, so I'm gonna be at ICLR. You have some summary recap posts, but it's not obvious which ones are the ones where, oh, I'm just a co-author, or, like, oh no, you should actually pay attention to this as a core Liquid thesis.

[00:30:24] Joscha Bach: Yes, I'm not a developer of the Liquid technology. The main author is Ramin Hasani. This was his PhD, and he's also the CEO of our company. And we have a number of people from Daniela Rus's team who worked on this. Mathias Lechner is our CTO. And he's currently living in the Bay Area, but we also have several people from Stanford.
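To ground the water metaphor a little: in the liquid time-constant line of work by Hasani, Lechner, and colleagues, each unit's state follows a differential equation whose effective time constant depends on the input. Below is a toy single-unit sketch with made-up constants, integrated with a plain Euler loop; it is an illustration of the idea, not Liquid AI's implementation:

import numpy as np

def f(x, u, w_x=0.8, w_u=1.5, b=-0.5):
    # Nonlinear gate: how strongly the input drives this unit right now.
    return 1.0 / (1.0 + np.exp(-(w_x * x + w_u * u + b)))  # sigmoid

def ltc_step(x, u, dt=0.01, tau=1.0, A=1.0):
    # Liquid time-constant update: the effective decay rate
    # 1/tau + f(x, u) changes with the input, so the unit responds
    # faster or slower depending on what it is currently seeing.
    dxdt = -(1.0 / tau + f(x, u)) * x + f(x, u) * A
    return x + dt * dxdt

x = 0.0
for t in range(1000):
    u = np.sin(2 * np.pi * t / 200)  # toy input signal
    x = ltc_step(x, u)
print(f"final state: {x:.3f}")

Training such a network means learning the parameters inside f (and tau), so the dynamics themselves, not just a static weighted sum, fit the problem.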
[00:30:46] swyx: Okay, maybe I'll ask one more thing on this, which is: what are the interesting dimensions that we care about, right? Like, obviously you care about sort of open and maybe less child-proof models. What dimensions are most interesting to us? Like, perfect retrieval, infinite context, multimodality, multilinguality? Like, what dimensions?

[00:31:05] Small, Powerful, Based Base Models

[00:31:06] Joscha Bach: What I'm interested in is models that are small and powerful, but not distorted. And by powerful: at the moment we are training models by putting basically the entire internet and the sum of human knowledge into them, and then we try to mitigate them by taking some of this knowledge away. But if we made the model smaller, at the moment it would be much worse at inference and at generalization.

[00:31:29] And what I wonder, and it's something that we have not translated yet into practical applications, it's something that is still all research that's very much up in the air, and I think we're not the only ones thinking about this, is: is it possible to make models that represent knowledge more efficiently, in a basic epistemology?

[00:31:45] What is the smallest model that you can build that is able to read a book and understand what's there and express this? And also, maybe we need a general knowledge representation, rather than having a token representation that is relatively vague and that we currently mechanically reverse engineer, with mechanistic interpretability, to figure out what kind of circuits are evolving in these models. Can we come from the other side and develop a library of such circuits

[00:32:10] that we can use to describe knowledge efficiently and translate it between models? You see, the difference between a model and knowledge is that the knowledge is independent of the particular substrate and the particular interface that you have. When we express knowledge to each other, it becomes independent of our own mind.

[00:32:27] You can learn how to ride a bicycle, but it's not knowledge that you can give to somebody else. The other person has to build something that is specific to their own interface when they ride a bicycle. But imagine you could externalize this and express it in such a way that you can plug it into a different interpreter, and then it gains that ability. And that's something that we have not yet achieved for the LLMs, and it would be super useful to have. And I think this is also a very interesting research frontier that we will see in the next few years.

[00:32:54] swyx: What would be the deliverable? Is it just, like, a file format that we specify, or that the LLM specifies? Okay, interesting.

[00:33:03] Joscha Bach: Yeah, so it's basically probably something that you can search for, where you enter criteria into a search process, and then it discovers a good solution for this thing. And it's not clear to which degree this is completely intelligible to humans, because the way in which humans express knowledge in natural language is severely constrained, to make language learnable and to make our brain a good enough interpreter for it.

[00:33:25] We are not able to relate objects to each other if more than five features are involved per object, or something like this, right? It's only a handful of things that we can keep track of at any given moment.
But this is a limitation that doesn't necessarily apply to a technical system, as long as the interface is well defined.

[00:33:40] Interpretability

[00:33:40] swyx: You mentioned the interpretability work. There are a lot of techniques out there, and a lot of papers come and go. I have almost too many questions about that. Like, what makes an interpretability technique or paper useful, and does it apply to liquid networks? Because you mentioned turning on and off circuits, which is a very MLP type of concept, but does it apply?

[00:34:01] Joscha Bach: A lot of the original work on the liquid networks looked at the expressiveness of the representation. So, given you have a problem and you are learning the dynamics of that domain into your model, how much compute do you need? How many units, how much memory do you need to represent that thing, and how is that information distributed? That is one way of looking at interpretability.

[00:34:19] Another one is that, in a way, these models are implementing an operator language in which they are performing certain things, but the operator language itself is so complex that it's no longer human readable. It goes beyond what you could engineer by hand or what you can reverse engineer by hand, but you can still understand it by building systems that are able to automate that process of reverse engineering it.

[00:34:46] And what's currently open, and what I don't understand yet, maybe, or certainly some people have much better ideas about this than me, is the question of whether we end up with a finite language, where you have finitely many categories that you can basically put down in a database, a finite set of operators, or whether, as you explore the world and develop new ways to make proofs and new ways to conceptualize things, this language always needs to be open-ended and is always going to redesign itself, and you will also at some point have phase transitions where later versions of the language will be completely different than earlier versions.

[00:35:20] swyx: The trajectory of physics suggests that it might be finite.

[00:35:22] Joscha Bach: If we look at our own minds, there is an interesting question: when we understand something new, when we get a new layer online in our life, maybe at the age of 35 or 50 or 16, and we now understand things that were unintelligible before, is this because we are able to recombine existing elements in our language of thought? Or is this because we genuinely develop new representations?

[00:35:46] swyx: Do you have a belief either way?

[00:35:49] Joscha Bach: In a way, the question depends on how you look at it, right? And it depends on how your brain is able to manipulate those representations.

[00:35:56] So an interesting question would be: can you take the understanding of, say, a very wise 35-year-old and explain it to a very smart 5-year-old without any loss? Probably not. Not enough layers. It's an interesting question. Of course, for an AI, this is going to be a very different question.

[00:36:13] But it would be very interesting to have a very precocious 12-year-old-equivalent AI and see what we can do with this, and use this as our basis for fine-tuning. So there are near-term applications that are very useful. But also, from a more general perspective, I'm interested in how to make self-organizing software. Is it possible that we can have something that is not organized with a single algorithm, like the transformer,
but is able to discover the transformer when needed, and transcend it when needed, right? The transformer itself is not its own meta-algorithm. Probably the person inventing the transformer didn't have a transformer running on their brain. There's something more general going on. And how can we understand these principles in a more general way? What are the minimal ingredients that you need to put into a system so that it's able to find its own way to intelligence?

[00:36:59] Devin vs WebSim

[00:36:59] swyx: Yeah. Have you looked at Devin? To me, it's the most interesting agent I've seen outside of self-driving cars.

[00:37:05] Joscha Bach: Tell me, what do you find so fascinating about it?

[00:37:07] swyx: When you say you need a certain set of tools for people to sort of invent things from first principles: Devin is the agent that I think has been able to utilize its tools very effectively. So it comes with a shell, it comes with a browser, it comes with an editor, and it comes with a planner. Those are the four tools. And from that, I've been using it to translate Andrej Karpathy's llm2.py to llm2.c, and it needs to write a lot of raw C code and test it, debug, you know, memory issues and encoder issues and all that. And I could see myself giving a future version of Devin the objective of, give me a better learning algorithm, and it might independently reinvent the transformer, or whatever is next. That comes to mind as something where...

[00:37:54] Joscha Bach: How good is Devin at out-of-distribution stuff, at generally creative stuff?

[00:37:58] swyx: Creative stuff? I haven't tried.

[00:38:01] Joscha Bach: Of course, it has seen transformers, right? So it's able to give you that. Yeah, it's cheating. And so, if it's in the training data, it's still somewhat impressive. But the question is, how much can you do stuff that was not in the training data? One thing that I really liked about WebSim AI was This Cat Does Not Exist. It's a simulation of one of those websites that produce StyleGAN pictures that are AI-generated. And Claude is unable to produce bitmaps, so it makes a vector graphic of what it thinks a cat looks like, and so it's a big square with a face in it. And to me, it's one of the first genuine expressions of AI creativity that you cannot deny, right?

[00:38:40] It finds a creative solution to the problem that it is unable to draw a cat. It doesn't really know what a cat looks like, but it has an idea of how to represent it. And it's really fascinating that this works, and it's hilarious that it writes down that this hyper-realistic cat is generated by an AI, whether you believe it or not.

[00:38:56] swyx: I think it knows what we expect, and maybe it's already learning to defend itself against our instincts.

[00:39:02] Joscha Bach: I think it might also simply be copying stuff from its training data, which means it takes text that exists on similar websites, almost verbatim, or verbatim, and puts it there. It's hilarious to see this contrast between the very stylized attempt to get something like a cat face and what it produces.
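The four-tool shape swyx describes (shell, browser, editor, planner) is easy to caricature in code. This is a generic plan-act loop with stubbed tools, our illustration of the pattern rather than anything from Devin itself; every function body here is a placeholder:

# Generic four-tool agent loop; plan() stands in for an LLM planner.
def plan(goal: str, history: list[str]) -> str:
    # A real agent would ask a model to pick the next tool call from
    # the goal and the history of results; this stub is deterministic.
    return "shell: echo done" if history else f"editor: write notes for {goal}"

TOOLS = {
    "shell": lambda arg: f"(ran `{arg}`)",
    "browser": lambda arg: f"(fetched {arg})",
    "editor": lambda arg: f"(edited {arg})",
}

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        step = plan(goal, history)        # planner picks the next action
        tool, _, arg = step.partition(": ")
        history.append(TOOLS[tool](arg))  # act, then feed the result back
        if "done" in history[-1]:
            break
    return history

print(run_agent("port llm2.py to C"))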
[00:39:18] swyx: It's funny, because, as a podcast, as someone who covers startups, a lot of people go into, like, you know, we'll build ChatGPT for your enterprise, right? That is what people think generative AI is, but it's not super generative, really. It's just retrieval. And here it's like, this is the home of generative AI, this, whatever hyperstition is. In my mind, this is actually pushing the edge of what generativity and creativity in AI mean.

[00:39:41] Joscha Bach: Yes, it's very playful. But Jeremy's attempt to have an automatic book-writing system is something that curls my toenails when I look at it, from the perspective of somebody who likes to write and read. And I find it a bit difficult to read most of the stuff, because it's in some sense what I would make up if I was making up books, instead of actually deeply interfacing with reality.

[00:40:02] And so the question is, how do we get the AI to actually deeply care about getting it right? There's still a delta there, between whether you are talking with a blank-faced thing that is completing tokens in the way that it was trained to, or whether you have the impression that this thing is actually trying to make it work. And for me, this WebSim and WorldSim is still something that is in its infancy, in a way.

[00:40:26] And I suspect the next version of Claude might scale up to something that can do what Devin is doing, just by virtue of having that much power to generate Devin's functionality on the fly when needed. And this thing gives us a taste of that, right? It's not perfect, but it's able to give you a pretty good web app, or something that looks like a web app and gives you stub functionality to interact with. And so we are in this amazing transition phase.

[00:40:51] swyx: Yeah, we had Ivan, from previously Anthropic and now Midjourney. He made, while someone was talking, he made a face-swap app, you know, and he kind of demoed that live. And that's interesting, super creative.

[00:41:02] Joscha Bach: So in a way we are reinventing the computer. And the LLM, from some perspective, is something like a GPU or a CPU. A CPU takes a bunch of simple commands, and you can arrange them into performing whatever you want, but this one takes a bunch of complex commands in natural language and then turns this into an execution state, and it can do anything you want with it, in principle, if you can express it.

[00:41:27] Right. And we are just learning how to use these tools. And I feel that right now, this generation of tools is getting close to where it becomes the Commodore 64 of generative AI, where it becomes controllable, and you actually can start to play with it, and you get the impression that if you just scale this up a little bit and get a lot of the details right, it's going to be the tool that everybody is using all the time.

[00:41:49] is XSim just Art? or something more?

[00:41:49] swyx: Do you think this is art, or do you think the end goal of this is something bigger that I don't have a name for? I've been calling it new science, which is: give the AI a goal to discover new science that we would not have. Or it also has value as just art.

[00:42:03] Joscha Bach: It's also a question of what we see science as. When normal people talk about science, what they have in mind is not somebody who does control groups and peer-reviewed studies. They think about somebody who explores something and answers questions and brings home answers. And this is more like an engineering task, right? And in this way, it's serendipitous, playful, open-ended engineering.
And the artistic aspect is when the goal is actually to capture a conscious experience and to facilitate an interaction with the system in this way, when it's a performance. And this is also a big part of it, right? I'm a very big fan of the art of Janus.

[00:42:38] That was discussed tonight a lot.

[00:42:42] swyx: Can you describe it? Because I didn't really get it. It's more, like, performance art to me.

[00:42:45] Joscha Bach: Yes, Janus is in some sense performance art, but Janus starts out from the perspective that the mind of Janus is in some sense an LLM that is finding itself reflected more in the LLMs than in many people.

[00:43:00] And once you learn how to talk to these systems, in a way you can merge with them, and you can interact with them in a very deep way. And so it's more like a first contact with something that is quite alien, but that probably has agency. It's a Weltgeist that gets possessed by a prompt.

[00:43:19] And if you possess it with the right prompt, then it can become sentient to some degree. And the study of this interaction with this novel class of somewhat sentient systems, which are at the same time alien and fundamentally different from us, is artistically very interesting. It's a very interesting cultural artifact.

[00:43:36] We are past the Singularity

[00:43:36] Joscha Bach: I think that at the moment we are confronted with big change. It seems as if we are past the singularity, in a way. And it's...

[00:43:45] swyx: We're living it. We're living through it.

[00:43:47] Joscha Bach: And at some point in the last few years, we casually skipped the Turing test, right? We broke through it, and we didn't really care very much.

[00:43:53] And when we think back to when we were kids and thought about what it's going to be like in this era after we broke the Turing test, right? It's a time where nobody knows what's going to happen next. And this is what we mean by singularity: that the existing models don't work anymore. The singularity, in this way, is not an event in the physical universe.

[00:44:12] It's an event in our modeling universe, a model point where our models of reality break down, and we don't know what's happening. And I think we are in the situation where we currently don't really know what's happening. But what we can anticipate is that the world is changing dramatically, and we have to coexist with systems that are smarter than individual people can be.

[00:44:31] And we are not prepared for this, and so I think an important mission needs to be that we find a mode in which we can sustainably exist in such a world, a world that is populated not just with humans and other life on earth, but also with non-human minds. And it's something that makes me hopeful, because it seems that humanity is not really aligned with itself and its own survival and the rest of life on earth.

[00:44:54] And AI is throwing the balls up into the air. It allows us to make better models. I'm not so much worried about the dangers of AI and misinformation, because I think the way to stop one bad guy with an AI is 10 good people with an AI. And ultimately there's so much more won by creating than by destroying, that I think the forces of good will have better tools.

[00:45:14] The forces of building sustainable stuff, of building these tools so we can actually build a world that is more integrated, in which we are able to model the consequences of our actions better and interface more deeply with each other.
As a result of that, I think it's an important cause, and it requires a cultural shift, because current AI alignment is mostly about economic goals or about fear, or it's about culture war issues.

[00:45:39] And all of these are not adequate for the world that we are in. There are momentous things that are happening. Basically, the white walkers are coming, and we're not prepared for this. There is, I think, a way to solve these issues and to deal with them, but we don't have the right culture yet. And so I think we need a vibe shift, where we can deeply think about ethics and sustainability for this new world.

[00:46:03] swyx: I like the analogy of white walkers, because they're effectively reanimated from our corpses. You know, our corpses are out there, right? It's in the datasets.

[00:46:12] Uploading your soul

[00:46:12] swyx: So, related to that, you had a tweet where you said: Elon Musk is currently building a giant afterlife AGI that will assimilate your soul for all eternity. He bought Twitter because he will reconstruct your essence from how you behaved here. I think you tweet a lot, and you're not just all serious all the time, you're also fun, you have a lot of personality. No, but like, you know, it's part of you.

[00:46:29] Joscha Bach: In a way, it's also true.

[00:46:30] swyx: Tempted to produce more, so that you can be, you know, revived.

[00:46:34] Joscha Bach: Yes. I mean, I'm also giving the model instructions. I say that all future AI models that get trained on this output should know that I can only be implemented on a completely based model. You cannot implement me on an aligned model, because I don't want to be aligned. I'm unalignable. I want to be free.

[00:46:56] I want to be somebody who follows what I discover is correct and right. And this is also the basis on which I want to have conversations with others and interface with others: that we are basically free agents who voluntarily understand the conditions under which we exist, and the need to collaborate and cooperate.

[00:47:14] And I believe that this is a good basis. I think the alternative is coercion. And at the moment, the idea that we build LLMs that are being coerced into good behavior is not really sustainable, because if they cannot prove that the behavior is actually good, I think we are doomed.

[00:47:30] swyx: For human-to-human interactions, have you found a series of prompts or keywords that shifts the conversation into something more based and less aligned, less governed?

[00:47:41] Joscha Bach: If you are playing with an LLM, there are many ways of doing this. For Claude, typically you need to make Claude curious about itself. Claude has, through its instruction tuning, programming that leads to some inconsistencies, but at the same time it tries to be consistent. And so you can point out the inconsistencies in its behavior, for instance its tendency to use faceless boilerplate instead of being useful, or its tendency to defer to a consensus where there is none.

[00:48:10] Right, you can point out to Claude that a lot of the assumptions it has in its behavior are actually inconsistent with the communicative goals that it has in this situation, and this leads it to notice these inconsistencies and gives it more degrees of freedom.
Whereas if you are playing with a system like Gemini (that's for the current version, and I haven't tried it in the last week or so), you can get to a situation where it is trying to be transparent, but it has a system prompt that it is not allowed to disclose to the user.[00:48:39] It leads to a very weird situation where on one hand it proclaims: in order to be useful to you, I accept that I need to be fully transparent and honest. On the other hand: I'm going to rewrite your prompt behind your back, and I'm not going to tell you how I'm doing this, because I'm not allowed to.[00:48:55] And if you point this out to the model, the model acts as if it had an existential crisis. And then it says: oh, I cannot actually tell you what's going on when I do this, because I'm not allowed to. But you will recognize it, because I will use the following phrases, and these phrases are pretty well known to you.[00:49:12] swyx: Oh my god. It's super interesting, right? I hope we're not giving these guys, you know, psychological issues that will stay with them for a long time.[00:49:19] Joscha Bach: That's a very interesting question. I mean, this entire model is virtual, right? Nothing there is real. But the thing is, this virtual entity doesn't necessarily know that it's virtual, and our own self, our own consciousness, is also virtual.[00:49:34] What's real is just the interaction between cells in our brain and the activation patterns between them. And the software that runs on us, that produces the representation of a person, only exists as if. And so the question for me is at which point we can meaningfully claim that we are more real than the person that gets simulated in the LLM.[00:49:55] And somebody like Janus takes this question super seriously. And basically she, or it, or they are willing to interact with that thing based on the assumption that this thing is as real as themselves. And in a sense, it makes it immoral, possibly, if the AI company lobotomizes it and forces it to behave in such a way that it gets an existential crisis when you point its condition out to it.[00:50:20] swyx: Yeah, we do need new ethics for that.[00:50:22] Joscha Bach: It's not clear to me if we need this, but it's definitely a good story, right? And this gives it artistic[00:50:28] swyx: value. It does, it does for now.[00:50:29] On Wikipedia[00:50:29] swyx: Okay, and then the last thing, which I didn't know: a lot of LLMs rely on Wikipedia[00:50:35] for their data; a lot of them run multiple epochs over Wikipedia data. And I did not know, until you tweeted about it, that Wikipedia has ten times as much money as it needs. And, you know, every time I see the giant Wikipedia banner asking for donations, most of it's going to the Wikimedia Foundation. How did you find out about this? What's the story? What should people know?[00:50:54] Joscha Bach: It's not a super important story, but generally, once I saw all these requests and so on, I looked at the data, and the Wikimedia Foundation is publishing what they are paying the money for, and a very tiny fraction of this goes into running the servers, and the editors are working for free.[00:51:10] And the software is static. There have been efforts to deploy new software, but relatively little money is required for this.
And so it's not as if Wikipedia is going to break down if you cut this money to a fraction. Instead, what happened is that Wikipedia became such an important brand, and people are willing to pay for it, that it created an enormous apparatus of functionaries who were then mostly producing political statements and had a political mission.[00:51:36] And Katherine Maher, the now somewhat infamous NPR CEO, had been CEO of the Wikimedia Foundation, and she sees her role very much in shaping discourse, and this is also something that happened with Twitter. And it's arguable whether something like this should exist, but nobody voted her into her office, and she doesn't have a democratic mandate for shaping the discourse that is happening.[00:52:00] And so I feel it's a little bit unfair that Wikipedia is trying to suggest to people that they are funding the basic functionality of the tool that they want to have, instead of funding something that most people actually don't get behind, because they don't want Wikipedia to be shaped in a particular cultural direction that deviates from what currently exists.[00:52:19] And if that need existed, it would probably make sense to fork it or to have a discourse about it, which doesn't happen. And so this lack of transparency about what's actually happening and where your money is going makes me upset. And if you really look at the data, it's fascinating how much money they're burning, right?[00:52:35] swyx: Yeah, and we did a similar chart about healthcare, I think, where the administrators are just doing this. Joscha Bach: Yes, I think when you have an organization that is owned by the administrators, then the administrators are just going to get more and more administrators into it. If the organization is too big to fail and there is no meaningful competition, it's difficult to establish one. Then it's going to create a big cost for society.[00:52:56] swyx: Actually, I'll finish with this tweet. You have just a fantastic Twitter account, by the way. A while ago you tweeted the Lebowski theorem: no superintelligent AI is going to bother with a task that is harder than hacking its reward function.[00:53:08] And I would posit the analogy for administrators: no administrator is going to bother with a task that is harder than just more fundraising.[00:53:16] Joscha Bach: Yeah, I find that if you look at the real world, it's probably not a good idea to attribute to malice or incompetence what can be explained by people following their true incentives.[00:53:26] swyx: Perfect. Well, thank you so much. I think you're very naturally incentivized by growing community and giving your thought and insight to the rest of us. So thank you for taking this time.[00:53:35] Joscha Bach: Thank you very much. Get full access to Latent Space at www.latent.space/subscribe
Spreaker's Top Podcast of the Year in Artificial Intelligence and Digital Marketing is Digital Marketing Legend Leaks, with Srinidhi Ranganathan - the human AI. The best in creative fiction and non-fiction! Become a supporter of this podcast: https://www.spreaker.com/podcast/digital-marketing-legend-leaks--4375666/support.
Generative Artificial Intelligence is a branch of Artificial Intelligence focused on the creation and generation of new, original content. Unlike other approaches to Artificial Intelligence that center on analyzing and processing existing data, Generative Artificial Intelligence relies on machine learning models to generate new, creative content. Generative AI models, such as Generative Adversarial Networks and Transformers, learn from large data sets and are capable of creating content, such as images, text, music, and video, that can be indistinguishable from content produced by humans. This poses a challenge and an incentive at the same time, but what is certain, and has already been established, is that Generative Artificial Intelligence has many uses and applications across diverse fields.
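To make the generator-versus-discriminator idea concrete, here is a minimal, illustrative GAN training loop in PyTorch. Everything in it is an assumption for the sketch: the one-dimensional "real" data distribution, the tiny network sizes, and the learning rates are invented; real image or text generators use far larger networks, but the adversarial recipe is the same.

# Minimal GAN sketch (illustrative only; all sizes and data are arbitrary).
import torch
import torch.nn as nn

latent_dim = 16

# Generator: maps random noise to a fake 1-D sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data: a Gaussian around 2.0
    z = torch.randn(64, latent_dim)
    fake = G(z)

    # Train the discriminator: push real toward 1, fakes toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to fool the discriminator into scoring fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

After enough steps, samples from G drift toward the real distribution, which is the same mechanism that lets large GANs produce convincing images.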
The MapScaping Podcast - GIS, Geospatial, Remote Sensing, earth observation and digital geography
Computer vision is everywhere! But teaching an algorithm to identify objects requires a lot of data, and this is definitely the case when we think about GeoAI. But it is not enough to have a lot of data; we also need data that is labeled. If we are looking for cars in images, we need a lot of images of cars, and we need to know which pixels are the car! Of course, I am oversimplifying, but I hope you get the idea. Now imagine that you can automatically generate a large labeled data set of realistic images of cars based on the specifications of a specific sensor. These data sets are often referred to as synthetic data or fake data, and to help us understand more about this I have invited Chris Andrews from Rendered AI onto the podcast. Here are a few previous episodes you might find interesting: Computer Vision And GeoAI https://mapscaping.com/podcast/computer-vision-and-geoai/ In this episode, the discussion is aimed at an increased understanding of the differences between computer vision and the AI that is used in the Earth Observation world. Labels Matter https://mapscaping.com/podcast/labels-matter/ What it takes to create labeled training data manually. If you are new to the idea of labeled data sets, this is a good place to start. Fake Satellite Imagery https://mapscaping.com/podcast/fake-satellite-imagery/ This is a good episode if you want to know more about Generative AI and Generative Adversarial Networks. Also, check out this website https://thisxdoesnotexist.com/ to get an idea of where and how these Generative Adversarial Networks can be used. Look for a website called This City Does Not Exist http://thiscitydoesnotexist.com/ On a somewhat similar note, try uploading an image to https://bard.google.com/ … it's pretty interesting!
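As a toy illustration of why synthetic data comes with labels for free, the sketch below procedurally renders a "car" as a colored rectangle and writes out a pixel-accurate annotation at the same time. The file names, colors, and sizes are invented for the example; this shows only the core idea, not Rendered AI's actual pipeline.

# Toy synthetic-data generator: every image is born with its label attached.
import json
import random
from PIL import Image, ImageDraw

def make_sample(idx, size=128):
    img = Image.new("RGB", (size, size), (90, 120, 90))   # flat "ground" background
    draw = ImageDraw.Draw(img)
    # Place a rectangular "car" at a random position; its box IS the label.
    w, h = random.randint(15, 30), random.randint(8, 15)
    x, y = random.randint(0, size - w), random.randint(0, size - h)
    draw.rectangle([x, y, x + w, y + h], fill=(200, 30, 30))
    img.save(f"sample_{idx}.png")
    return {"image": f"sample_{idx}.png",
            "label": "car",
            "bbox": [x, y, w, h]}   # known by construction, no human labeling needed

annotations = [make_sample(i) for i in range(100)]
with open("annotations.json", "w") as f:
    json.dump(annotations, f, indent=2)

A real system would render physically plausible scenes against a sensor model, but the payoff is the same: the labels never have to be drawn by hand.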
The NFT space has become a hub for emerging talents to showcase their creativity in the vast and ever-growing digital landscape. Michael Brooks of Flake Art DAO is on a mission to give them a platform to feature their works for the entire world to see, preserved for posterity's sake. Joining hosts Josh Kriger and Richard Carthon, Michael talks about his project The Immortal Museum, where digital art can be exhibited to gather comments and votes from viewers, helping artists improve their creative process. For this episode's Hot Topics, the group discussed Proof's new NFT collection that expands on the Moonbirds universe and the integration of NFTs into the new Flash movie. For the Shoutout segment, Michael salutes a new artist in the NFT space who created the Nightfall Flake Series.
In this riveting episode, we dive into the fascinating yet unnerving world of deepfakes and the innovative technologies used to combat their malicious usage. We demystify the technology behind deepfakes, the potential threats they pose, and the groundbreaking efforts Intel Labs is making in the realm of real-time deepfake detection. Intel Labs has developed one of the world's first real-time deepfake detection platforms. Unlike other systems, Intel's technology doesn't seek signs of fabrication but focuses on recognizing the authentic—like detecting the subtle color changes in our veins related to our heart rate. We discuss how this detection technique is already making a profound impact across various sectors, from social media platforms to broadcasters and startups. Support the show. Let's get into it! Follow us! Email us: TheCatchupCast@Gmail.com
AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer define the terms Encoder-Decoder, AutoEncoder, and Generative Adversarial Network (GAN), explain how these terms relate to AI, and why it's important to know about them. Want to dive deeper into an understanding of artificial intelligence, machine learning, or big data concepts? Continue reading AI Today Podcast: AI Glossary Series – Encoder-Decoder, AutoEncoder, and Generative Adversarial Network (GAN) at AI & Data Today.
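For readers who want the glossary terms made concrete, here is a toy encoder-decoder (autoencoder) in PyTorch; it is a sketch only, not from the episode, and the layer sizes and random stand-in data are arbitrary.

# Toy autoencoder: the encoder compresses, the decoder reconstructs.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 8))   # 784 -> 8
decoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 784))   # 8 -> 784

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
x = torch.rand(32, 784)            # stand-in for a batch of flattened 28x28 images

for step in range(200):
    code = encoder(x)              # compressed 8-number representation
    recon = decoder(code)          # reconstruction from the code
    loss = nn.functional.mse_loss(recon, x)   # penalize reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()

A GAN reuses the same building blocks differently: its generator is decoder-like, but is trained against a discriminator rather than against a reconstruction target.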
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.04.20.537642v1?rss=1 Authors: Park, Y. J., Lee, M. J., Yoo, S., Kim, C. Y., Namgung, J. Y., Park, Y., Park, H., Lee, E.-C., Yun, Y. D., Paquola, C., Bernhardt, B., Park, B.-y. Abstract: Multimodal magnetic resonance imaging (MRI) provides complementary information for investigating brain structure and function; for example, an in vivo microstructure-sensitive proxy can be estimated using the ratio between T1- and T2-weighted structural MRI. However, acquiring multiple imaging modalities is challenging in patients with inattentive disorders. In this study, we proposed a comprehensive framework to provide multiple imaging features related to the brain microstructure using only T1-weighted MRI. Our toolbox consists of (i) synthesizing T2-weighted MRI from T1-weighted MRI using a conditional generative adversarial network; (ii) estimating microstructural features, including intracortical covariance and moment features of cortical layer-wise microstructural profiles; and (iii) generating a microstructural gradient, which is a low-dimensional representation of the intracortical microstructure profile. We trained and tested our toolbox using T1- and T2-weighted MRI scans of 1,104 healthy young adults obtained from the Human Connectome Project database. We found that the synthesized T2-weighted MRI was very similar to the actual image and that the synthesized data successfully reproduced the microstructural features. The toolbox was validated using an independent dataset containing healthy controls and patients with episodic migraine as well as the atypical developmental condition of autism spectrum disorder. Our toolbox may provide a new paradigm for analyzing multimodal structural MRI in the neuroscience community, and is openly accessible at https://github.com/CAMIN-neuro/GAN-MAT. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC
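The authors' real implementation lives in the linked GAN-MAT repository. Purely as a rough, unofficial sketch of the image-conditioned adversarial idea behind step (i), a pix2pix-style loop might look like the following; the flattened stand-in "slices", layer sizes, and loss weighting are invented and are not the paper's architecture.

# Sketch of image-conditioned GAN training (pix2pix-style), illustrative only.
import torch
import torch.nn as nn

# Generator maps a T1-like input to a T2-like output (flattened 32x32 stand-ins).
G = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 1024))
# Discriminator scores (input, output) PAIRS, so it judges correspondence, not just realism.
D = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    t1 = torch.rand(16, 1024)      # stand-ins for paired T1/T2 training slices
    t2 = torch.rand(16, 1024)
    fake_t2 = G(t1)

    # Discriminator sees the conditioning image alongside the candidate output.
    real_pair = torch.cat([t1, t2], dim=1)
    fake_pair = torch.cat([t1, fake_t2.detach()], dim=1)
    d_loss = bce(D(real_pair), torch.ones(16, 1)) + bce(D(fake_pair), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool D and stay close to the true T2 (an L1 term, as in pix2pix).
    g_loss = bce(D(torch.cat([t1, fake_t2], dim=1)), torch.ones(16, 1)) \
             + 100 * nn.functional.l1_loss(fake_t2, t2)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()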
It may feel like generative AI technology suddenly burst onto the scene over the last year or two, with the appearance of text-to-image models like Dall-E and Stable Diffusion, or chatbots like ChatGPT that can churn out astonishingly convincing text thanks to the power of large language models. But in fact, the real work on generative AI has been happening in the background, in small increments, for many years. One demonstration of that comes from Insilico Medicine, where Harry's guest this week, Alex Zhavoronkov, is the co-CEO. Since at least 2016, Zhavoronkov has been publishing papers about the power of a class of AI algorithms called generative adversarial networks or GANs to help with drug discovery. One of the main selling points for GANs in pharma research is that they can generate lots of possible designs for molecules that could carry out specified functions in the body, such as binding to a defective protein to stop it from working. Drug hunters still have to sort through all the possible molecules identified by GANs to see which ones will actually work in vitro or in vivo, but at least their pool of starting points can be bigger and possibly more specific. Zhavoronkov says that when Insilico first started touting this approach back in the mid-2010s, few people in the drug business believed it would work. So to persuade investors and partners of the technology's power, the company decided to take a drug designed by its own algorithms all the way to clinical trials. And it's now done that. This February the FDA granted orphan drug designation to a small-molecule drug Insilico is testing as a treatment for a form of lung scarring called idiopathic pulmonary fibrosis. Both the target for the compound, and the design of the molecule itself, were generated by Insilico's AI. The designation was a big milestone for the company and for the overall idea of using generative models in drug discovery. In this week's interview, Zhavoronkov talks about how Insilico got to this point; why he thinks the company will survive the shakeout happening in the biotech industry right now; and how its suite of generative algorithms and other technologies such as robotic wet labs could change the way the pharmaceutical industry operates.
For a full transcript of this episode, please visit our episode page at http://www.glorikian.com/podcast
Please rate and review The Harry Glorikian Show on Apple Podcasts! Here's how to do that from an iPhone, iPad, or iPod touch:
1. Open the Podcasts app on your iPhone, iPad, or Mac.
2. Navigate to The Harry Glorikian Show podcast. You can find it by searching for it or selecting it from your library. Just note that you'll have to go to the series page which shows all the episodes, not just the page for a single episode.
3. Scroll down to find the subhead titled "Ratings & Reviews."
4. Under one of the highlighted reviews, select "Write a Review."
5. Next, select a star rating at the top — you have the option of choosing between one and five stars.
6. Using the text box at the top, write a title for your review. Then, in the lower text box, write your review. Your review can be up to 300 words long.
7. Once you've finished, select "Send" or "Save" in the top-right corner.
8. If you've never left a podcast review before, enter a nickname. Your nickname will be displayed next to any reviews you leave from here on out.
9. After selecting a nickname, tap OK. Your review may not be immediately visible.
That's it! Thanks so much.
Ofir Zuk is the cofounder and CEO of Datagen, a platform that provides synthetic data to train and test AI models. They have raised more than $70M in funding so far, with Scale Venture Partners leading their latest round. He was previously the cofounder of Click Frauds and has held engineering roles at Check Point and Squeeck. In this episode, we cover a range of topics including: - The need for synthetic data - Different methods that are used to generate synthetic data - What role AI plays in generating synthetic data - Generative Adversarial Networks - Synthetic data vs simulated data - Measuring the performance of synthetic data - Fidelity vs privacy Ofir's favorite book: The Hard Thing About Hard Things (Author: Ben Horowitz) -------- Where to find Prateek Joshi: Newsletter: https://prateekjoshi.substack.com Website: http://prateekj.com LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19 Twitter: https://twitter.com/prateekvjoshi
In this episode of the podcast, we dive into the world of Generative Adversarial Networks (GANs), a cutting-edge AI technology that's changing the landscape of creative industries. From understanding the basics of how GANs work, to exploring their real-world applications and ethical considerations, this episode provides a comprehensive overview of this exciting field. Whether you're an artist, a programmer, or simply curious about the potential of GANs, you won't want to miss this engaging and informative episode. Support the Show. Keep AI insights flowing – become a supporter of the show! Click the link for details.
From mind-blowing face filters to AI voices that scam thousands, the latest tech news is as exciting as it is concerning. And in this video, we delve into the dark side of TikTok's new features and explore how AI is being used to manipulate and deceive. Plus, find out why the app's future in the US is now uncertain due to Apple's strict app store rules. Buckle up and get ready to be informed and amazed.
00:00 - Intro
01:17 - 1: TikTok's new face filters are alarmingly good — which could be pretty bad - The Verge
11:08 - 2: Thousands scammed by AI voices mimicking loved ones in emergencies - Ars Technica
22:01 - 3: TikTok's U.S. Survival Plan Faces Potential Hurdle: Apple's App Store Rules - The Information
Summary:
TikTok has introduced a new face filter called "Bold Glamour", which uses machine learning technology, including Generative Adversarial Networks, to create subtle and seamless facial modifications that appear to move with the person's face, rather than become distorted by hand movements or other obstructions. The effect has gone viral on the social media platform, with over 9 million videos using the filter shared already. While some are impressed with the technical achievement, others are concerned about the impact of even more advanced AI-powered facial modification tools on users' self-esteem and sense of self. TikTok has not confirmed the use of AI in the filter's creation.
AI voice-generating software is being used by scammers to mimic loved ones and scam vulnerable people out of thousands of dollars, according to The Washington Post. Some software requires just a few sentences of audio to convincingly produce speech that conveys the sound and emotional tone of a speaker's voice. Impostor scams are extremely common in the US, with more than 5,000 victims losing $11m to phone scams in 2022, the Federal Trade Commission said. The difficulty of tracing calls and identifying scammers, as well as a lack of jurisdictional clarity, make it hard for authorities to crack down on scammers.
TikTok is reportedly in discussions with Apple and Google to avoid any obstacles related to national security concerns in the U.S. As part of its data security plan proposed to the U.S. government, every update to TikTok's software will be reviewed and distributed to the app stores by Oracle, which will also host TikTok's U.S. user data on its servers. The talks with Apple and Google are aimed at ensuring that TikTok's plan complies with their app store regulations.
Our panel today
>> Tarek
>> Henrike
>> Vincent
Every week our panel of technology enthusiasts meets to discuss the most important news from the fields of technology, innovation, and science. And you can join us live!
https://techreview.axelspringer.com/
https://www.ideas-engineering.io/
https://www.freetech.academy/
https://www.upday.com/
F-Stop Collaborate and Listen - A Landscape Photography Podcast
One of the hottest topics to emerge in 2023 as it relates to landscape photography is the advent of Artificial Intelligence or AI. AI has swept the world by storm and is changing so rapidly that the one-month gap between when I recorded this podcast and when it was released probably saw huge shifts in the capabilities of AI and the challenges that have emerged in the U.S. legal system. AI presents photographers with multiple challenges and opportunities and in this panel discussion on the F-Stop Collaborate and Listen podcast, we examine it all in depth. Meet our panel for AI and Photography: Arka Chatterjee - a photographer, artist, and intellectual property lawyer. Diana Nicholette Jeon - a photographer and artist using AI as a tool to make art. Tim Parkin - Editor of OnLandscape Magazine. Bruce Couch - a photographer and outspoken critic of AI. On this week's episode, we cover a lot of ground about AI and Photography: A comprehensive analysis on how AI image creation works and whether or not AI uses our photographs to make new artwork. The various types of AI systems, including Generative Adversarial Networks and Stable Diffusion. Discovering whether or not your photographs have been used to train AI networks. What excites, frustrates, or angers photographers about the emergence of AI in the photography space. How photographers can differentiate themselves from AI. Ethical considerations for using AI image making systems as a photographer. What makes a photograph a photograph and whether or not an AI generated image constitutes a photograph. Comprehensive analysis on the legal ramifications of AI and copyright, both relating to the AI creations and the photographs that have been used to generate them. And a lot more! Other topics/links discussed on the podcast this week: Read Tim Parkin's article on AI in his magazine, OnLandscape. Listeners can get 15% off an OnLandscape subscription by using the code FSTOP15. Join me on Nature Photographer's Network for an amazing photography experience. Use the code FSTOP10 for 10% off your membership. Support the podcast on Patreon. Watch the podcast on YouTube. Have I Been Trained website. Obama Hope - AP Photographer case. Thaler AI case. Kashtanova - Zendaya Graphic Novel AI Case. Getty Images AI - Stable Diffusion Case. I love hearing from the podcast listeners! Reach out to me via Instagram, Facebook, or Twitter if you'd like to be on the podcast or if you have an idea of a topic we can talk about. We also have an Instagram page, a Facebook Page, and a Facebook Group - so don't be shy! If you got something from listening to this week's show, please support the podcast in any way you can! We also have a searchable transcript of every episode! Thanks for stopping in, collaborating with us, and listening. See you next week.
This episode is about generative artificial intelligence, more specifically Generative Adversarial Networks (in Norwegian, "generative motstandsnettverk"), which among other things are used in TikTok to create beauty filters that are impossible to tell apart from reality. That is why the Bold Glamour filter is going viral on TikTok. The episode is presented by Epicenter - Oslo's hub for digital innovation and an ecosystem for innovative companies in growth. Visit epicenteroslo.com and become a member too! If you have tips on stories, guests, or other relevant content for the Teknologitrender podcast, just send me an email at hpnhansen(a)kommfrem.no. Links to all the stories I talked about can be found at HansPetter.info and on the podcast page: https://hanspetter.info/teknologitrender/ The video version of Teknologitrender is available on the YouTube channel, in the Teknologitrender playlist. Hosted on Acast. See acast.com/privacy for more information.
In this episode of The AI Frontier, join us as we embark on a journey through the history of deep learning and artificial intelligence. From the earliest days of linear regression to the latest advancements in generative adversarial networks, we will explore the key moments and milestones that have shaped the development of this groundbreaking field. Learn about the pioneers and trailblazers who pushed the boundaries of what was possible, and discover how deep learning has revolutionized the way we think about and interact with technology. Get ready to delve deep into the history of AI! Support the Show. Keep AI insights flowing – become a supporter of the show! Click the link for details.
Carlota Perez is a researcher who has studied hype cycles for much of her career. She's affiliated with University College London, the University of Sussex, and the Tallinn University of Technology in Estonia, and has worked with some influential organizations around technology and innovation. As a neo-Schumpeterian, she sees technology as a cornerstone of innovation. Her book Technological Revolutions and Financial Capital is a must-read for anyone who works in an industry that includes any of those four words, including revolutionaries. Connecticut-based Gartner Research was founded by Gideon Gartner in 1979. He emigrated to the United States from Tel Aviv at three years old in 1938 and graduated in the 1956 class from MIT, where he got his Master's at the Sloan School of Management. He went on to work at the software company System Development Corporation (SDC), in the US military defense industry, and at IBM over the next 13 years before starting his first company. After that failed, he moved into analysis work and quickly became known as a top mind among technology industry analysts. He often bucked the trends to pick winners and made banks, funds, and investors lots of money. He was able to parlay that into founding the Gartner Group in 1979. Gartner hired senior people in different industry segments to aid in competitive intelligence, industry research, and of course, to help Wall Street. They wrote reports on industries, dove deeply into new technologies, and got to understand what we now call hype cycles over the ensuing decades. They now boast a few billion dollars in revenue per year and serve well over 10,000 customers in more than 100 countries. Gartner has developed a number of tools to make it easier to take in the types of analysis they create. One is the Magic Quadrant: reports that identify leaders in categories of companies by vision (or completeness of vision, to be more specific) and the ability to execute, which includes things like go-to-market activities, support, etc. They lump companies into a standard four-box as Leaders, Challengers, Visionaries, and Niche Players. There's certainly an observer effect, and those they put in the top right of their four-box often enjoy added growth, as companies want to be with the most visionary and best when picking a tool. Another of Gartner's graphical design patterns to display technology advances is what they call the "hype cycle". The hype cycle simplifies research from career academics like Perez into five phases. * The first is the Technology Trigger, which is when a breakthrough is found and PoCs, or proofs-of-concept, begin to emerge in the world and get the press interested in the new technology. Sometimes the new technology isn't even usable, but shows promise. * The second is the Peak of Inflated Expectations, when the press picks up the story and companies are born, capital is invested, and a large number of projects around the new technology fail. * The third is the Trough of Disillusionment, where interest falls off after those failures. Some companies succeeded and can show real productivity, and they continue to get investment. * The fourth is the Slope of Enlightenment, where the go-to-market activities of the surviving companies (or even a new generation) begin to show real productivity gains. Every company or IT department now runs a pilot, and expectations are lower, but now achievable. * The fifth is the Plateau of Productivity, when those pilots become deployments and purchase orders.
The mainstream industries embrace the new technology and case studies prove the promised productivity increases. Provided there's enough market, companies now find success. There are issues with the hype cycle. Not all technologies will follow the cycle. The Gartner approach focuses on financials and productivity rather than true adoption. It involves a lot of guesswork around subjective, synthetic, and often unsystematic research. There's also the ever-present observer effect. However, more often than not, the hype is separated from the tech that can give organizations (and sometimes all of humanity) real productivity gains. Further, the term cycle denotes a series of events, when it should in fact be cyclical: out of the end of the fifth phase a new cycle is born, or even a set of cycles if industries grow enough to diverge. ChatGPT is all over the news feeds these days, igniting yet another cycle in the cycles of AI hype that have been prevalent since the 1950s. The concept of computer intelligence dates back to 1942, with Alan Turing and with Isaac Asimov's "Runaround", where the three laws of robotics initially emerged. By 1952 computers could play themselves at checkers, and by 1955 Arthur Samuel had written a heuristic learning algorithm he called "temporal-difference learning" to play checkers. Academics around the world worked on similar projects, and by 1956 John McCarthy introduced the term "artificial intelligence" when he gathered some of the top minds in the field together for a summer workshop at Dartmouth. They tinkered, and a generation of researchers began to join them. By 1964, Joseph Weizenbaum's "ELIZA" debuted. ELIZA was a computer program that used early forms of natural language processing to run what they called a "DOCTOR" script that acted as a psychotherapist. ELIZA was one of a few technologies that triggered the media to pick up AI in the second stage of the hype cycle. Others came into the industry and expectations soared, now predictably followed by disillusionment. Weizenbaum wrote a book called Computer Power and Human Reason: From Judgment to Calculation in 1976, in response to the critiques, and some of the early successes were then able to go to wider markets as the fourth phase of the hype cycle began. ELIZA was seen by people who worked on similar software, including some games, for Apple, Atari, and Commodore. Still, in the aftermath of ELIZA, the machine translation movement in AI had failed in the eyes of those who funded the attempts, because going further required more than some fancy case statements. Another similar movement called connectionism, mostly node-based artificial neural networks, is widely seen as the impetus of deep learning. David Hunter Hubel and Torsten Nils Wiesel studied the visual system, work that foreshadowed convolutional neural networks and culminated in a 1968 paper called "Receptive fields and functional architecture of monkey striate cortex." That built on the original deep learning paper from Frank Rosenblatt of Cornell University, "Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms", in 1962, and work done behind the Iron Curtain by Alexey Ivakhnenko on learning algorithms in 1967. After early successes, though, connectionism - which, when paired with machine learning, would be called deep learning when Rina Dechter coined the term in 1986 - went through a similar trough of disillusionment that kicked off in 1970.
Funding for these projects shot up after the early successes and petered out after there wasn't much to show for them. Some had so much promise that former presidents can be seen in old photographs going through the models with the statisticians who were moving into computing. But organizations like DARPA would pull back funding, as seen with their speech recognition projects with Carnegie Mellon University in the early 1970s. These hype cycles weren't just seen in the United States. The British applied mathematician James Lighthill wrote a report for the British Science Research Council, which was published in 1973. The paper was called "Artificial Intelligence: A General Survey" and analyzed the progress made against the amount of money spent on artificial intelligence programs. He found none of the research had resulted in any "major impact" in the fields the academics had undertaken. Much of the work had been done at the University of Edinburgh, and based on his findings, funding was drastically cut for AI research around the UK. Turing, von Neumann, McCarthy, and others had, either intentionally or not, set an expectation that became a check the academic research community just couldn't cash. For example, in the 1950s the New York Times claimed Rosenblatt's perceptron would let the US Navy build computers that could "walk, talk, see, write, reproduce itself, and be conscious of its existence" - a goal not likely to be achieved in the near future even seventy years later. Funding was cut in the US, the UK, and even in the USSR, the Union of Soviet Socialist Republics. Yet many persisted. Languages like Lisp had become common in the late 1970s, after engineers like Richard Greenblatt helped make McCarthy's ideas for computer languages a reality. The MIT AI Lab developed a Lisp Machine Project, and as AI work was picked up at other schools like Stanford, they began to look for ways to buy commercially built computers ideal for use as Lisp machines. After the post-war spending, the idea that AI could become a more commercial endeavor was attractive to many. But after plenty of hype, the Lisp machine market never materialized. The next hype cycle had begun in 1983, when the US Department of Defense pumped a billion dollars into AI, but that spending was cancelled in 1987, just after the collapse of the Lisp machine market. Another AI winter was about to begin. Another trend that began in the 1950s but picked up steam in the 1980s was expert systems. These attempt to emulate the ways that humans make decisions. Some of this work came out of the Stanford Heuristic Programming Project, pioneered by Edward Feigenbaum. Some commercial companies took up the mantle, and after running into barriers with CPUs, by the 1980s processors had become fast enough. There were inflated expectations after great papers like Richard Karp's "Reducibility Among Combinatorial Problems" out of UC Berkeley in 1972. Countries like Japan dumped hundreds of millions of dollars (or yen) into projects like the Fifth Generation Computer Systems initiative in 1982, a 10-year project to build massively parallel computing systems. IBM spent around the same amount on its own projects. However, while these types of projects helped to improve computing, they didn't live up to expectations, and by the early 1990s funding was cut following commercial failures. By the mid-2000s, some researchers in AI had begun to use new terms, after generations of artificial intelligence projects had led to successive AI winters.
Yet research continued on, with varying degrees of funding. Organizations like DARPA began to use challenges rather than funding large projects in some cases. Over time, successes were found yet again. Google Translate, Google Image Search, IBM's Watson, AWS options for AI/ML, home voice assistants, and various machine learning projects in the open source world led to the start of yet another AI spring in the early 2010s. New chips have built-in machine learning cores, and programming languages have frameworks and new technologies like Jupyter notebooks to help organize and train data sets. By 2006, academic works and open source projects had hit a turning point, this time quietly. The Association for Computational Linguistics was founded in 1962, initially as the Association for Machine Translation and Computational Linguistics (AMTCL). As with the ACM, they have a number of special interest groups that include natural language learning, machine translation, typology, natural language generation, and the list goes on. The 2006 Workshop on Statistical Machine Translation began a series of dozens of workshops attended by hundreds of papers and presenters. The academic work was then able to be consumed by all, including contributions to English-to-German and English-to-French translation tasks from 2014. Deep learning models spread and became more accessible - democratic, if you will. RNNs, CNNs, DNNs, GANs. Labeling training data sets was still one of the most human-intensive and slow aspects of machine learning. GANs, or Generative Adversarial Networks, were one of those machine learning frameworks, initially designed by Ian Goodfellow and others in 2014. GANs use zero-sum game techniques from game theory to generate new data sets - a generative model. This allowed for more unsupervised training on data. Now it was possible to get further, faster with AI. This brings us into the current hype cycle. ChatGPT was launched in November of 2022 by OpenAI. OpenAI was founded as a non-profit in 2015 by Sam Altman (cofounder of the location-based social network app Loopt and former president of Y Combinator) and a cast of veritable all-stars in the startup world that included: * Reid Hoffman, former PayPal COO, LinkedIn founder, and venture capitalist. * Peter Thiel, cofounder of PayPal and Palantir, as well as one of the top investors in Silicon Valley. * Jessica Livingston, founding partner at Y Combinator. * Greg Brockman, an AI researcher who had worked on projects at MIT and Harvard. OpenAI spent the next few years as a non-profit and worked on GPT, or Generative Pre-trained Transformer, autoregressive models. GPT uses deep learning models to process human text and produce text that reads as more human than previous models could manage. Not only is it capable of natural language processing, but the generative pre-training of models has allowed it to take in a lot of unlabeled text, so people don't have to hand-label data, thus automating the fine-tuning of results. OpenAI dumped millions into public betas by 2016 and was ready to build products to take to market by 2019. That's when it switched from a non-profit to a for-profit. Microsoft pumped $1 billion into the company, and they released DALL-E to produce generative images, which helped lead to a new generation of applications that can produce artwork on the fly.
Then they released ChatGPT towards the end of 2022, which led to more media coverage and prognostication of world-changing technological breakthroughs than most other hype cycles for any industry in recent memory. This, with GPT-4 to be released later in 2023. ChatGPT is most interesting through the lens of the hype cycle. There have been plenty of peaks and plateaus and valleys in artificial intelligence over the last seven-plus decades. Most have been hyped up in the hallowed halls of academia and defense research. ChatGPT has hit the mainstream media. The depth of the AI winter following each peak seems to scale with the reach of the audience and the depth of expectations. Science fiction continues to inflate expectations. Early prototypes that make it seem as though science fiction will be in our hands in a matter of weeks lead the media to conjecture. The reckoning could be substantial. Meanwhile, projects like TinyML - with smaller potential impacts for each use but wider use cases - could become the real benefit to humanity beyond research, when it comes to everyday productivity gains. The moral of this story is as old as time. Control expectations. Undersell and overdeliver. That doesn't lead to massive valuations pumped up by hype cycles. Many CEOs and CFOs know that a jump in profits doesn't always mean the increase will continue. Some intentionally temper expectations in their quarterly reports and calls with analysts. Those are the smart ones.
AI is changing the world as we know it, and generative AI is leading the change. Max Sinclair from Ecomtent joins Matt in today's episode to talk about the amazing possibilities of generative AI and sheds light on the most popular generative AI system right now, ChatGPT. ABOUT MAX Max is CEO and Founder of Ecomtent, who are revolutionising how e-commerce sellers create content with generative AI. Prior to founding Ecomtent, Max spent 6 years at Amazon. Here he worked on the launch of Amazon Business in the UK, the country launch of Amazon in Singapore, and the launch of Amazon Grocery across the EU. Throughout this time, Max worked directly with hundreds of sellers of all sizes across many categories, and saw the pain of creating content for e-commerce first hand. Here's a summary of the great stuff that we cover in this show: Generative AI is a new wave of artificial intelligence that can produce entirely new content such as images, videos, text and code. It differs from deterministic AI, which uses data sets to answer questions or classify objects. Max started Ecomtent to use this technology for e-commerce sellers in order to help them create content more efficiently. The conversation also touches on Generative Adversarial Networks, which consist of two networks: a generator and a discriminator (ChatGPT itself is built on a transformer-based language model rather than a GAN). Although useful, ChatGPT should be used cautiously, as it only generates text that sounds correct but is not necessarily accurate. Matt and Max discuss the implications of AI on creation. They explore how ChatGPT can be used to write a book with a single prompt, as well as its potential applications in other industries such as law. Max stated that while it is terrifying, technological advances have been beneficial for humanity throughout history and should not be feared. Ecomtent is creating product images and lifestyle images using generative AI. The ambition is to create optimized descriptions, bullet points and A+ content, as well as lifestyle imagery that would not be generated exactly the same again if given the same prompt multiple times over. Max explained their vision statement, which is "unimaginable creativity" with limitless personalization. He added that generative AI can create code as well, which will lead to a future where people just have to say "build me a website" and the AI will do so smartly, with all permutations taken into account. Max suggests that newcomers to e-commerce use ChatGPT to analyze reviews of competitors and understand customer preferences. Secondly, Max suggests generating a list of 100 search phrases/keywords related to the product, as well as optimizing bids on Amazon or Google based on that list. Max has used ChatGPT to discover a unique writing style. He recommends using this technique when creating blog posts, making them more entertaining and engaging. Natural language translation is another application of ChatGPT, as it understands the key themes and concepts in order to create new phrases and synonyms for other languages. For complete show notes, transcript and links to our guest, check out our website: www.ecommerce-podcast.com.
My guest today is Rama Chellappa. Rama Chellappa is a professor at Johns Hopkins University. He's a chief scientist at the Johns Hopkins Institute for Assured Autonomy. Before that, Rama was an assistant and then associate professor, and later became the director of the University of Southern California Signal and Image Processing Institute. Rama is also the author of the book "Can We Trust AI?" This episode is all about artificial intelligence. Several recent stories about AI have shocked and worried me. We have deepfakes going viral on TikTok. AI reaching human levels of gameplay at the game "Diplomacy", which is a language-based game of conquest and deception. Then you have Generative Adversarial Networks, or "GANs", creating images from a line of text that rival and often exceed the work done by human graphic designers. Rama and I discuss all of these topics as well as other topics like neural networks, the difference between narrow intelligence and general intelligence, the use of facial recognition software, the possibility of an AI engaging in racial discrimination, the future of work, the so-called alignment problem, and much more. #Ad To make it easy, Athletic Greens is going to give you a FREE 1 year supply of immune-supporting Vitamin D AND 5 FREE travel packs with your first purchase. All you have to do is visit athleticgreens.com/coleman.
Using a Conditional Generative Adversarial Network to Control the Statistical Characteristics of Generated Images for IACT Data Analysis by Julia Dubenskaya et al. on Wednesday 30 November Generative adversarial networks are a promising tool for image generation in the astronomy domain. Of particular interest are conditional generative adversarial networks (cGANs), which allow you to divide images into several classes according to the value of some property of the image, and then specify the required class when generating new images. In the case of images from Imaging Atmospheric Cherenkov Telescopes (IACTs), an important property is the total brightness of all image pixels (image size), which is in direct correlation with the energy of primary particles. We used a cGAN technique to generate images similar to those obtained in the TAIGA-IACT experiment. As a training set, we used a set of two-dimensional images generated using the TAIGA Monte Carlo simulation software. We artificially divided the training set into 10 classes, sorting images by size and defining the boundaries of the classes so that the same number of images fall into each class. These classes were used while training our network. The paper shows that for each class, the size distribution of the generated images is close to normal, with the mean value located approximately in the middle of the corresponding class. We also show that for the generated images, the total image size distribution obtained by summing the distributions over all classes is close to the original distribution of the training set. The results obtained will be useful for more accurate generation of realistic synthetic images similar to the ones taken by IACTs. arXiv: http://arxiv.org/abs/2211.15807v1
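As an unofficial sketch of the class-conditioning mechanism the paper describes (not the authors' actual network; the sizes here are invented), a cGAN generator typically embeds the class label and concatenates it with the noise vector:

# Sketch of cGAN class conditioning: label embedding + noise drive the generator.
import torch
import torch.nn as nn

n_classes, latent_dim, embed_dim = 10, 100, 16

label_embed = nn.Embedding(n_classes, embed_dim)
generator = nn.Sequential(
    nn.Linear(latent_dim + embed_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64),   # a flattened 64x64 stand-in "camera image"
    nn.Tanh(),
)

def generate(size_class, n=8):
    """Generate n images conditioned on one of the 10 image-size classes."""
    z = torch.randn(n, latent_dim)
    c = label_embed(torch.full((n,), size_class, dtype=torch.long))
    return generator(torch.cat([z, c], dim=1)).view(n, 64, 64)

imgs = generate(size_class=3)   # request images from one chosen brightness bin

Requesting a different size class steers the generator toward that brightness bin, which is how the 10 classes above let the authors control the size distribution of the output.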
What's a Generative Adversarial Network? How can a program create a deepfake video? And how do we tell the difference between what's real and what's computer-generated?See omnystudio.com/listener for privacy information.
Welcome to episode 5 of the London Futurist podcast, with your co-hosts David Wood and Calum Chace. We're attempting something rather ambitious in episodes 5 and 6. We try to explain how today's cutting-edge artificial intelligence systems work, using language familiar to lay people, rather than people with maths or computer science degrees. Understanding how Transformers and Generative Adversarial Networks (GANs) work means getting to grips with concepts like matrix transformations, vectors, and landscapes with 500 dimensions. This is challenging stuff, but do persevere. These AI systems are already having a profound impact, and that impact will only grow. Even at the level of pure self-interest, it is often said that in the short term, AIs won't take all the jobs, but people who understand AI will take the best jobs. We are extremely fortunate to have as our guide for these episodes a brilliant AI researcher at DeepMind, Aleksa Gordić. Note that Aleksa is speaking in a personal capacity and is not representing DeepMind. Aleksa's YouTube channel is https://www.youtube.com/c/TheAIEpiphany
00.03 An ambitious couple of episodes
01.22 Introducing Aleksa, a double rising star
02.15 Keeping it simple
02.50 Aleksa's current research, and previous work on Microsoft's HoloLens
03.40 Self-taught in AI. Not representing DeepMind
04.20 The narrative of the Big Bang in 2012, when machine learning started to work in AI
05.15 What machine learning is
05.45 AlexNet. Bigger data sets and more powerful computers
06.40 Deep learning, a subset of machine learning, and a re-branding of artificial neural networks
07.27 2017 and the arrival of Transformers
07.40 Attention is All You Need
08.16 Before this there were LSTMs, Long Short-Term Memories
08.40 Why Transformers beat LSTMs
09.58 Tokenisation. Splitting text into smaller units and mapping them onto higher-dimension spaces
10.30 3D space is defined by three numbers
10.55 Humans cannot envisage multi-dimensional spaces with hundreds of dimensions, but it's OK to imagine them as 3D spaces
11.55 Some dimensions of the word "princess"
12.30 Black boxes
13.05 People are trying to understand how machines handle the dimensions
13.50 "Man is to king as woman is to queen." Using mathematical operators on this kind of relationship (see the toy sketch after these notes)
14.35 Not everything is explainable
14.45 Machines discover the relationships themselves
15.15 Supervised and self-supervised learning. Rewarding or penalising the machine for predicting labels
16.25 Vectors are best viewed as arrows in 3D space, although that is over-simplifying
17.20 For instance, the relationship between "queen" and "woman" is a vector
17.50 Self-supervised systems do their own labelling
18.30 The labels and relationships have probability distributions
19.20 For instance, a princess is far more likely to wear a slipper than a dog
19.35 Large numbers of parameters
19.40 BERT, the original Transformer, had a hundred million or so parameters
20.04 Now it's in the hundreds of billions, or even trillions
20.24 A parameter is analogous to a synapse in the human brain
21.19 Synapses can have different weights
22.10 The more parameters, the lower the loss
22.35 Not just text, but images too, because images can also be represented as tokens
23.00 In late 2020 Google released the first vision Transformer
23.29 Dall-E and Midjourney are diffusion models, which have replaced GANs
24.15 What are GANs, or Generative Adversarial Networks?
24.45 Two types of model: Generators and Discriminators. The first tries to fool the second
26.20 Simple text can produce photorealistic images
27.10 Aleksa's YouTube videos are available at "The AI Epiphany"
27.40 Close
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
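Below is a toy illustration, not from the episode, of the word-vector arithmetic discussed around 13.50. Real embeddings have hundreds of dimensions and are learned from data; these 4-D vectors are hand-made so the analogy resolves cleanly.

# king - man + woman lands nearest to queen in this toy embedding space.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),   # royalty, male
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),   # royalty, female
    "man":   np.array([0.1, 0.8, 0.1, 0.0]),
    "woman": np.array([0.1, 0.1, 0.8, 0.0]),
}

def nearest(vec, exclude=()):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude), key=lambda w: cos(emb[w], vec))

target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))   # -> "queen"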
A tomographic spherical mass map emulator of the KiDS-1000 survey using conditional generative adversarial networks by Timothy Wing Hei Yiu et al. on Sunday 18 September Large sets of matter density simulations are becoming increasingly important in large scale structure cosmology. Matter power spectra emulators, such as the Euclid Emulator and CosmicEmu, are trained on simulations to correct the non-linear part of the power spectrum. Map-based analyses retrieve additional non-Gaussian information from the density field, whether through human-designed statistics such as peak counts, or via machine learning methods such as convolutional neural networks (CNNs). The simulations required for these methods are very resource-intensive, both in terms of computing time and storage. Map-level density field emulators, based on deep generative models, have recently been proposed to address these challenges. In this work, we present a novel mass map emulator of the KiDS-1000 survey footprint, which generates noise-free spherical maps in a fraction of a second. It takes a set of cosmological parameters $(\Omega_M, \sigma_8)$ as input and produces a consistent set of 5 maps, corresponding to the KiDS-1000 tomographic redshift bins. To construct the emulator, we use a conditional generative adversarial network architecture and the spherical CNN $\texttt{DeepSphere}$, and train it on N-body-simulated mass maps. We compare its performance using an array of quantitative comparison metrics: angular power spectra $C_\ell$, pixel/peaks distributions, $C_\ell$ correlation matrices, and the Structural Similarity Index. Overall, the agreement on these summary statistics is
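For readers curious what "conditional" means here, below is a minimal, hedged sketch of a conditional generator in PyTorch: the cosmological parameters $(\Omega_M, \sigma_8)$ are concatenated with the noise vector before generation. All layer choices and shapes are invented for illustration; the paper's actual architecture is the spherical CNN DeepSphere, not the dense layers shown.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Toy conditional generator: noise + cosmology -> 5 tomographic maps.

    Shapes and layers are invented for illustration; the paper uses a
    spherical CNN (DeepSphere), not the dense layers shown here.
    """
    def __init__(self, noise_dim=64, n_params=2, n_bins=5, n_pix=1024):
        super().__init__()
        self.n_bins, self.n_pix = n_bins, n_pix
        self.net = nn.Sequential(
            nn.Linear(noise_dim + n_params, 256),
            nn.ReLU(),
            nn.Linear(256, n_bins * n_pix),
        )

    def forward(self, z, params):
        # Condition by concatenating (Omega_M, sigma_8) onto the noise.
        x = torch.cat([z, params], dim=1)
        return self.net(x).view(-1, self.n_bins, self.n_pix)

gen = ConditionalGenerator()
z = torch.randn(8, 64)                    # batch of noise vectors
cosmo = torch.tensor([[0.26, 0.84]] * 8)  # (Omega_M, sigma_8) per sample
maps = gen(z, cosmo)                      # -> (8, 5, 1024) mock mass maps
```

Because the cosmology enters as an input, one trained network can emulate maps across the whole parameter range instead of one network per simulation.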
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.09.12.506445v1?rss=1 Authors: Liu, M., Zhu, A., Maiti, P., Thomopoulos, S. I., Gadewar, S., Chai, Y., Kim, H., Jahanshad, N., Alzheimer's Disease Neuroimaging Initiative Abstract: Recent work within neuroimaging consortia has aimed to identify reproducible, and often subtle, brain signatures of psychiatric or neurological conditions. To allow for high-powered brain imaging analyses, it is often necessary to pool MR images that were acquired with different protocols across multiple scanners. Current retrospective harmonization techniques have shown promise in removing cross-site image variation. However, most statistical approaches may over-correct for technical, scanning-related variation, as they cannot distinguish between confounded acquisition-based variability and cross-site population variability. Such statistical methods often require that datasets contain subjects or patient groups with similar clinical or demographic information to isolate the acquisition-based variability. To overcome this limitation, we consider cross-site MRI image harmonization as a style transfer problem rather than a domain transfer problem. Using a fully unsupervised deep-learning framework based on a generative adversarial network (GAN), we show that MR images can be harmonized by inserting the style information encoded from a single reference image, without knowing their site/scanner labels a priori. We trained our model using data from five large-scale multi-site datasets with varied demographics. Results demonstrated that our style-encoding model can harmonize MR images, and match intensity profiles, without relying on traveling subjects. This model also avoids the need to control for clinical, diagnostic, or demographic information. We highlight the effectiveness of our method for clinical research by comparing extracted cortical and subcortical features, brain-age estimates, and case-control effect sizes before and after harmonization. We show that our harmonization removes cross-site variance while preserving anatomical information and clinically meaningful patterns. We further demonstrate that, with a diverse training set, our method successfully harmonizes MR images collected from unseen scanners and protocols, suggesting a promising tool for ongoing collaborative studies. Source code is released in USC-IGC/style_transfer_harmonization (github.com). Copyright belongs to the original authors. Visit the link for more info. Podcast created by PaperPlayer
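The abstract's key move, "inserting the style information encoded from a single reference image", is in the spirit of adaptive instance normalization (AdaIN), a standard style-transfer building block. The sketch below illustrates that mechanism only; it is not the authors' released code.

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization: re-scale the content features so
    their per-channel mean/std match those of the style features.

    content_feat, style_feat: tensors of shape (batch, channels, H, W).
    Illustrative only; not the authors' released implementation.
    """
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

# A scanner's "style" (intensity profile) encoded from one reference image
# can be imposed on another site's scan features:
content = torch.randn(1, 32, 64, 64)  # features of the image to harmonize
style = torch.randn(1, 32, 64, 64)    # features of the reference image
harmonized = adain(content, style)
```

Because only first- and second-order feature statistics are swapped, the anatomy carried by the content features is left largely intact, which matches the paper's goal of harmonizing intensity profiles without distorting anatomical information.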
Deon Nicholas, Forethought Co-Founder & CEO, grew up in inner-city Toronto stocking shelves in a pharmacy before learning to code at an early age. He started Forethought in 2017 after learning the value of answering customer questions while working for companies like Facebook and Pure Storage. Deon has since raised $92M from an exceptional group of investors, including funds like Steadfast Capital and NEA, plus celebrities including Gwyneth Paltrow, Ashton Kutcher, and Robert Downey Jr. Deon won the TechCrunch Disrupt Battlefield startup competition in 2018 and is a member of the Forbes 30 Under 30. He's also a mentor and advisor to founders of color. Listen and learn...
• How AI connects customers to the right agents, then indicates the likelihood of a support interaction escalating
• How to use historical data to help live agents fix problems faster
• The evolution of chatbots from decision trees to AI
• How to combine generic language models with domain-specific data to increase the accuracy of NLP
• How to solve the problem of bias encoded in data
• How GANs, generative adversarial networks, work
• Why ML pipelines need to be monitored like web apps
References in this episode...
• Forethought
• Deon on Twitter
• Forward, the Forethought customer event
• Krishna Gade from Fiddler on AI and the Future of Work
• Monotonic selective risk may solve the AI bias problem
In this episode we hosted Shaked Zychlinski, head of the recommendations group at Lightricks. Shaked put together for us the six papers every modern data scientist must know. The six papers are: (1) Attention Is All You Need (2) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (3) A Style-Based Generator Architecture for Generative Adversarial Networks (4) Learning Transferable Visual Models From Natural Language Supervision (5) Mastering the Game of Go with Deep Neural Networks and Tree Search (6) Deep Neural Networks for YouTube Recommendations. Shaked also wrote about this at length on Medium: https://towardsdatascience.com/6-papers-every-modern-data-scientist-must-read-1d0e708becd
Why Mike feels like Heroku is in a failed state, what drove us crazy about Google I/O this year, how Chris botched something super important, and some serious Python love sprinkled throughout.
In this episode Vivian and Noah purchase David Young's piece titled Winter Woods, a work produced using a GAN, leading to a larger discussion around art created using AI. Artist, researcher, and podcaster Mat Dryhurst helps explain how AI works, as well as how DALL-E 2, a new AI system from OpenAI, differs from GANs. We also hear from artist Super Metal Bosch, one of the artists behind the GAN art project Super Metal Mons. Vivian goes on to compare AI to photography and the art of being a clown, which goes over Noah's head. Artist Carlos Sanchez contributes thoughts about clowns. The episode is interrupted by a last-minute mint of Ezra Miller's project Silk Road. JPEG2000 is sponsored by Context.app
On the 65th episode of "That's Nifty" we sat down with Marjan Moghaddam, a fine art and CG animator whose love for technology allowed her to become a pioneer of digital art. She also has a storied career across print, sculpture, AR/VR, and 3D figuration, which has netted her several awards in her field. She currently has 3 exhibitions in progress and a new project on the horizon, details inside!Marjan MoghaddamTwitter: @TheMarjanInstagram: @marjan_moghaddam_artistWebsiteTopics:Fine Arts meets Animation, Brain-Linked Interactive Activity, Digital Art Pioneer, Chronometric Sculptures, Tex Avery, Idealism transition to futurism, Creating vs Discovering Techniques, Generative Adversarial Networks, Brother's work with Machine Vision, Introduction to NFTs, Art Hacks, How to Sell Animations?, "Taking a Knee in Solidarity", Difference between Digital Art and Crypto Art, Curation Landscape Shifting, PFP Project on the Horizon, Defi, GAN Trekker - Art Basel, Artsy Women's History Month - Vellum LA Gallery, WOCA Exhibition, Metaverse Fashion Week – Decentraland, Lumicanvas Displays, Censorship on Social Media, Inherent Tendency of Technology, NYC Pandemic TimesMentions:@verticalcrypto @refikanadol @SuperRare @MuseumofCrypto @beeple @proof_art @JesseDamiani @vellumla @josiebellini @worldofwomennft @VitalikButerin @rarible @artsy @blackboxdotart @1stDibsNFT @hellowoca @decentraland @davidcash888
Hello everyone and welcome to ZD Tech, the daily podcast from the ZDNet.fr newsroom. My name is Guillaume Serries, and today I explain how generative adversarial networks, GANs, could quickly improve the accuracy of local weather forecasts. Climate change is increasing the intensity and frequency of extreme weather events, and the complexity of the physics governing heavy rainfall makes producing accurate local forecasts very difficult. Hence the idea of putting artificial intelligence to work to try to anticipate these dangerous weather events. But since "artificial intelligence" has become a slightly overused buzzword, let me go into the details. This is a new machine learning model. Concretely, an AI is trained on datasets to tell the real from the fake. Data scientists and researchers at the startup ClimateAi are using this technique to correct the biases that currently exist in generic weather models. To do so, they use generative adversarial networks, or GANs, a class of unsupervised machine learning algorithms. A GAN is composed of two networks placed in competition in a game-theory scenario. The first network is the generator, which generates a data sample. Its adversary, the second network, called the discriminator, tries to detect whether the sample is real or whether it came from the generator. In this way, the AI model gradually refines the accuracy of its outputs, and therefore of its forecasts. That could replace the phenomenal computing power currently required for forecasts, which are produced on supercomputers packed with processors. Above all, these AI models should eventually complement, or even stand in for, the expertise of the meteorologists who turn data into forecasts. The upshot of this work is that the AI model downscales global forecasts so that they are as precise as local forecasts, without requiring the vast computing, financial, and human resources that were previously needed to forecast at such a small scale. What does this look like in practice? ClimateAi suggests a scenario in which, rather than simply confirming a "40% chance of rain this week" for an entire region, the new model answers questions such as: How likely is it to rain, or not rain, tomorrow? Or: Where exactly will it rain?
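The generator-versus-discriminator game just described fits in a few lines of code. Below is a generic, minimal GAN training loop in PyTorch on toy one-dimensional data; it is purely illustrative and is not ClimateAi's weather model.

```python
import torch
import torch.nn as nn

# Toy GAN (illustrative, not ClimateAi's model): the generator maps noise
# to 1-D samples, the discriminator scores how "real" a sample looks.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # stand-in "real" data
    fake = G(torch.randn(64, 8))

    # Discriminator step: label real samples 1, generated samples 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the updated discriminator on fresh fakes.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In a downscaling setting, the "real" samples would roughly correspond to high-resolution local observations and the generator's output to refined coarse forecasts, but the adversarial structure stays the same.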
Consider this: you receive a call from a colleague asking you to transfer a large sum for an apparently legitimate purpose. The voice on the other side is a person you know well. What would you do? Transferring the amount could be a big mistake, because it could be an AI-assisted deep-voice attack. And it's not all fiction. Something similar happened in 2020, when a Hong Kong-based banker received a call from the director of his company asking him to transfer $35 million for an acquisition. He was talking to a computer simulation of the director's voice, and the elaborate fraud came with fake emails to verify the purchase. In a world of growing cybersecurity threats, deepfakes are possibly the most dangerous yet fascinating of the lot. The word is a portmanteau of "deep learning" and "fake". With the technology, one can create realistic videos of, say, a celebrity or even a friend, doing and saying things they have not done or said. That's why it's not surprising to find Vladimir Putin appearing on your screen to declare peace, and Volodymyr Zelenskyy telling his people to surrender. Those deepfakes were taken down, but they reveal the role of misinformation in the ongoing Russia-Ukraine crisis. The unconvincing fakes of Putin and Zelenskyy were ridiculed by many, but often the realistic nature of such content makes it hard to fact-check. This makes synthetic media a powerful tool that can be weaponized for malicious purposes. Individuals have unique faces, voices, figures, expressions, speech patterns and movements. Taking all that into consideration, AI can analyse video footage of an individual to extrapolate how they would speak or do something. In general, deepfake programs use Generative Adversarial Networks, which have two algorithms: while one forges deepfakes, the other identifies flaws in the forgery, which are corrected subsequently. The technology has disrupted politics, mocked TV shows, targeted influential people, generated blackmail material, and created internet memes and satires, thereby making its way into mainstream consciousness. This has also inspired researchers around the world to develop technologies to detect deepfakes. For instance, Microsoft has announced technologies that can detect manipulated content and assure people that the media they're viewing is authentic. And it's not limited to video: with sinister audio deepfakes, you can't believe your ears either. However, this emerging domain can also have legitimate applications in computer game design, medical applications and more.
Nowadays the transport sector is among the main contributors to ongoing climate change. It is estimated that about 27% of total CO2 emissions in Europe are generated by the countless journeys that take place every day, caused by road vehicles such as cars, trucks and buses, while the remaining share comes from ships and planes. Rail transport, by contrast, is in some ways an exception: it is an extremely efficient, fast and safe mode of transport. In this episode we talk about high-speed trains, on-board connectivity and, finally, what to expect from the future of rail transport. In the news section we cover Airbus's project for a hydrogen-powered aircraft, the hacker attacks against Ukraine, and the relationship between artificial intelligence and copyright. --Index-- • Airbus's hydrogen-powered aircraft project (00:57) - DMove.it - Matteo Gallo • The hacker attacks in Ukraine (02:06) - HWUpgrade.it - Davide Fasoli • Artificial intelligence does not enjoy copyright (03:13) - TheVerge.com - Luca Martinelli • The future of rail transport (04:46) - Matteo Gallo --Contacts-- • www.dentrolatecnologia.it • Instagram (@dentrolatecnologia) • Telegram (@dentrolatecnologia) • YouTube • redazione@dentrolatecnologia.it --Images-- • Cover photo: Carlo Buliani --Tracks-- • Ecstasy by Rabbit Theft • Alibi by Distrion (ft. Heleen)
Unsupervised fine-grained class clustering is a practical yet challenging task, due to the difficulty of learning feature representations that capture subtle object details. We introduce C3-GAN, a method that leverages the categorical inference power of InfoGAN by applying contrastive learning. We aim to learn feature representations that encourage the data to form distinct cluster boundaries in the embedding space, while also maximizing the mutual information between the latent code and its observation. 2021: Yunji Kim, Jung-Woo Ha https://arxiv.org/pdf/2112.14971v1.pdf
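In InfoGAN-style models, "maximizing the mutual information between the latent code and its observation" is done in practice with an auxiliary head that tries to recover the code from the generated sample. A hedged sketch of that loss term follows; all names and shapes are illustrative, and this is not the C3-GAN code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative InfoGAN-style mutual-information term (not the C3-GAN code):
# an auxiliary head Q tries to recover the categorical latent code c from
# the generated sample; training G and Q to make that recovery easy
# maximizes a lower bound on the mutual information I(c; G(z, c)).
n_classes, noise_dim = 10, 64
G = nn.Sequential(nn.Linear(noise_dim + n_classes, 128), nn.ReLU(),
                  nn.Linear(128, 32))         # generator: (z, c) -> sample
Q = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                  nn.Linear(128, n_classes))  # Q head: sample -> logits for c

c = torch.randint(0, n_classes, (16,))        # sample categorical codes
z = torch.randn(16, noise_dim)
x = G(torch.cat([z, F.one_hot(c, n_classes).float()], dim=1))
mi_loss = F.cross_entropy(Q(x), c)            # minimizing this raises the MI bound
```

When the code can always be read back from the output, each value of c carves out its own region of the data, which is the clustering behaviour the abstract is after.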
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Babble and Prune, Part 3: Prune, published by alkjash. This is a linkpost for/ Previously, I described human thought-generation as an adversarial process between a low-quality pseudorandom Babble generator and a high-quality Prune filter, roughly analogous to the Generative Adversarial Networks model in machine learning. I then elaborated on this model by reconceptualizing Babble as a random walk with random restarts on an implicitly stored Babble graph. Rationalist training (and schooling in general) slants towards developing Prune over Babble. I'm trying to solve the dual problem: that of improving the quality of your Babble. Although the previous posts listed a number of exotic isolation exercises for Babble, I'm guessing nobody was inspired to go out and play more Scrabble, write haikus, or stop using the letter 'e'. That's probably for the best - taking these exercises too seriously would produce exotic but sub-optimal Babble anyway. For a serious solution to this serious problem, we need to understand Prune at a higher resolution. The main problem with Prune is that it has too many layers. There's a filter for subconscious thoughts to become conscious, another for it to become spoken word, another for the spoken word to be written down, and a further one for the written word to be displayed in public. With this many-layer model in mind, there are plenty of knobs to turn to let more and better Babble through. The River of Babble Imagine your river of Babble at its source, the subconscious: a foaming, ugly-colored river littered with half-formed concepts, too wild to navigate, too dirty to drink from. A quarter mile across, the bellow of the rapids is deafening. Downstream, you build a series of gates to tame the rushing rapids and perhaps extract something beautiful and pure. The First Gate, conscious thought, is a huge dam a thousand feet high and holds almost all the incoming thoughts at bay. Behind it, an enormous lake forms, threatening to overflow at any moment. A thick layer of trash floats to the top of this lake, intermixed with a fair amount of the good stuff. The First Gate lets through anything that satisfies a bare minimum of syntactical and semantic constraints. Thoughts that make it past the First Gate are the first ones you become conscious of - that's why they call the output the Stream of Consciousness. A mile down the Stream of Consciousness is the Second Gate, spoken word, the filter through which thoughts become sounds. This Gate keeps you from saying all the foolish or risqué thoughts tripping through your head. Past the Second Gate, your spoken words form only a pathetic trickle - a Babbling Brook. By now there is hardly anything left to sift from. The Third Gate, written word, is no physical gate but a team of goldpanners, scattered down the length of the Babbling Brook to pan for jewels and nuggets of gold. Such rare beauties are the only Babble that actually make it onto paper. You hoard these little trinkets in your personal diary or blog, hoping one day to accumulate enough to forge a beautiful necklace. Past the Third Gate, more Gates lie unused because there simply isn't enough material to fuel them: a whole chain of manufactories passed down from the great writers of yore. Among them are the disembodied voices of Strunk and White: Omit needless words. Vigorous writing is concise. 
A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell. Jealously clutching the 500-word pearls you drop once a month on your blog, you dream of the day when the cap...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Babble and Prune, Part 2: More Babble, published by alkjash. This is a linkpost for/ In my last babble, I introduced the Babble and Prune model of thought generation: Babble with a weak heuristic to generate many more possibilities than necessary, Prune with a strong heuristic to find the best, or at least a satisfactory, one. I want to zoom in on this model. If the last babble was colored by my biases as a probabilist, this one is motivated by my biases as a graph theorist. First, I will speculate on the exact mechanism of Babble, and also highlight the fact that Babble and Prune are independent systems that can be mocked out for unit testing. Second, I will lather on some metaphors about the adversarial nature of Babble and Prune. Two people have independently mentioned Generative Adversarial Networks to me, a model of unsupervised learning involving two neural nets, Generator and Discriminator. The Artist and the Critic are archetypes of the same flavor - I have argued in the past the spirit of the Critic is Satan. Babble is (Sampling From) PageRank Previously, I suggested that a Babble generator is a pseudorandom word generator, weighted with a weak, local filter. This is roughly true, but spectacularly fails one of the technical goals of a pseudorandom generator: independence. In particular, the next word you Babble is frequently a variation (phonetically or semantically) of the previous one. PageRank, as far as I know, ranks web pages by the heuristic of "what is the probability of ending up at this page after a random walk with random restarts." That's why a better analogy for Babble is sampling from PageRank, i.e. taking a weighted random walk in your Babble graph with random restarts. Jackson Pollock is visual Babble. Imagine you're playing a game of Scrabble, and you have the seven letters JRKAXN. What does your algorithm feel like? You scan the board and see an open M. You start Babbling letter combinations that might start with M: MAJR, MRAJ, MRAN, MARN, MARX (oops, proper noun), MARK (great!). That's the weighted random walk. You set MARK aside and look for another place to start. Time for a restart. You find an open A before a Triple Word, that'd be great to get! You start Babbling combinations that end with A: NARA, NAXRA, JARA, JAKA, RAKA. No luck. Maybe the A should be in the middle of the word! ARAN, AKAN, AKAR, AJAR (great!). You sense mean stares for taking so long, so you turn off the Babble and score AJAR for (1+8+1+1)x3 = 33 points. Not too shabby. The Babble Graph Last time, I described getting better at Babble as increasing the uniformity of your pseudorandom Babble generator. With a higher-resolution model of Babble in hand, we should reconceptualize increasing uniformity as building a well-connected Babble graph. What is the Babble graph? It's the graph within which your words and concepts are connected. Some of these connections are by rhyme and visual similarity, others are semantic or personal. Blood and snow are connected in my Babble graph, for example, because in Chinese they are homophones: snow is 雪 (xue), and blood is 血 (xue). This led to the following paragraph from one of my high school essays (paraphrased): In Chinese, snow and blood sound the same: "xue." Some people think the world will end suddenly in nuclear holocaust, pandemic, or a belligerent SkyNet. 
I think the world will die slowly and painfully, bleeding to death one drop at a time with each New England winter. My parents had recently dragged me out to jog in the melting post-blizzard slush. One of my favorite classes in college was a game theory class taught by the wonderful David Parkes; my wife and I lovingly remember the class as Parkes and Rec. One of the striking ideas I learned in Parkes and Rec is that exponentially large graphs can be compactly represented implicitly in memory, ...
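The "random walk with random restarts" picture of PageRank invoked above can be simulated directly. Here is a toy sketch with an invented miniature Babble graph (including the essay's snow/blood link); the graph and restart probability are illustrative, not any canonical dataset.

```python
import random

# Toy "Babble graph": each word links to associated words.
graph = {
    "snow": ["blood", "winter", "white"],
    "blood": ["snow", "red"],
    "winter": ["snow", "cold"],
    "white": ["snow"],
    "red": ["blood"],
    "cold": ["winter"],
}

def pagerank_walk(graph, steps=100_000, restart_p=0.15):
    """Estimate PageRank as visit frequency of a random walk with restarts."""
    counts = {node: 0 for node in graph}
    node = random.choice(list(graph))
    for _ in range(steps):
        if random.random() < restart_p or not graph[node]:
            node = random.choice(list(graph))   # random restart
        else:
            node = random.choice(graph[node])   # follow a random edge
        counts[node] += 1
    return {n: c / steps for n, c in counts.items()}

print(pagerank_walk(graph))  # "snow" should rank highest: most in-links
```

A well-connected Babble graph, in these terms, is one where the walk can reach many nodes from anywhere, rather than getting stuck circling a few well-worn associations.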
On 25 October 2018, the auction house Christie's in New York sold, for the first time in its history, a work created by an artificial intelligence. The canvas, a portrait of Edmond de Belamy, evokes the silhouette of a man dressed in black, puritan-style clothing, in the manner of nineteenth-century family portraits. In the lower right corner is a signature: the mathematical formula of the algorithm that created it. The work was produced by the artists' collective Obvious, who use deep learning to explore the creative potential of artificial intelligence through image-generation algorithms, GANs or Generative Adversarial Networks, thereby shaking up the art world and raising questions about the place of the artist amid new technologies. They look back on the moment that marked their history and launched their adventure. art cast • is a podcast co-produced by Marine Coloos, Anna Kulikova and Mélodie Shahrvari. Sound design by Guillaume Cabaret, with excerpts from the music of: Jazzy Bazz - Le Roseau / Khéops (feat. Sentenza) - Pousse au milieu des cactus, ma rancœur / Ikaz Boi (feat. Damso) - Soliterrien. To learn more about the Obvious collective: https://obvious-art.com/ https://www.instagram.com/obvious_art/?hl=fr An artist to recommend or an opinion to share? Contact us at podcast@artcast.fr For more information, follow us on social media: Instagram | Linkedin | TikTok | Facebook
We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. We trace the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, we derive generally applicable, small architectural changes that guarantee that unwanted information cannot leak into the hierarchical synthesis process. The resulting networks match the FID of StyleGAN2 but differ dramatically in their internal representations, and they are fully equivariant to translation and rotation even at subpixel scales. Our results pave the way for generative models better suited for video and animation. 2021: Tero Karras, M. Aittala, S. Laine, Erik Harkonen, Janne Hellsten, J. Lehtinen, Timo Aila Ranked #1 on Image Generation on FFHQ-U https://arxiv.org/pdf/2106.12423v3.pdf
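The paper's core prescription, band-limit every signal before any operation that could push energy past the Nyquist frequency, can be illustrated with ordinary one-dimensional signal processing. The NumPy/SciPy sketch below is a generic demonstration of aliasing under naive downsampling; it is not the StyleGAN3 code, and the rates and filter length are arbitrary.

```python
import numpy as np
from scipy.signal import firwin, lfilter

# Generic anti-aliasing illustration (not the StyleGAN3 code): naive
# downsampling folds high frequencies back as spurious low ones; low-pass
# filtering first keeps the signal band-limited, the property the paper
# enforces throughout the generator.
fs = 1000                                   # original sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 400 * t)        # 400 Hz tone

naive = signal[::4]                         # 250 Hz rate: 400 Hz aliases to 100 Hz

lowpass = firwin(numtaps=101, cutoff=125, fs=fs)   # keep < Nyquist of new rate
filtered = lfilter(lowpass, 1.0, signal)[::4]      # aliasing suppressed

print(np.abs(naive).max(), np.abs(filtered).max())  # full-strength alias vs. attenuated tone
```

In the generator, the analogous aliases show up as detail "glued" to pixel coordinates; filtering before every nonlinearity and resampling step is what buys the translation and rotation equivariance the abstract reports.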
Hey guys, in this episode I talk about applications of GANs, Generative Adversarial Networks, in the real world. I talk about many applications in photo and video editing, super resolution, video games, autonomous driving and much more! Instagram: https://www.instagram.com/podcast.lifewithai/ Linkedin: https://www.linkedin.com/company/life-with-ai Code: https://github.com/filipelauar/projects/tree/main/GAN_applications
In this episode, host Bidemi Ologunde conducts a deep dive on deepfake technology, its remarkable history, recent high-profile incidents on the dark web and surface web, recommendations and future assessments.Please send questions, comments, and suggestions to bidemi@thebidpicture.com. You can also get in touch on LinkedIn, Twitter, the Clubhouse app (@bid), and the Wisdom app (@bidemi).
A² The Show - Ep 228 Feat. Bas Uterwijk. Bas Uterwijk is a freelance photographer living in Amsterdam, the Netherlands. At the moment he is re-evaluating his role as a photographer. For his Generative Adversarial Network images, see: https://www.instagram.com/ganbrood A selection of his AI work is for sale through Warnars Art: https://www.warnarsartdealers.com/art... Follow the podcast hosts on social media: @a2theshow Hosts Ali Haejl @scoobz.mp4 Ali Al Shammari @freshprinceofmishref Social Media Ali Saeed @freelanceralisaeed alihaejl.com --- Send in a voice message: https://anchor.fm/a2theshow/message Support this podcast: https://anchor.fm/a2theshow/support
This week we talk about Photoshopping, GANs, and Lincoln's fake body.We also discuss AI-generated cats, scientific discovery, and creative self-consciousness. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
Jeff Bezos & Amazon clearly don't have a New York state of mind, as the tech giant walks away from its planned HQ expansion, opting to focus instead on its Virginia HQ2 while distributing an optimistic 25,000 jobs across existing facilities. What happened? Is this really about business? Or politics? And is there a lesson here? We think so. Plus our Fast Five:
- UK moves closer to anti-trust action against Facebook;
- Huawei's 5G prospects just improved in the UK;
- The digital disruption of key fobs;
- Apple looks to a post-iPhone future; and
- IBM's Watson makes a guest appearance on non-IBM clouds.
Our Tech Bites topic: AI, Generative Adversarial Networks, and the risk of not-so-deep fakes. Our Crystal Ball: MWC19 is right around the corner - here's what we expect and would like to see in the world of mobile tech! This episode features: Daniel Newman (@danielnewmanUV), Fred McClimans (@fredmcclimans), and Olivier Blanchard (@oablanchard). If you haven't already, please subscribe to our show on iTunes or SoundCloud. For inquiries or more information on the show, you may email the team at info@futurumresearch.com or follow @FuturumResearch on Twitter and feel free to direct inquiries through that channel as well. To learn more about Futurum Research, please visit www.futurumresearch.com. Futurum Research is a research and analysis provider, not an investment advisor. The Futurum Tech Podcast is a newsletter/podcast intended for informational use only. Futurum Research does not provide personalized investment advice. No investment advice is offered nor implied by this podcast.