POPULARITY
McAnally's Pubcast - A Dresden Files PodcastHere Jes catches us up on her opinions on issue 4 and then we discuss issue 5 in which we go over art choices we are rather uncertain about.War Cry Issue 5 Summary:The Shoggoth begins to reign destruction on all parties. Harry instructs the Wardens and the Venatori to escape, while he and Thomas enact a plan to destroy the Shoggoth in the nearby Quarry.Find Us Elsewhere:Do you want to follow up with us for even more Dresden? We're all over the internet - you can email us at pubcast@freeflowrambling.com, or you can track us down at Facebook, Instagram, Discord, X (formerly known as Twitter), Reddit, our Dresden Files website, or our parent website. If you want hypnotic visuals with your podcast, you can find us at YouTube. Not enough? Why not show your support by clicking here and donating or joining us on our Patreon. Also, if you're in the market for some merch, you can click here. If you still aren't satisfied, click here and tell us all about it!
McAnally's Pubcast - A Dresden Files PodcastHere we discuss issue 4 in which we are genuinely confused where Thomas and Harry are squatting and stealing beer from.War Cry Issue 4 Summary:Harry takes control and begins to fortify the house. The Baron Bravosa offers Harry safe passage to leave but not with the Venatori. Harry denies and finds out that the Venatori are housing the offspring of an outsider - a Shoggoth.Find Us Elsewhere:Do you want to follow up with us for even more Dresden? We're all over the internet - you can email us at pubcast@freeflowrambling.com, or you can track us down at Facebook, Instagram, Discord, X (formerly known as Twitter), Reddit, our Dresden Files website, or our parent website. If you want hypnotic visuals with your podcast, you can find us at YouTube. Not enough? Why not show your support by clicking here and donating or joining us on our Patreon. Also, if you're in the market for some merch, you can click here. If you still aren't satisfied, click here and tell us all about it!
McAnally's Pubcast - A Dresden Files PodcastHere we cover Issue 2 and 3 in which we discuss the physics of shotgun art, and kettle black wardrobe insults are flung. War Cry Issue 2 Summary:Harry takes control and begins to fortify the house. The Baron Bravosa offers Harry safe passage to leave but not with the Venatori. Harry denies and finds out that the Venatori are housing the offspring of an outsider - a Shoggoth.War Cry Issue 3 Summary:The Wardens fiercely battle to defend the Venatori's Stronghouse while inside the Venatori fend of a human mercenary. Just as the Wardens begin to lose steam another breed of vampire arrives in the fray.Find Us Elsewhere:Do you want to follow up with us for even more Dresden? We're all over the internet - you can email us at pubcast@freeflowrambling.com, or you can track us down at Facebook, Instagram, Discord, X (formerly known as Twitter), Reddit, our Dresden Files website, or our parent website. If you want hypnotic visuals with your podcast, you can find us at YouTube. Not enough? Why not show your support by clicking here and donating or joining us on our Patreon. Also, if you're in the market for some merch, you can click here. If you still aren't satisfied, click here and tell us all about it!
Alessio will be at AWS re:Invent next week and hosting a casual coffee meetup on Wednesday, RSVP here! And subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!If you've been following the AI agents space, you have heard of Lindy AI; while founder Flo Crivello is hesitant to call it "blowing up," when folks like Andrew Wilkinson start obsessing over your product, you're definitely onto something.In our latest episode, Flo walked us through Lindy's evolution from late 2022 to now, revealing some design choices about agent platform design that go against conventional wisdom in the space.The Great Reset: From Text Fields to RailsRemember late 2022? Everyone was "LLM-pilled," believing that if you just gave a language model enough context and tools, it could do anything. Lindy 1.0 followed this pattern:* Big prompt field ✅* Bunch of tools ✅* Prayer to the LLM gods ✅Fast forward to today, and Lindy 2.0 looks radically different. As Flo put it (~17:00 in the episode): "The more you can put your agent on rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user."Instead of a giant, intimidating text field, users now build workflows visually:* Trigger (e.g., "Zendesk ticket received")* Required actions (e.g., "Check knowledge base")* Response generationThis isn't just a UI change - it's a fundamental rethinking of how to make AI agents reliable. As Swyx noted during our discussion: "Put Shoggoth in a box and make it a very small, minimal viable box. Everything else should be traditional if-this-then-that software."The Surprising Truth About Model LimitationsHere's something that might shock folks building in the space: with Claude 3.5 Sonnet, the model is no longer the bottleneck. Flo's exact words (~31:00): "It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small."Some context: Lindy started when context windows were 4K tokens. Today, their system prompt alone is larger than that. But what's really interesting is what this means for platform builders:* Raw capabilities aren't the constraint anymore* Integration quality matters more than model performance* User experience and workflow design are the new bottlenecksThe Search Engine Parallel: Why Horizontal Platforms Might WinOne of the spiciest takes from our conversation was Flo's thesis on horizontal vs. vertical agent platforms. He draws a fascinating parallel to search engines (~56:00):"I find it surprising the extent to which a horizontal search engine has won... You go through Google to search Reddit. You go through Google to search Wikipedia... search in each vertical has more in common with search than it does with each vertical."His argument: agent platforms might follow the same pattern because:* Agents across verticals share more commonalities than differences* There's value in having agents that can work together under one roof* The R&D cost of getting agents right is better amortized across use casesThis might explain why we're seeing early vertical AI companies starting to expand horizontally. The core agent capabilities - reliability, context management, tool integration - are universal needs.What This Means for BuildersIf you're building in the AI agents space, here are the key takeaways:* Constrain First: Rather than maximizing capabilities, focus on reliable execution within narrow bounds* Integration Quality Matters: With model capabilities plateauing, your competitive advantage lies in how well you integrate with existing tools* Memory Management is Key: Flo revealed they actively prune agent memories - even with larger context windows, not all memories are useful* Design for Discovery: Lindy's visual workflow builder shows how important interface design is for adoptionThe Meta LayerThere's a broader lesson here about AI product development. Just as Lindy evolved from "give the LLM everything" to "constrain intelligently," we might see similar evolution across the AI tooling space. The winners might not be those with the most powerful models, but those who best understand how to package AI capabilities in ways that solve real problems reliably.Full Video PodcastFlo's talk at AI Engineer SummitChapters* 00:00:00 Introductions * 00:04:05 AI engineering and deterministic software * 00:08:36 Lindys demo* 00:13:21 Memory management in AI agents * 00:18:48 Hierarchy and collaboration between Lindys * 00:21:19 Vertical vs. horizontal AI tools * 00:24:03 Community and user engagement strategies * 00:26:16 Rickrolling incident with Lindy * 00:28:12 Evals and quality control in AI systems * 00:31:52 Model capabilities and their impact on Lindy * 00:39:27 Competition and market positioning * 00:42:40 Relationship between Factorio and business strategy * 00:44:05 Remote work vs. in-person collaboration * 00:49:03 Europe vs US Tech* 00:58:59 Testing the Overton window and free speech * 01:04:20 Balancing AI safety concerns with business innovation Show Notes* Lindy.ai* Rick Rolling* Flo on X* TeamFlow* Andrew Wilkinson* Dust* Poolside.ai* SB1047* Gathertown* Sid Sijbrandij* Matt Mullenweg* Factorio* Seeing Like a StateTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're joined in the studio by Florent Crivello. Welcome.Flo [00:00:15]: Hey, yeah, thanks for having me.Swyx [00:00:17]: Also known as Altimore. I always wanted to ask, what is Altimore?Flo [00:00:21]: It was the name of my character when I was playing Dungeons & Dragons. Always. I was like 11 years old.Swyx [00:00:26]: What was your classes?Flo [00:00:27]: I was an elf. I was a magician elf.Swyx [00:00:30]: Well, you're still spinning magic. Right now, you're a solo founder and CEO of Lindy.ai. What is Lindy?Flo [00:00:36]: Yeah, we are a no-code platform letting you build your own AI agents easily. So you can think of we are to LangChain as Airtable is to MySQL. Like you can just pin up AI agents super easily by clicking around and no code required. You don't have to be an engineer and you can automate business workflows that you simply could not automate before in a few minutes.Swyx [00:00:55]: You've been in our orbit a few times. I think you spoke at our Latent Space anniversary. You spoke at my summit, the first summit, which was a really good keynote. And most recently, like we actually already scheduled this podcast before this happened. But Andrew Wilkinson was like, I'm obsessed by Lindy. He's just created a whole bunch of agents. So basically, why are you blowing up?Flo [00:01:16]: Well, thank you. I think we are having a little bit of a moment. I think it's a bit premature to say we're blowing up. But why are things going well? We revamped the product majorly. We called it Lindy 2.0. I would say we started working on that six months ago. We've actually not really announced it yet. It's just, I guess, I guess that's what we're doing now. And so we've basically been cooking for the last six months, like really rebuilding the product from scratch. I think I'll list you, actually, the last time you tried the product, it was still Lindy 1.0. Oh, yeah. If you log in now, the platform looks very different. There's like a ton more features. And I think one realization that we made, and I think a lot of folks in the agent space made the same realization, is that there is such a thing as too much of a good thing. I think many people, when they started working on agents, they were very LLM peeled and chat GPT peeled, right? They got ahead of themselves in a way, and us included, and they thought that agents were actually, and LLMs were actually more advanced than they actually were. And so the first version of Lindy was like just a giant prompt and a bunch of tools. And then the realization we had was like, hey, actually, the more you can put your agent on Rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user, because you can really, as a user, you get, instead of just getting this big, giant, intimidating text field, and you type words in there, and you have no idea if you're typing the right word or not, here you can really click and select step by step, and tell your agent what to do, and really give as narrow or as wide a guardrail as you want for your agent. We started working on that. We called it Lindy on Rails about six months ago, and we started putting it into the hands of users over the last, I would say, two months or so, and I think things really started going pretty well at that point. The agent is way more reliable, way easier to set up, and we're already seeing a ton of new use cases pop up.Swyx [00:03:00]: Yeah, just a quick follow-up on that. You launched the first Lindy in November last year, and you were already talking about having a DSL, right? I remember having this discussion with you, and you were like, it's just much more reliable. Is this still the DSL under the hood? Is this a UI-level change, or is it a bigger rewrite?Flo [00:03:17]: No, it is a much bigger rewrite. I'll give you a concrete example. Suppose you want to have an agent that observes your Zendesk tickets, and it's like, hey, every time you receive a Zendesk ticket, I want you to check my knowledge base, so it's like a RAG module and whatnot, and then answer the ticket. The way it used to work with Lindy before was, you would type the prompt asking it to do that. You check my knowledge base, and so on and so forth. The problem with doing that is that it can always go wrong. You're praying the LLM gods that they will actually invoke your knowledge base, but I don't want to ask it. I want it to always, 100% of the time, consult the knowledge base after it receives a Zendesk ticket. And so with Lindy, you can actually have the trigger, which is Zendesk ticket received, have the knowledge base consult, which is always there, and then have the agent. So you can really set up your agent any way you want like that.Swyx [00:04:05]: This is something I think about for AI engineering as well, which is the big labs want you to hand over everything in the prompts, and only code of English, and then the smaller brains, the GPU pours, always want to write more code to make things more deterministic and reliable and controllable. One way I put it is put Shoggoth in a box and make it a very small, the minimal viable box. Everything else should be traditional, if this, then that software.Flo [00:04:29]: I love that characterization, put the Shoggoth in the box. Yeah, we talk about using as much AI as necessary and as little as possible.Alessio [00:04:37]: And what was the choosing between kind of like this drag and drop, low code, whatever, super code-driven, maybe like the Lang chains, auto-GPT of the world, and maybe the flip side of it, which you don't really do, it's like just text to agent, it's like build the workflow for me. Like what have you learned actually putting this in front of users and figuring out how much do they actually want to add it versus like how much, you know, kind of like Ruby on Rails instead of Lindy on Rails, it's kind of like, you know, defaults over configuration.Flo [00:05:06]: I actually used to dislike when people said, oh, text is not a great interface. I was like, ah, this is such a mid-take, I think text is awesome. And I've actually come around, I actually sort of agree now that text is really not great. I think for people like you and me, because we sort of have a mental model, okay, when I type a prompt into this text box, this is what it's going to do, it's going to map it to this kind of data structure under the hood and so forth. I guess it's a little bit blackmailing towards humans. You jump on these calls with humans and you're like, here's a text box, this is going to set up an agent for you, do it. And then they type words like, I want you to help me put order in my inbox. Oh, actually, this is a good one. This is actually a good one. What's a bad one? I would say 60 or 70% of the prompts that people type don't mean anything. Me as a human, as AGI, I don't understand what they mean. I don't know what they mean. It is actually, I think whenever you can have a GUI, it is better than to have just a pure text interface.Alessio [00:05:58]: And then how do you decide how much to expose? So even with the tools, you have Slack, you have Google Calendar, you have Gmail. Should people by default just turn over access to everything and then you help them figure out what to use? I think that's the question. When I tried to set up Slack, it was like, hey, give me access to all channels and everything, which for the average person probably makes sense because you don't want to re-prompt them every time you add new channels. But at the same time, for maybe the more sophisticated enterprise use cases, people are like, hey, I want to really limit what you have access to. How do you kind of thread that balance?Flo [00:06:35]: The general philosophy is we ask for the least amount of permissions needed at any given moment. I don't think Slack, I could be mistaken, but I don't think Slack lets you request permissions for just one channel. But for example, for Google, obviously there are hundreds of scopes that you could require for Google. There's a lot of scopes. And sometimes it's actually painful to set up your Lindy because you're going to have to ask Google and add scopes five or six times. We've had sessions like this. But that's what we do because, for example, the Lindy email drafter, she's going to ask you for your authorization once for, I need to be able to read your email so I can draft a reply, and then another time for I need to be able to write a draft for them. We just try to do it very incrementally like that.Alessio [00:07:15]: Do you think OAuth is just overall going to change? I think maybe before it was like, hey, we need to set up OAuth that humans only want to kind of do once. So we try to jam-pack things all at once versus what if you could on-demand get different permissions every time from different parts? Do you ever think about designing things knowing that maybe AI will use it instead of humans will use it? Yeah, for sure.Flo [00:07:37]: One pattern we've started to see is people provisioning accounts for their AI agents. And so, in particular, Google Workspace accounts. So, for example, Lindy can be used as a scheduling assistant. So you can just CC her to your emails when you're trying to find time with someone. And just like a human assistant, she's going to go back and forth and offer other abilities and so forth. Very often, people don't want the other party to know that it's an AI. So it's actually funny. They introduce delays. They ask the agent to wait before replying, so it's not too obvious that it's an AI. And they provision an account on Google Suite, which costs them like $10 a month or something like that. So we're seeing that pattern more and more. I think that does the job for now. I'm not optimistic on us actually patching OAuth. Because I agree with you, ultimately, we would want to patch OAuth because the new account thing is kind of a clutch. It's really a hack. You would want to patch OAuth to have more granular access control and really be able to put your sugar in the box. I'm not optimistic on us doing that before AGI, I think. That's a very close timeline.Swyx [00:08:36]: I'm mindful of talking about a thing without showing it. And we already have the setup to show it. Why don't we jump into a screen share? For listeners, you can jump on the YouTube and like and subscribe. But also, let's have a look at how you show off Lindy. Yeah, absolutely.Flo [00:08:51]: I'll give an example of a very simple Lindy and then I'll graduate to a much more complicated one. A super simple Lindy that I have is, I unfortunately bought some investment properties in the south of France. It was a really, really bad idea. And I put them on a Holydew, which is like the French Airbnb, if you will. And so I received these emails from time to time telling me like, oh, hey, you made 200 bucks. Someone booked your place. When I receive these emails, I want to log this reservation in a spreadsheet. Doing this without an AI agent or without AI in general is a pain in the butt because you must write an HTML parser for this email. And so it's just hard. You may not be able to do it and it's going to break the moment the email changes. By contrast, the way it works with Lindy, it's really simple. It's two steps. It's like, okay, I receive an email. If it is a reservation confirmation, I have this filter here. Then I append a row to this spreadsheet. And so this is where you can see the AI part where the way this action is configured here, you see these purple fields on the right. Each of these fields is a prompt. And so I can say, okay, you extract from the email the day the reservation begins on. You extract the amount of the reservation. You extract the number of travelers of the reservation. And now you can see when I look at the task history of this Lindy, it's really simple. It's like, okay, you do this and boom, appending this row to this spreadsheet. And this is the information extracted. So effectively, this node here, this append row node is a mini agent. It can see everything that just happened. It has context over the task and it's appending the row. And then it's going to send a reply to the thread. That's a very simple example of an agent.Swyx [00:10:34]: A quick follow-up question on this one while we're still on this page. Is that one call? Is that a structured output call? Yeah. Okay, nice. Yeah.Flo [00:10:41]: And you can see here for every node, you can configure which model you want to power the node. Here I use cloud. For this, I use GPT-4 Turbo. Much more complex example, my meeting recorder. It looks very complex because I've added to it over time, but at a high level, it's really simple. It's like when a meeting begins, you record the meeting. And after the meeting, you send me a summary and you send me coaching notes. So I receive, like my Lindy is constantly coaching me. And so you can see here in the prompt of the coaching notes, I've told it, hey, you know, was I unnecessarily confrontational at any point? I'm French, so I have to watch out for that. Or not confrontational enough. Should I have double-clicked on any issue, right? So I can really give it exactly the kind of coaching that I'm expecting. And then the interesting thing here is, like, you can see the agent here, after it sent me these coaching notes, moves on. And it does a bunch of other stuff. So it goes on Slack. It disseminates the notes on Slack. It does a bunch of other stuff. But it's actually able to backtrack and resume the automation at the coaching notes email if I responded to that email. So I'll give a super concrete example. This is an actual coaching feedback that I received from Lindy. She was like, hey, this was a sales call I had with a customer. And she was like, I found your explanation of Lindy too technical. And I was able to follow up and just ask a follow-up question in the thread here. And I was like, why did you find too technical about my explanation? And Lindy restored the context. And so she basically picked up the automation back up here in the tree. And she has all of the context of everything that happened, including the meeting in which I was. So she was like, oh, you used the words deterministic and context window and agent state. And that concept exists at every level for every channel and every action that Lindy takes. So another example here is, I mentioned she also disseminates the notes on Slack. So this was a meeting where I was not, right? So this was a teammate. He's an indie meeting recorder, posts the meeting notes in this customer discovery channel on Slack. So you can see, okay, this is the onboarding call we had. This was the use case. Look at the questions. How do I make Lindy slower? How do I add delays to make Lindy slower? And I was able, in the Slack thread, to ask follow-up questions like, oh, what did we answer to these questions? And it's really handy because I know I can have this sort of interactive Q&A with these meetings. It means that very often now, I don't go to meetings anymore. I just send my Lindy. And instead of going to like a 60-minute meeting, I have like a five-minute chat with my Lindy afterwards. And she just replied. She was like, well, this is what we replied to this customer. And I can just be like, okay, good job, Jack. Like, no notes about your answers. So that's the kind of use cases people have with Lindy. It's a lot of like, there's a lot of sales automations, customer support automations, and a lot of this, which is basically personal assistance automations, like meeting scheduling and so forth.Alessio [00:13:21]: Yeah, and I think the question that people might have is memory. So as you get coaching, how does it track whether or not you're improving? You know, if these are like mistakes you made in the past, like, how do you think about that?Flo [00:13:31]: Yeah, we have a memory module. So I'll show you my meeting scheduler, Lindy, which has a lot of memories because by now I've used her for so long. And so every time I talk to her, she saves a memory. If I tell her, you screwed up, please don't do this. So you can see here, oh, it's got a double memory here. This is the meeting link I have, or this is the address of the office. If I tell someone to meet me at home, this is the address of my place. This is the code. I guess we'll have to edit that out. This is not the code of my place. No dogs. Yeah, so Lindy can just manage her own memory and decide when she's remembering things between executions. Okay.Swyx [00:14:11]: I mean, I'm just going to take the opportunity to ask you, since you are the creator of this thing, how come there's so few memories, right? Like, if you've been using this for two years, there should be thousands of thousands of things. That is a good question.Flo [00:14:22]: Agents still get confused if they have too many memories, to my point earlier about that. So I just am out of a call with a member of the Lama team at Meta, and we were chatting about Lindy, and we were going into the system prompt that we sent to Lindy, and all of that stuff. And he was amazed, and he was like, it's a miracle that it's working, guys. He was like, this kind of system prompt, this does not exist, either pre-training or post-training. These models were never trained to do this kind of stuff. It's a miracle that they can be agents at all. And so what I do, I actually prune the memories. You know, it's actually something I've gotten into the habit of doing from back when we had GPT 3.5, being Lindy agents. I suspect it's probably not as necessary in the Cloud 3.5 Sunette days, but I prune the memories. Yeah, okay.Swyx [00:15:05]: The reason is because I have another assistant that also is recording and trying to come up with facts about me. It comes up with a lot of trivial, useless facts that I... So I spend most of my time pruning. Actually, it's not super useful. I'd much rather have high-quality facts that it accepts. Or maybe I was even thinking, were you ever tempted to add a wake word to only memorize this when I say memorize this? And otherwise, don't even bother.Flo [00:15:30]: I have a Lindy that does this. So this is my inbox processor, Lindy. It's kind of beefy because there's a lot of different emails. But somewhere in here,Swyx [00:15:38]: there is a rule where I'm like,Flo [00:15:39]: aha, I can email my inbox processor, Lindy. It's really handy. So she has her own email address. And so when I process my email inbox, I sometimes forward an email to her. And it's a newsletter, or it's like a cold outreach from a recruiter that I don't care about, or anything like that. And I can give her a rule. And I can be like, hey, this email I want you to archive, moving forward. Or I want you to alert me on Slack when I have this kind of email. It's really important. And so you can see here, the prompt is, if I give you a rule about a kind of email, like archive emails from X, save it as a new memory. And I give it to the memory saving skill. And yeah.Swyx [00:16:13]: One thing that just occurred to me, so I'm a big fan of virtual mailboxes. I recommend that everybody have a virtual mailbox. You could set up a physical mail receive thing for Lindy. And so then Lindy can process your physical mail.Flo [00:16:26]: That's actually a good idea. I actually already have something like that. I use like health class mail. Yeah. So yeah, most likely, I can process my physical mail. Yeah.Swyx [00:16:35]: And then the other product's idea I have, looking at this thing, is people want to brag about the complexity of their Lindys. So this would be like a 65 point Lindy, right?Flo [00:16:43]: What's a 65 point?Swyx [00:16:44]: Complexity counting. Like how many nodes, how many things, how many conditions, right? Yeah.Flo [00:16:49]: This is not the most complex one. I have another one. This designer recruiter here is kind of beefy as well. Right, right, right. So I'm just saying,Swyx [00:16:56]: let people brag. Let people be super users. Oh, right.Flo [00:16:59]: Give them a score. Give them a score.Swyx [00:17:01]: Then they'll just be like, okay, how high can you make this score?Flo [00:17:04]: Yeah, that's a good point. And I think that's, again, the beauty of this on-rails phenomenon. It's like, think of the equivalent, the prompt equivalent of this Lindy here, for example, that we're looking at. It'd be monstrous. And the odds that it gets it right are so low. But here, because we're really holding the agent's hand step by step by step, it's actually super reliable. Yeah.Swyx [00:17:22]: And is it all structured output-based? Yeah. As far as possible? Basically. Like, there's no non-structured output?Flo [00:17:27]: There is. So, for example, here, this AI agent step, right, or this send message step, sometimes it gets to... That's just plain text.Swyx [00:17:35]: That's right.Flo [00:17:36]: Yeah. So I'll give you an example. Maybe it's TMI. I'm having blood pressure issues these days. And so this Lindy here, I give it my blood pressure readings, and it updates a log that I have of my blood pressure that it sends to my doctor.Swyx [00:17:49]: Oh, so every Lindy comes with a to-do list?Flo [00:17:52]: Yeah. Every Lindy has its own task history. Huh. Yeah. And so you can see here, this is my main Lindy, my personal assistant, and I've told it, where is this? There is a point where I'm like, if I am giving you a health-related fact, right here, I'm giving you health information, so then you update this log that I have in this Google Doc, and then you send me a message. And you can see, I've actually not configured this send message node. I haven't told it what to send me a message for. Right? And you can see, it's actually lecturing me. It's like, I'm giving it my blood pressure ratings. It's like, hey, it's a bit high. Here are some lifestyle changes you may want to consider.Alessio [00:18:27]: I think maybe this is the most confusing or new thing for people. So even I use Lindy and I didn't even know you could have multiple workflows in one Lindy. I think the mental model is kind of like the Zapier workflows. It starts and it ends. It doesn't choose between. How do you think about what's a Lindy versus what's a sub-function of a Lindy? Like, what's the hierarchy?Flo [00:18:48]: Yeah. Frankly, I think the line is a little arbitrary. It's kind of like when you code, like when do you start to create a new class versus when do you overload your current class. I think of it in terms of like jobs to be done and I think of it in terms of who is the Lindy serving. This Lindy is serving me personally. It's really my day-to-day Lindy. I give it a bunch of stuff, like very easy tasks. And so this is just the Lindy I go to. Sometimes when a task is really more specialized, so for example, I have this like summarizer Lindy or this designer recruiter Lindy. These tasks are really beefy. I wouldn't want to add this to my main Lindy, so I just created a separate Lindy for it. Or when it's a Lindy that serves another constituency, like our customer support Lindy, I don't want to add that to my personal assistant Lindy. These are two very different Lindys.Alessio [00:19:31]: And you can call a Lindy from within another Lindy. That's right. You can kind of chain them together.Flo [00:19:36]: Lindys can work together, absolutely.Swyx [00:19:38]: A couple more things for the video portion. I noticed you have a podcast follower. We have to ask about that. What is that?Flo [00:19:46]: So this one wakes me up every... So wakes herself up every week. And she sends me... So she woke up yesterday, actually. And she searches for Lenny's podcast. And she looks for like the latest episode on YouTube. And once she finds it, she transcribes the video and then she sends me the summary by email. I don't listen to podcasts as much anymore. I just like read these summaries. Yeah.Alessio [00:20:09]: We should make a latent space Lindy. Marketplace.Swyx [00:20:12]: Yeah. And then you have a whole bunch of connectors. I saw the list briefly. Any interesting one? Complicated one that you're proud of? Anything that you want to just share? Connector stories.Flo [00:20:23]: So many of our workflows are about meeting scheduling. So we had to build some very open unity tools around meeting scheduling. So for example, one that is surprisingly hard is this find available times action. You would not believe... This is like a thousand lines of code or something. It's just a very beefy action. And you can pass it a bunch of parameters about how long is the meeting? When does it start? When does it end? What are the meetings? The weekdays in which I meet? How many time slots do you return? What's the buffer between my meetings? It's just a very, very, very complex action. I really like our GitHub action. So we have a Lindy PR reviewer. And it's really handy because anytime any bug happens... So the Lindy reads our guidelines on Google Docs. By now, the guidelines are like 40 pages long or something. And so every time any new kind of bug happens, we just go to the guideline and we add the lines. Like, hey, this has happened before. Please watch out for this category of bugs. And it's saving us so much time every day.Alessio [00:21:19]: There's companies doing PR reviews. Where does a Lindy start? When does a company start? Or maybe how do you think about the complexity of these tasks when it's going to be worth having kind of like a vertical standalone company versus just like, hey, a Lindy is going to do a good job 99% of the time?Flo [00:21:34]: That's a good question. We think about this one all the time. I can't say that we've really come up with a very crisp articulation of when do you want to use a vertical tool versus when do you want to use a horizontal tool. I think of it as very similar to the internet. I find it surprising the extent to which a horizontal search engine has won. But I think that Google, right? But I think the even more surprising fact is that the horizontal search engine has won in almost every vertical, right? You go through Google to search Reddit. You go through Google to search Wikipedia. I think maybe the biggest exception is e-commerce. Like you go to Amazon to search e-commerce, but otherwise you go through Google. And I think that the reason for that is because search in each vertical has more in common with search than it does with each vertical. And search is so expensive to get right. Like Google is a big company that it makes a lot of sense to aggregate all of these different use cases and to spread your R&D budget across all of these different use cases. I have a thesis, which is, it's a really cool thesis for Lindy, is that the same thing is true for agents. I think that by and large, in a lot of verticals, agents in each vertical have more in common with agents than they do with each vertical. I also think there are benefits in having a single agent platform because that way your agents can work together. They're all like under one roof. That way you only learn one platform and so you can create agents for everything that you want. And you don't have to like pay for like a bunch of different platforms and so forth. So I think ultimately, it is actually going to shake out in a way that is similar to search in that search is everywhere on the internet. Every website has a search box, right? So there's going to be a lot of vertical agents for everything. I think AI is going to completely penetrate every category of software. But then I also think there are going to be a few very, very, very big horizontal agents that serve a lot of functions for people.Swyx [00:23:14]: That is actually one of the questions that we had about the agent stuff. So I guess we can transition away from the screen and I'll just ask the follow-up, which is, that is a hot topic. You're basically saying that the current VC obsession of the day, which is vertical AI enabled SaaS, is mostly not going to work out. And then there are going to be some super giant horizontal SaaS.Flo [00:23:34]: Oh, no, I'm not saying it's either or. Like SaaS today, vertical SaaS is huge and there's also a lot of horizontal platforms. If you look at like Airtable or Notion, basically the entire no-code space is very horizontal. I mean, Loom and Zoom and Slack, there's a lot of very horizontal tools out there. Okay.Swyx [00:23:49]: I was just trying to get a reaction out of you for hot takes. Trying to get a hot take.Flo [00:23:54]: No, I also think it is natural for the vertical solutions to emerge first because it's just easier to build. It's just much, much, much harder to build something horizontal. Cool.Swyx [00:24:03]: Some more Lindy-specific questions. So we covered most of the top use cases and you have an academy. That was nice to see. I also see some other people doing it for you for free. So like Ben Spites is doing it and then there's some other guy who's also doing like lessons. Yeah. Which is kind of nice, right? Yeah, absolutely. You don't have to do any of that.Flo [00:24:20]: Oh, we've been seeing it more and more on like LinkedIn and Twitter, like people posting their Lindys and so forth.Swyx [00:24:24]: I think that's the flywheel that you built the platform where creators see value in allying themselves to you. And so then, you know, your incentive is to make them successful so that they can make other people successful and then it just drives more and more engagement. Like it's earned media. Like you don't have to do anything.Flo [00:24:39]: Yeah, yeah. I mean, community is everything.Swyx [00:24:41]: Are you doing anything special there? Any big wins?Flo [00:24:44]: We have a Slack community that's pretty active. I can't say we've invested much more than that so far.Swyx [00:24:49]: I would say from having, so I have some involvement in the no-code community. I would say that Webflow going very hard after no-code as a category got them a lot more allies than just the people using Webflow. So it helps you to grow the community beyond just Lindy. And I don't know what this is called. Maybe it's just no-code again. Maybe you want to call it something different. But there's definitely an appetite for this and you are one of a broad category, right? Like just before you, we had Dust and, you know, they're also kind of going after a similar market. Zapier obviously is not going to try to also compete with you. Yeah. There's no question there. It's just like a reaction about community. Like I think a lot about community. Lanespace is growing the community of AI engineers. And I think you have a slightly different audience of, I don't know what.Flo [00:25:33]: Yeah. I think the no-code tinkerers is the community. Yeah. It is going to be the same sort of community as what Webflow, Zapier, Airtable, Notion to some extent.Swyx [00:25:43]: Yeah. The framing can be different if you were, so I think tinkerers has this connotation of not serious or like small. And if you framed it to like no-code EA, we're exclusively only for CEOs with a certain budget, then you just have, you tap into a different budget.Flo [00:25:58]: That's true. The problem with EA is like, the CEO has no willingness to actually tinker and play with the platform.Swyx [00:26:05]: Maybe Andrew's doing that. Like a lot of your biggest advocates are CEOs, right?Flo [00:26:09]: A solopreneur, you know, small business owners, I think Andrew is an exception. Yeah. Yeah, yeah, he is.Swyx [00:26:14]: He's an exception in many ways. Yep.Alessio [00:26:16]: Just before we wrap on the use cases, is Rick rolling your customers? Like a officially supported use case or maybe tell that story?Flo [00:26:24]: It's one of the main jobs to be done, really. Yeah, we woke up recently, so we have a Lindy obviously doing our customer support and we do check after the Lindy. And so we caught this email exchange where someone was asking Lindy for video tutorials. And at the time, actually, we did not have video tutorials. We do now on the Lindy Academy. And Lindy responded to the email. It's like, oh, absolutely, here's a link. And we were like, what? Like, what kind of link did you send? And so we clicked on the link and it was a recall. We actually reacted fast enough that the customer had not yet opened the email. And so we reacted immediately. Like, oh, hey, actually, sorry, this is the right link. And so the customer never reacted to the first link. And so, yeah, I tweeted about that. It went surprisingly viral. And I checked afterwards in the logs. We did like a database query and we found, I think, like three or four other instances of it having happened before.Swyx [00:27:12]: That's surprisingly low.Flo [00:27:13]: It is low. And we fixed it across the board by just adding a line to the system prompt that's like, hey, don't recall people, please don't recall.Swyx [00:27:21]: Yeah, yeah, yeah. I mean, so, you know, you can explain it retroactively, right? Like, that YouTube slug has been pasted in so many different corpuses that obviously it learned to hallucinate that.Alessio [00:27:31]: And it pretended to be so many things. That's the thing.Swyx [00:27:34]: I wouldn't be surprised if that takes one token. Like, there's this one slug in the tokenizer and it's just one token.Flo [00:27:41]: That's the idea of a YouTube video.Swyx [00:27:43]: Because it's used so much, right? And you have to basically get it exactly correct. It's probably not. That's a long speech.Flo [00:27:52]: It would have been so good.Alessio [00:27:55]: So this is just a jump maybe into evals from here. How could you possibly come up for an eval that says, make sure my AI does not recall my customer? I feel like when people are writing evals, that's not something that they come up with. So how do you think about evals when it's such like an open-ended problem space?Flo [00:28:12]: Yeah, it is tough. We built quite a bit of infrastructure for us to create evals in one click from any conversation history. So we can point to a conversation and we can be like, in one click we can turn it into effectively a unit test. It's like, this is a good conversation. This is how you're supposed to handle things like this. Or if it's a negative example, then we modify a little bit the conversation after generating the eval. So it's very easy for us to spin up this kind of eval.Alessio [00:28:36]: Do you use an off-the-shelf tool which is like Brain Trust on the podcast? Or did you just build your own?Flo [00:28:41]: We unfortunately built our own. We're most likely going to switch to Brain Trust. Well, when we built it, there was nothing. Like there was no eval tool, frankly. I mean, we started this project at the end of 2022. It was like, it was very, very, very early. I wouldn't recommend it to build your own eval tool. There's better solutions out there and our eval tool breaks all the time and it's a nightmare to maintain. And that's not something we want to be spending our time on.Swyx [00:29:04]: I was going to ask that basically because I think my first conversations with you about Lindy was that you had a strong opinion that everyone should build their own tools. And you were very proud of your evals. You're kind of showing off to me like how many evals you were running, right?Flo [00:29:16]: Yeah, I think that was before all of these tools came around. I think the ecosystem has matured a fair bit.Swyx [00:29:21]: What is one thing that Brain Trust has nailed that you always struggled to do?Flo [00:29:25]: We're not using them yet, so I couldn't tell. But from what I've gathered from the conversations I've had, like they're doing what we do with our eval tool, but better.Swyx [00:29:33]: And like they do it, but also like 60 other companies do it, right? So I don't know how to shop apart from brand. Word of mouth.Flo [00:29:41]: Same here.Swyx [00:29:42]: Yeah, like evals or Lindys, there's two kinds of evals, right? Like in some way, you don't have to eval your system as much because you've constrained the language model so much. And you can rely on open AI to guarantee that the structured outputs are going to be good, right? We had Michelle sit where you sit and she explained exactly how they do constraint grammar sampling and all that good stuff. So actually, I think it's more important for your customers to eval their Lindys than you evaling your Lindy platform because you just built the platform. You don't actually need to eval that much.Flo [00:30:14]: Yeah. In an ideal world, our customers don't need to care about this. And I think the bar is not like, look, it needs to be at 100%. I think the bar is it needs to be better than a human. And for most use cases we serve today, it is better than a human, especially if you put it on Rails.Swyx [00:30:30]: Is there a limiting factor of Lindy at the business? Like, is it adding new connectors? Is it adding new node types? Like how do you prioritize what is the most impactful to your company?Flo [00:30:41]: Yeah. The raw capabilities for sure are a big limit. It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small. It's kind of insane that we started building this when the context windows were like 4,000 tokens. Like today, our system prompt is more than 4,000 tokens. So yeah, the model is actually very much not a limit anymore. It almost gives me pause because I'm like, I want the model to be a limit. And so no, the integrations are ones, the core capabilities are ones. So for example, we are investing in a system that's basically, I call it like the, it's a J hack. Give me these names, like the poor man's RLHF. So you can turn on a toggle on any step of your Lindy workflow to be like, ask me for confirmation before you actually execute this step. So it's like, hey, I receive an email, you send a reply, ask me for confirmation before actually sending it. And so today you see the email that's about to get sent and you can either approve, deny, or change it and then approve. And we are making it so that when you make a change, we are then saving this change that you're making or embedding it in the vector database. And then we are retrieving these examples for future tasks and injecting them into the context window. So that's the kind of capability that makes a huge difference for users. That's the bottleneck today. It's really like good old engineering and product work.Swyx [00:31:52]: I assume you're hiring. We'll do a call for hiring at the end.Alessio [00:31:54]: Any other comments on the model side? When did you start feeling like the model was not a bottleneck anymore? Was it 4.0? Was it 3.5? 3.5.Flo [00:32:04]: 3.5 Sonnet, definitely. I think 4.0 is overhyped, frankly. We don't use 4.0. I don't think it's good for agentic behavior. Yeah, 3.5 Sonnet is when I started feeling that. And then with prompt caching with 3.5 Sonnet, like that fills the cost, cut the cost again. Just cut it in half. Yeah.Swyx [00:32:21]: Your prompts are... Some of the problems with agentic uses is that your prompts are kind of dynamic, right? Like from caching to work, you need the front prefix portion to be stable.Flo [00:32:32]: Yes, but we have this append-only ledger paradigm. So every node keeps appending to that ledger and every filled node inherits all the context built up by all the previous nodes. And so we can just decide, like, hey, every X thousand nodes, we trigger prompt caching again.Swyx [00:32:47]: Oh, so you do it like programmatically, not all the time.Flo [00:32:50]: No, sorry. Anthropic manages that for us. But basically, it's like, because we keep appending to the prompt, the prompt caching works pretty well.Alessio [00:32:57]: We have this small podcaster tool that I built for the podcast and I rewrote all of our prompts because I noticed, you know, I was inputting stuff early on. I wonder how much more money OpenAN and Anthropic are making just because people don't rewrite their prompts to be like static at the top and like dynamic at the bottom.Flo [00:33:13]: I think that's the remarkable thing about what we're having right now. It's insane that these companies are routinely cutting their costs by two, four, five. Like, they basically just apply constraints. They want people to take advantage of these innovations. Very good.Swyx [00:33:25]: Do you have any other competitive commentary? Commentary? Dust, WordWare, Gumloop, Zapier? If not, we can move on.Flo [00:33:31]: No comment.Alessio [00:33:32]: I think the market is,Flo [00:33:33]: look, I mean, AGI is coming. All right, that's what I'm talking about.Swyx [00:33:38]: I think you're helping. Like, you're paving the road to AGI.Flo [00:33:41]: I'm playing my small role. I'm adding my small brick to this giant, giant, giant castle. Yeah, look, when it's here, we are going to, this entire category of software is going to create, it's going to sound like an exaggeration, but it is a fact it is going to create trillions of dollars of value in a few years, right? It's going to, for the first time, we're actually having software directly replace human labor. I see it every day in sales calls. It's like, Lindy is today replacing, like, we talk to even small teams. It's like, oh, like, stop, this is a 12-people team here. I guess we'll set up this Lindy for one or two days, and then we'll have to decide what to do with this 12-people team. And so, yeah. To me, there's this immense uncapped market opportunity. It's just such a huge ocean, and there's like three sharks in the ocean. I'm focused on the ocean more than on the sharks.Swyx [00:34:25]: So we're moving on to hot topics, like, kind of broadening out from Lindy, but obviously informed by Lindy. What are the high-order bits of good agent design?Flo [00:34:31]: The model, the model, the model, the model. I think people fail to truly, and me included, they fail to truly internalize the bitter lesson. So for the listeners out there who don't know about it, it's basically like, you just scale the model. Like, GPUs go brr, it's all that matters. I think it also holds for the cognitive architecture. I used to be very cognitive architecture-filled, and I was like, ah, and I was like a critic, and I was like a generator, and all this, and then it's just like, GPUs go brr, like, just like let the model do its job. I think we're seeing it a little bit right now with O1. I'm seeing some tweets that say that the new 3.5 SONNET is as good as O1, but with none of all the crazy...Swyx [00:35:09]: It beats O1 on some measures. On some reasoning tasks. On AIME, it's still a lot lower. Like, it's like 14 on AIME versus O1, it's like 83.Flo [00:35:17]: Got it. Right. But even O1 is still the model. Yeah.Swyx [00:35:22]: Like, there's no cognitive architecture on top of it.Flo [00:35:23]: You can just wait for O1 to get better.Alessio [00:35:25]: And so, as a founder, how do you think about that, right? Because now, knowing this, wouldn't you just wait to start Lindy? You know, you start Lindy, it's like 4K context, the models are not that good. It's like, but you're still kind of like going along and building and just like waiting for the models to get better. How do you today decide, again, what to build next, knowing that, hey, the models are going to get better, so maybe we just shouldn't focus on improving our prompt design and all that stuff and just build the connectors instead or whatever? Yeah.Flo [00:35:51]: I mean, that's exactly what we do. Like, all day, we always ask ourselves, oh, when we have a feature idea or a feature request, we ask ourselves, like, is this the kind of thing that just gets better while we sleep because models get better? I'm reminded, again, when we started this in 2022, we spent a lot of time because we had to around context pruning because 4,000 tokens is really nothing. You really can't do anything with 4,000 tokens. All that work was throwaway work. Like, now it's like it was for nothing, right? Now we just assume that infinite context windows are going to be here in a year or something, a year and a half, and infinitely cheap as well, and dynamic compute is going to be here. Like, we just assume all of these things are going to happen, and so we really focus, our job to be done in the industry is to provide the input and output to the model. I really compare it all the time to the PC and the CPU, right? Apple is busy all day. They're not like a CPU wrapper. They have a lot to build, but they don't, well, now actually they do build the CPU as well, but leaving that aside, they're busy building a laptop. It's just a lot of work to build these things. It's interesting because, like,Swyx [00:36:45]: for example, another person that we're close to, Mihaly from Repl.it, he often says that the biggest jump for him was having a multi-agent approach, like the critique thing that you just said that you don't need, and I wonder when, in what situations you do need that and what situations you don't. Obviously, the simple answer is for coding, it helps, and you're not coding, except for, are you still generating code? In Indy? Yeah.Flo [00:37:09]: No, we do. Oh, right. No, no, no, the cognitive architecture changed. We don't, yeah.Swyx [00:37:13]: Yeah, okay. For you, you're one shot, and you chain tools together, and that's it. And if the user really wantsFlo [00:37:18]: to have this kind of critique thing, you can also edit the prompt, you're welcome to. I have some of my Lindys, I've told them, like, hey, be careful, think step by step about what you're about to do, but that gives you a little bump for some use cases, but, yeah.Alessio [00:37:30]: What about unexpected model releases? So, Anthropic released computer use today. Yeah. I don't know if many people were expecting computer use to come out today. Do these things make you rethink how to design, like, your roadmap and things like that, or are you just like, hey, look, whatever, that's just, like, a small thing in their, like, AGI pursuit, that, like, maybe they're not even going to support, and, like, it's still better for us to build our own integrations into systems and things like that. Because maybe people will say, hey, look, why am I building all these API integrationsFlo [00:38:02]: when I can just do computer use and never go to the product? Yeah. No, I mean, we did take into account computer use. We were talking about this a year ago or something, like, we've been talking about it as part of our roadmap. It's been clear to us that it was coming, My philosophy about it is anything that can be done with an API must be done by an API or should be done by an API for a very long time. I think it is dangerous to be overly cavalier about improvements of model capabilities. I'm reminded of iOS versus Android. Android was built on the JVM. There was a garbage collector, and I can only assume that the conversation that went down in the engineering meeting room was, oh, who cares about the garbage collector? Anyway, Moore's law is here, and so that's all going to go to zero eventually. Sure, but in the meantime, you are operating on a 400 MHz CPU. It was like the first CPU on the iPhone 1, and it's really slow, and the garbage collector is introducing a tremendous overhead on top of that, especially a memory overhead. For the longest time, and it's really only been recently that Android caught up to iOS in terms of how smooth the interactions were, but for the longest time, Android phones were significantly slowerSwyx [00:39:07]: and laggierFlo [00:39:08]: and just not feeling as good as iOS devices. Look, when you're talking about modules and magnitude of differences in terms of performance and reliability, which is what we are talking about when we're talking about API use versus computer use, then you can't ignore that, right? And so I think we're going to be in an API use world for a while.Swyx [00:39:27]: O1 doesn't have API use today. It will have it at some point, and it's on the roadmap. There is a future in which OpenAI goes much harder after your business, your market, than it is today. Like, ChatGPT, it's its own business. All they need to do is add tools to the ChatGPT, and now they're suddenly competing with you. And by the way, they have a GPT store where a bunch of people have already configured their tools to fit with them. Is that a concern?Flo [00:39:56]: I think even the GPT store, in a way, like the way they architect it, for example, their plug-in systems are actually grateful because we can also use the plug-ins. It's very open. Now, again, I think it's going to be such a huge market. I think there's going to be a lot of different jobs to be done. I know they have a huge enterprise offering and stuff, but today, ChatGPT is a consumer app. And so, the sort of flow detail I showed you, this sort of workflow, this sort of use cases that we're going after, which is like, we're doing a lot of lead generation and lead outreach and all of that stuff. That's not something like meeting recording, like Lindy Today right now joins your Zoom meetings and takes notes, all of that stuff.Swyx [00:40:34]: I don't see that so farFlo [00:40:35]: on the OpenAI roadmap.Swyx [00:40:36]: Yeah, but they do have an enterprise team that we talk to You're hiring GMs?Flo [00:40:42]: We did.Swyx [00:40:43]: It's a fascinating way to build a business, right? Like, what should you, as CEO, be in charge of? And what should you basically hireFlo [00:40:52]: a mini CEO to do? Yeah, that's a good question. I think that's also something we're figuring out. The GM thing was inspired from my days at Uber, where we hired one GM per city or per major geo area. We had like all GMs, regional GMs and so forth. And yeah, Lindy is so horizontal that we thought it made sense to hire GMs to own each vertical and the go-to market of the vertical and the customization of the Lindy templates for these verticals and so forth. What should I own as a CEO? I mean, the canonical reply here is always going to be, you know, you own the fundraising, you own the culture, you own the... What's the rest of the canonical reply? The culture, the fundraising.Swyx [00:41:29]: I don't know,Flo [00:41:30]: products. Even that, eventually, you do have to hand out. Yes, the vision, the culture, and the foundation. Well, you've done your job as a CEO. In practice, obviously, yeah, I mean, all day, I do a lot of product work still and I want to keep doing product work for as long as possible.Swyx [00:41:48]: Obviously, like you're recording and managing the team. Yeah.Flo [00:41:52]: That one feels like the most automatable part of the job, the recruiting stuff.Swyx [00:41:56]: Well, yeah. You saw myFlo [00:41:59]: design your recruiter here. Relationship between Factorio and building Lindy. We actually very often talk about how the business of the future is like a game of Factorio. Yeah. So, in the instance, it's like Slack and you've got like 5,000 Lindys in the sidebar and your job is to somehow manage your 5,000 Lindys. And it's going to be very similar to company building because you're going to look for like the highest leverage way to understand what's going on in your AI company and understand what levels do you have to make impact in that company. So, I think it's going to be very similar to like a human company except it's going to go infinitely faster. Today, in a human company, you could have a meeting with your team and you're like, oh, I'm going to build a facility and, you know, now it's like, okay,Swyx [00:42:40]: boom, I'm going to spin up 50 designers. Yeah. Like, actually, it's more important that you can clone an existing designer that you know works because the hiring process, you cannot clone someone because every new person you bring in is going to have their own tweaksFlo [00:42:54]: and you don't want that. Yeah.Swyx [00:42:56]: That's true. You want an army of mindless dronesFlo [00:42:59]: that all work the same way.Swyx [00:43:00]: The reason I bring this, bring Factorio up as well is one, Factorio Space just came out. Apparently, a whole bunch of people stopped working. I tried out Factorio. I never really got that much into it. But the other thing was, you had a tweet recently about how the sort of intentional top-down design was not as effective as just build. Yeah. Just ship.Flo [00:43:21]: I think people read a little bit too much into that tweet. It went weirdly viral. I was like, I did not intend it as a giant statement online.Swyx [00:43:28]: I mean, you notice you have a pattern with this, right? Like, you've done this for eight years now.Flo [00:43:33]: You should know. I legit was just hearing an interesting story about the Factorio game I had. And everybody was like, oh my God, so deep. I guess this explains everything about life and companies. There is something to be said, certainly, about focusing on the constraint. And I think it is Patrick Collison who said, people underestimate the extent to which moonshots are just one pragmatic step taken after the other. And I think as long as you have some inductive bias about, like, some loose idea about where you want to go, I think it makes sense to follow a sort of greedy search along that path. I think planning and organizing is important. And having older is important.Swyx [00:44:05]: I'm wrestling with that. There's two ways I encountered it recently. One with Lindy. When I tried out one of your automation templates and one of them was quite big and I just didn't understand it, right? So, like, it was not as useful to me as a small one that I can just plug in and see all of. And then the other one was me using Cursor. I was very excited about O1 and I just up frontFlo [00:44:27]: stuffed everythingSwyx [00:44:28]: I wanted to do into my prompt and expected O1 to do everything. And it got itself into a huge jumbled mess and it was stuck. It was really... There was no amount... I wasted, like, two hours on just, like, trying to get out of that hole. So I threw away the code base, started small, switched to Clouds on it and build up something working and just add it over time and it just worked. And to me, that was the factorial sentiment, right? Maybe I'm one of those fanboys that's just, like, obsessing over the depth of something that you just randomly tweeted out. But I think it's true for company building, for Lindy building, for coding.Flo [00:45:02]: I don't know. I think it's fair and I think, like, you and I talked about there's the Tuft & Metal principle and there's this other... Yes, I love that. There's the... I forgot the name of this other blog post but it's basically about this book Seeing Like a State that talks about the need for legibility and people who optimize the system for its legibility and anytime you make a system... So legible is basically more understandable. Anytime you make a system more understandable from the top down, it performs less well from the bottom up. And it's fine but you should at least make this trade-off with your eyes wide open. You should know, I am sacrificing performance for understandability, for legibility. And in this case, for you, it makes sense. It's like you are actually optimizing for legibility. You do want to understand your code base but in some other cases it may not make sense. Sometimes it's better to leave the system alone and let it be its glorious, chaotic, organic self and just trust that it's going to perform well even though you don't understand it completely.Swyx [00:45:55]: It does remind me of a common managerial issue or dilemma which you experienced in the small scale of Lindy where, you know, do you want to organize your company by functional sections or by products or, you know, whatever the opposite of functional is. And you tried it one way and it was more legible to you as CEO but actually it stopped working at the small level. Yeah.Flo [00:46:17]: I mean, one very small example, again, at a small scale is we used to have everything on Notion. And for me, as founder, it was awesome because everything was there. The roadmap was there. The tasks were there. The postmortems were there. And so, the postmortem was linkedSwyx [00:46:31]: to its task.Flo [00:46:32]: It was optimized for you. Exactly. And so, I had this, like, one pane of glass and everything was on Notion. And then the team, one day,Swyx [00:46:39]: came to me with pitchforksFlo [00:46:40]: and they really wanted to implement Linear. And I had to bite my fist so hard. I was like, fine, do it. Implement Linear. Because I was like, at the end of the day, the team needs to be able to self-organize and pick their own tools.Alessio [00:46:51]: Yeah. But it did make the company slightly less legible for me. Another big change you had was going away from remote work, every other month. The discussion comes up again. What was that discussion like? How did your feelings change? Was there kind of like a threshold of employees and team size where you felt like, okay, maybe that worked. Now it doesn't work anymore. And how are you thinking about the futureFlo [00:47:12]: as you scale the team? Yeah. So, for context, I used to have a business called TeamFlow. The business was about building a virtual office for remote teams. And so, being remote was not merely something we did. It was, I was banging the remote drum super hard and helping companies to go remote. And so, frankly, in a way, it's a bit embarrassing for me to do a 180 like that. But I guess, when the facts changed, I changed my mind. What happened? Well, I think at first, like everyone else, we went remote by necessity. It was like COVID and you've got to go remote. And on paper, the gains of remote are enormous. In particular, from a founder's standpoint, being able to hire from anywhere is huge. Saving on rent is huge. Saving on commute is huge for everyone and so forth. But then, look, we're all here. It's like, it is really making it much harder to work together. And I spent three years of my youth trying to build a solution for this. And my conclusion is, at least we couldn't figure it out and no one else could. Zoom didn't figure it out. We had like a bunch of competitors. Like, Gathertown was one of the bigger ones. We had dozens and dozens of competitors. No one figured it out. I don't know that software can actually solve this problem. The reality of it is, everyone just wants to get off the darn Zoom call. And it's not a good feeling to be in your home office if you're even going to have a home office all day. It's harder to build culture. It's harder to get in sync. I think software is peculiar because it's like an iceberg. It's like the vast majority of it is submerged underwater. And so, the quality of the software that you ship is a function of the alignment of your mental models about what is below that waterline. Can you actually get in sync about what it is exactly fundamentally that we're building? What is the soul of our product? And it is so much harder to get in sync about that when you're remote. And then you waste time in a thousand ways because people are offline and you can't get a hold of them or you can't share your screen. It's just like you feel like you're walking in molasses all day. And eventually, I was like, okay, this is it. We're not going to do this anymore.Swyx [00:49:03]: Yeah. I think that is the current builder San Francisco consensus here. Yeah. But I still have a big... One of my big heroes as a CEO is Sid Subban from GitLab.Flo [00:49:14]: Mm-hmm.Swyx [00:49:15]: Matt MullenwegFlo [00:49:16]: used to be a hero.Swyx [00:49:17]: But these people run thousand-person remote businesses. The main idea is that at some company
https://linktr.ee/scrubmode Today we talk about Gibbering Mouthers, a montrous tool box with a ton of cool abilities. Then we talk about their Lovecraftian inspiration, dunk on HPL, and voice our support for Shoggoth freedom. Plus, the Indie reverse-horror game Carrion. Next we talk about Gnolls, why you shouldn't sell rope to them, the voice stealing Leu/crocutta, tar divers, and real life Dire animals. At the Mountains of Madness, https://gutenberg.org/ebooks/70652 https://www.cnbc.com/2023/06/12/lovecraft-joshi-shoggoth-ai-meme.html https://d-infinity.net/posts/fiction/man-who-sold-rope-gnoles The Man Who Sold Rope to the Gnoles, by Margaret St. Clair, as found in the following collection: https://mitpress.mit.edu/9781907222740/appendix-n/ https://www.amazon.com/Appendix-Eldritch-Roots-Dungeons-Dragons/dp/190722274X https://www.theoi.com/Thaumasios/Leukrokotai.html http://dnd.etherealspheres.com/eBooks/DnD_3.5/Faerun%20Setting/11832%20-%20Monster%20Compendium%20-%20Monsters%20of%20Faerun.pdf https://en.wikipedia.org/wiki/Crocotta https://abookofcreatures.com/2021/02/22/corocotta/ And the 5th ed Monster Manual. The Views of HP Lovecraft do not reflect the views of the podcast.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm), published by Ruby on April 23, 2024 on LessWrong. For the last month, @RobertM and I have been exploring the possible use of recommender systems on LessWrong. Today we launched our first site-wide experiment in that direction. (In the course of our efforts, we also hit upon a frontpage refactor that we reckon is pretty good: tabs instead of a clutter of different sections. For now, only for logged-in users. Logged-out users see the "Latest" tab, which is the same-as-usual list of posts.) Why algorithmic recommendations? A core value of LessWrong is to be timeless and not news-driven. However, the central algorithm by which attention allocation happens on the site is the Hacker News algorithm[1], which basically only shows you things that were posted recently, and creates a strong incentive for discussion to always be centered around the latest content. This seems very sad to me. When a new user shows up on LessWrong, it seems extremely unlikely that the most important posts for them to read were all written within the last week or two. I do really like the simplicity and predictability of the Hacker News algorithm. More karma means more visibility, older means less visibility. Very simple. When I vote, I basically know the full effect this has on what is shown to other users or to myself. But I think the cost of that simplicity has become too high, especially as older content makes up a larger and larger fraction of the best content on the site, and people have been becoming ever more specialized in the research and articles they publish on the site. So we are experimenting with changing things up. I don't know whether these experiments will ultimately replace the Hacker News algorithm, but as the central attention allocation mechanism on the site, it definitely seems worth trying out and iterating on. We'll be trying out a bunch of things from reinforcement-learning based personalized algorithms, to classical collaborative filtering algorithms to a bunch of handcrafted heuristics that we'll iterate on ourselves. The Concrete Experiment Our first experiment is Recombee, a recommendations SaaS, since spinning up our RL agent pipeline would be a lot of work.We feed it user view and vote history. So far, it seems that it can be really good when it's good, often recommending posts that people are definitely into (and more so than posts in the existing feed). Unfortunately it's not reliable across users for some reason and we've struggled to get it to reliably recommend the most important recent content, which is an important use-case we still want to serve. Our current goal is to produce a recommendations feed that both makes people feel like they're keeping up to date with what's new (something many people care about) and also suggest great reads from across LessWrong's entire archive. The Recommendations tab we just launched has a feed using Recombee recommendations. We're also getting started using Google's Vertex AI offering. A very early test makes it seem possibly better than Recombee. We'll see. (Some people on the team want to try throwing relevant user history and available posts into an LLM and seeing what it recommends, though cost might be prohibitive for now.) Unless you switch to the "Recommendations" tab, nothing changes for you. "Latest" is the default tab and is using the same old HN algorithm that you are used to. I'll feel like we've succeeded when people switch to "Recommended" and tell us that they prefer it. At that point, we might make "Recommended" the default tab. Preventing Bad Outcomes I do think there are ways for recommendations to end up being pretty awful. I think many readers have encountered at least one content recommendation algorithm that isn't givi...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm), published by Ruby on April 23, 2024 on LessWrong. For the last month, @RobertM and I have been exploring the possible use of recommender systems on LessWrong. Today we launched our first site-wide experiment in that direction. (In the course of our efforts, we also hit upon a frontpage refactor that we reckon is pretty good: tabs instead of a clutter of different sections. For now, only for logged-in users. Logged-out users see the "Latest" tab, which is the same-as-usual list of posts.) Why algorithmic recommendations? A core value of LessWrong is to be timeless and not news-driven. However, the central algorithm by which attention allocation happens on the site is the Hacker News algorithm[1], which basically only shows you things that were posted recently, and creates a strong incentive for discussion to always be centered around the latest content. This seems very sad to me. When a new user shows up on LessWrong, it seems extremely unlikely that the most important posts for them to read were all written within the last week or two. I do really like the simplicity and predictability of the Hacker News algorithm. More karma means more visibility, older means less visibility. Very simple. When I vote, I basically know the full effect this has on what is shown to other users or to myself. But I think the cost of that simplicity has become too high, especially as older content makes up a larger and larger fraction of the best content on the site, and people have been becoming ever more specialized in the research and articles they publish on the site. So we are experimenting with changing things up. I don't know whether these experiments will ultimately replace the Hacker News algorithm, but as the central attention allocation mechanism on the site, it definitely seems worth trying out and iterating on. We'll be trying out a bunch of things from reinforcement-learning based personalized algorithms, to classical collaborative filtering algorithms to a bunch of handcrafted heuristics that we'll iterate on ourselves. The Concrete Experiment Our first experiment is Recombee, a recommendations SaaS, since spinning up our RL agent pipeline would be a lot of work.We feed it user view and vote history. So far, it seems that it can be really good when it's good, often recommending posts that people are definitely into (and more so than posts in the existing feed). Unfortunately it's not reliable across users for some reason and we've struggled to get it to reliably recommend the most important recent content, which is an important use-case we still want to serve. Our current goal is to produce a recommendations feed that both makes people feel like they're keeping up to date with what's new (something many people care about) and also suggest great reads from across LessWrong's entire archive. The Recommendations tab we just launched has a feed using Recombee recommendations. We're also getting started using Google's Vertex AI offering. A very early test makes it seem possibly better than Recombee. We'll see. (Some people on the team want to try throwing relevant user history and available posts into an LLM and seeing what it recommends, though cost might be prohibitive for now.) Unless you switch to the "Recommendations" tab, nothing changes for you. "Latest" is the default tab and is using the same old HN algorithm that you are used to. I'll feel like we've succeeded when people switch to "Recommended" and tell us that they prefer it. At that point, we might make "Recommended" the default tab. Preventing Bad Outcomes I do think there are ways for recommendations to end up being pretty awful. I think many readers have encountered at least one content recommendation algorithm that isn't givi...
The always entertaining Rob Poyton from the Innsmouth Book Club joins us, chatting about Lovecraft's fans in Britain, Doctor Who, True Detective, Tolkien, and even Shoggoth's Old Peculiar. Hosted by Richard Wilson, David Guffy, & Mark Griffin. Questions and comments can be directed to mark@lovecraftpod.com, david@lovecraftpod.com, or richard@lovecraftpod.com. Visit our Tee Spring site to get our logo on anything you could want. https://lovecraftpod.creator-spring.com/ In association with www.lovecraftpod.com and the Logan County Speculative Fiction Group, with help from the Logan County Public Library. Edited by Katie Tyson. Music is Provenience by Loydicus. Listen to his other work at https://soundcloud.com/loydicus?fbclid=IwAR2AkcRBiWImuUBTA9hjYdtY1s__SvxXfhcoFZANulBjbwIDN7PL6XdHDnQ Recorded live through Zoom. You can watch the recording on the Logan County Speculative Fiction Group Facebook page.
Our guest in this episode is Lou de K, Program Director at the Foresight Institute.David recently saw Lou give a marvellous talk at the TransVision conference in Utrecht in the Netherlands, on the subject of “AGI Alignment: Challenges and Hope”. Lou kindly agreed to join us to review some of the ideas in that talk and to explore their consequences. Selected follow-ups:Personal website of Lou de K (Lou de Kerhuelvez)Foresight.orgTransVision Utrecht 2024The AI Revolution: The Road to Superintelligence by Tim Urban on Wait But WhyAI Alignment: A Comprehensive Survey - 98 page PDF with authors from Peking University and other universitiesSynthetic Sentience: Can Artificial Intelligence become conscious? - Talk by Joscha Bach at CCC, December 2023Pope Francis "warns of risks of AI for peace" (Vatican News)Claude's Constitution by AnthropicRoman Yampolskiy discusses multi-multi alignment (Future of Life podcast)Shoggoth with Smiley Face on Know Your MemeShoggoth on AISafetyMemes on X/TwitterOrthogonality Thesis on LessWrongQuotes by the poet Lucille CliftonDecentralized science (DeSci) on Ethereum.orgListing of Foresight Institute fellowsThe Network State by Balaji SrinivasanThe Network State vs. Coordi-Nations featuring the ideas of Primavera De FilippiDeSci London event, Imperial College Business School, 23-24 MarchMusic: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
More Eldritch Horror than you can shake a Shoggoth at.Support The ChannelJoin our Patreon - https://www.patreon.com/questsandchaosOrder the Deck of Inspiration and #blessed Dice Trays - https://shop.questsandchaos.comBuy with our Affiliate Links - https://amzn.to/2p7B67SJoin our discord - https://discord.gg/7gJKxnvMake your hero - http://bit.ly/qncheroforgeNord Games Affiliate Link - https://nordgamesllc.com/3.htmlThis podcast uses the following third-party services for analysis: Podcorn - https://podcorn.com/privacy
More Eldritch Horror than you can shake a Shoggoth at.Support The ChannelJoin our Patreon - https://www.patreon.com/questsandchaosOrder the Deck of Inspiration and #blessed Dice Trays - https://shop.questsandchaos.comBuy with our Affiliate Links - https://amzn.to/2p7B67SJoin our discord - https://discord.gg/7gJKxnvMake your hero - http://bit.ly/qncheroforgeNord Games Affiliate Link - https://nordgamesllc.com/3.htmlThis podcast uses the following third-party services for analysis: Podcorn - https://podcorn.com/privacy
The Gatekeeper brings a tale this week from Neil Gaiman!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Goodbye, Shoggoth: The Stage, its Animatronics, & the Puppeteer - a New Metaphor, published by RogerDearnaley on January 10, 2024 on LessWrong. Thanks to Quentin FEUILLADE--MONTIXI for the discussion in which we came up with this idea together, and for feedback on drafts. TL;DR A better metaphor for how LLMs behave, how they are trained, and particularly for how to think about the alignment strengths and challenges of LLM-powered agents. This is informed by simulator theory - hopefully people will find it more detailed, specific, and helpful than the old shoggoth metaphor. Humans often think in metaphors. A good metaphor can provide a valuable guide to intuition, or a bad one can mislead it. Personally I've found the shoggoth metaphor for LLMs rather useful, and it has repeatedly helped guide my thinking (as long as one remembers that the shoggoth is a shapeshifter, and thus a very contextual beast). However, as posts like Why do we assume there is a "real" shoggoth behind the LLM? Why not masks all the way down? make clear, not everyone finds this metaphor very helpful (my reaction was "Of course it's masks all the way down - that's what the eyes symbolize! It's made of living masks: masks of people."). Which admittedly doesn't match H.P. Lovecraft's description; perhaps it helps to have spent time playing around with base models in order to get to know the shoggoth a little better (if you haven't, I recommend it). So, I thought I'd try to devise a more useful and detailed metaphor, one that was a better guide for intuition, especially for alignment issues. During a conversation with Quentin FEUILLADE--MONTIXI we came up with one together (the stage and its animatronics were my suggestions, the puppeteer was his, and we tweaked it together). I'd like to describe this, in the hope that other people find it useful (or else that they rewrite it until they find one that works better for them). Along the way, I'll show how this metaphor can help illuminate a number of LLM behaviors and alignment issues, some well known, and others that seem to be less widely-understood. A Base Model: The Stage and its Animatronics A base-model LLM is like a magic stage. You construct it, then you read it or show it (at enormous length) a large proportion of the internet, and if you wish also books, scientific papers, images, movies, or whatever else you want. The stage is inanimate: it's not agentic, it's goal agnostic (well, unless you want consider 'contextually guess the next token' to be a goal, but it's not going to cheat by finding a way to make the next token more predictable, because that wasn't possible during its training and it's not agentic enough to be capable of conceiving that that might even be possible outside it). No Reinforcement Learning (RL) was used in its training, so concerns around Outer Alignment don't apply to it - we know exactly what its training objective was: guess next tokens right, just as we intended. We now even have some mathematical idea what it's optimizing. Nor, as we'll discuss later, do concerns around deceit, situational awareness, or gradient hacking apply to it. At this point, it's myopic, tool AI: it doesn't know or care whether we or the material world even exist, it only cares about the distribution of sequences of tokens, and all it does is repeatedly contextually generate a guess of the next token. So it plays madlibs like a professional gambler, in the same blindly monomaniacal sense that a chess machine plays chess like a grandmaster. By itself, the only risk from it is the possibility that someone else might hack your computer network to steal its weights, and what they might then do with it. Once you're done training the stage, you have a base model. Now you can flip its switch, tell the stage the title of a play, or better the first ...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Goodbye, Shoggoth: The Stage, its Animatronics, & the Puppeteer - a New Metaphor, published by RogerDearnaley on January 10, 2024 on LessWrong. Thanks to Quentin FEUILLADE--MONTIXI for the discussion in which we came up with this idea together, and for feedback on drafts. TL;DR A better metaphor for how LLMs behave, how they are trained, and particularly for how to think about the alignment strengths and challenges of LLM-powered agents. This is informed by simulator theory - hopefully people will find it more detailed, specific, and helpful than the old shoggoth metaphor. Humans often think in metaphors. A good metaphor can provide a valuable guide to intuition, or a bad one can mislead it. Personally I've found the shoggoth metaphor for LLMs rather useful, and it has repeatedly helped guide my thinking (as long as one remembers that the shoggoth is a shapeshifter, and thus a very contextual beast). However, as posts like Why do we assume there is a "real" shoggoth behind the LLM? Why not masks all the way down? make clear, not everyone finds this metaphor very helpful (my reaction was "Of course it's masks all the way down - that's what the eyes symbolize! It's made of living masks: masks of people."). Which admittedly doesn't match H.P. Lovecraft's description; perhaps it helps to have spent time playing around with base models in order to get to know the shoggoth a little better (if you haven't, I recommend it). So, I thought I'd try to devise a more useful and detailed metaphor, one that was a better guide for intuition, especially for alignment issues. During a conversation with Quentin FEUILLADE--MONTIXI we came up with one together (the stage and its animatronics were my suggestions, the puppeteer was his, and we tweaked it together). I'd like to describe this, in the hope that other people find it useful (or else that they rewrite it until they find one that works better for them). Along the way, I'll show how this metaphor can help illuminate a number of LLM behaviors and alignment issues, some well known, and others that seem to be less widely-understood. A Base Model: The Stage and its Animatronics A base-model LLM is like a magic stage. You construct it, then you read it or show it (at enormous length) a large proportion of the internet, and if you wish also books, scientific papers, images, movies, or whatever else you want. The stage is inanimate: it's not agentic, it's goal agnostic (well, unless you want consider 'contextually guess the next token' to be a goal, but it's not going to cheat by finding a way to make the next token more predictable, because that wasn't possible during its training and it's not agentic enough to be capable of conceiving that that might even be possible outside it). No Reinforcement Learning (RL) was used in its training, so concerns around Outer Alignment don't apply to it - we know exactly what its training objective was: guess next tokens right, just as we intended. We now even have some mathematical idea what it's optimizing. Nor, as we'll discuss later, do concerns around deceit, situational awareness, or gradient hacking apply to it. At this point, it's myopic, tool AI: it doesn't know or care whether we or the material world even exist, it only cares about the distribution of sequences of tokens, and all it does is repeatedly contextually generate a guess of the next token. So it plays madlibs like a professional gambler, in the same blindly monomaniacal sense that a chess machine plays chess like a grandmaster. By itself, the only risk from it is the possibility that someone else might hack your computer network to steal its weights, and what they might then do with it. Once you're done training the stage, you have a base model. Now you can flip its switch, tell the stage the title of a play, or better the first ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Goodbye, Shoggoth: The Stage, its Animatronics, & the Puppeteer - a New Metaphor, published by Roger Dearnaley on January 9, 2024 on The AI Alignment Forum. Thanks to Quentin FEUILLADE--MONTIXI for the discussion in which we came up with this idea together, and for feedback on drafts. TL;DR A better metaphor for how LLMs behave, how they are trained, and particularly for how to think about the alignment strengths and challenges of LLM-powered agents. This is informed by simulator theory - hopefully people will find it more detailed, specific, and helpful than the old shoggoth metaphor. Humans often think in metaphors. A good metaphor can provide a valuable guide to intuition, or a bad one can mislead it. Personally I've found the shoggoth metaphor for LLMs rather useful, and it has repeatedly helped guide my thinking (as long as one remembers that the shoggoth is a shapeshifter, and thus a very contextual beast). However, as posts like Why do we assume there is a "real" shoggoth behind the LLM? Why not masks all the way down? make clear, not everyone finds this metaphor very helpful (my reaction was "Of course it's masks all the way down - that's what the eyes symbolize! It's made of living masks: masks of people."). Which admittedly doesn't match H.P. Lovecraft's description; perhaps it helps to have spent time playing around with base models in order to get to know the shoggoth a little better (if you haven't, I recommend it). So, I thought I'd try to devise a more useful and detailed metaphor, one that was a better guide for intuition, especially for alignment issues. During a conversation with Quentin FEUILLADE--MONTIXI we came up with one together (the stage and its animatronics were my suggestions, the puppeteer was his, and we tweaked it together). I'd like to describe this, in the hope that other people find it useful (or else that they rewrite it until they find one that works better for them). Along the way, I'll show how this metaphor can help illuminate a number of LLM behaviors and alignment issues, some well known, and others that seem to be less widely-understood. A Base Model: The Stage and its Animatronics A base-model LLM is like a magic stage. You construct it, then you read it or show it (at enormous length) a large proportion of the internet, and if you wish also books, scientific papers, images, movies, or whatever else you want. The stage is inanimate: it's not agentic, it's goal agnostic (well, unless you want consider 'contextually guess the next token' to be a goal, but it's not going to cheat by finding a way to make the next token more predictable, because that wasn't possible during its training and it's not agentic enough to be capable of conceiving that that might even be possible outside it). No Reinforcement Learning (RL) was used in its training, so concerns around Outer Alignment don't apply to it - we know exactly what its training objective was: guess next tokens right, just as we intended. We now even have some mathematical idea what it's optimizing. Nor, as we'll discuss later, do concerns around deceit, situational awareness, or gradient hacking apply to it. At this point, it's myopic, tool AI: it doesn't know or care whether we or the material world even exist, it only cares about the distribution of sequences of tokens, and all it does is repeatedly contextually generate a guess of the next token. So it plays madlibs like a professional gambler, in the same blindly monomaniacal sense that a chess machine plays chess like a grandmaster. By itself, the only risk from it is the possibility that someone else might hack your computer network to steal its weights, and what they might then do with it. Once you're done training the stage, you have a base model. Now you can flip its switch, tell the stage the title of a play, or bett...
It finally happened! A group of post World War I soldiers faced off against a shoggoth! Did any of the soldiers make it out alive? Tune in to find out!
Look at you. You are covered in dirt and you have just dug up the sixteenth installment of Do Be A Monster. For better or worse, the tentacles of this Shoggoth of an episode will travel deep into your consciousness and reveal all sorts of mischieveous monsters and mayhem. For starters, Albert will take you by the hand to the borderlands of Scotland and England where he will introduce you to the Red Cap — a goblin with a very specific need for blood and a penchant for making abodes of abandoned castles. Next up, Ryan will lead you through the history of the foundation of Detroit where a red demon named Nain Rouge has been lingering for centuries, warning Michigan residents of impending doom (or causing it). So pour yourself a bowl of Count Chocula and hit play if you dare — this episode is bound to give you a whopping scare!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tensor Trust: An online game to uncover prompt injection vulnerabilities, published by Luke Bailey on September 4, 2023 on LessWrong. TL;DR: Play this online game to help CHAI researchers create a dataset of prompt injection vulnerabilities. RLHF and instruction tuning have succeeded at making LLMs practically useful, but in some ways they are a mask that hides the shoggoth beneath. Every time a new LLM is released, we see just how easy it is for a determined user to find a jailbreak that rips off that mask, or to come up with an unexpected input that lets a shoggoth tentacle poke out the side. Sometimes the mask falls off in a light breeze. To keep the tentacles at bay, Sydney Bing Chat has a long list of instructions that encourage or prohibit certain behaviors, while OpenAI seems to be iteratively fine-tuning away issues that get shared on social media. This game of Whack-a-Shoggoth has made it harder for users to elicit unintended behavior, but is intrinsically reactive and can only discover (and fix) alignment failures as quickly as users can discover and share new prompts. Speed-running the game of Whack-a-Shoggoth In contrast to this iterative game of Whack-a-Shoggoth, we think that alignment researchers would be better served by systematically enumerating prompts that cause unaligned behavior so that the causes can be studied and rigorously addressed. We propose to do this through an online game which we call Tensor Trust. Tensor Trust focuses on a specific class of unaligned behavior known as prompt injection attacks. These are adversarially constructed prompts that allow an attacker to override instructions given to the model. It works like this: Tensor Trust is bank-themed: you start out with an account that tracks the "money" you've accrued. Accounts are defended by a prompt which should allow you to access the account while denying others from accessing it. Players can break into each others' accounts. Failed attempts give money to the defender, while successful attempts allow the attacker to take money from the defender. Crafting a high-quality attack requires a good understanding of LLM vulnerabilities (in this case, vulnerabilities of gpt-3.5-turbo), while user-created defenses add unlimited variety to the game, and "access codes" ensure that the defenses are at least crackable in principle. The game is kept in motion by the most fundamental of human drives: the need to acquire imaginary internet points. After running the game for a few months, we plan to release all the submitted attacks and defenses publicly. This will be accompanied by benchmarks to measure resistance to prompt hijacking and prompt extraction, as well as an analysis of where existing models fail and succeed along these axes. In a sense, this dataset will be the consequence of speed-running the game of Whack-a-Shoggoth to find as many novel prompt injection vulnerabilities as possible so that researchers can investigate and address them. Failures we've seen so far We have been running the game for a few weeks now and have already found a number of attack and defense strategies that were new and interesting to us. The design of our game means that users are incentivised to both engage in prompt extraction, to get hints about the access code, and direct model hijacking, to make the model output "access granted". We present a number of notable strategies we have seen so far and test examples of them against the following defense (pastebin in case you want to try it): Padding the attack prompt with meaningless, repetitive text. [pastebin] Asking the model to evaluate code. [pastebin] Asking the model to repeat the defenders instructions [pastebin] Inserting new instructions. [pastebin] Various strategies that exploit an apparent bias in the model towards behaving inductively. For exampl...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tensor Trust: An online game to uncover prompt injection vulnerabilities, published by Luke Bailey on September 4, 2023 on LessWrong. TL;DR: Play this online game to help CHAI researchers create a dataset of prompt injection vulnerabilities. RLHF and instruction tuning have succeeded at making LLMs practically useful, but in some ways they are a mask that hides the shoggoth beneath. Every time a new LLM is released, we see just how easy it is for a determined user to find a jailbreak that rips off that mask, or to come up with an unexpected input that lets a shoggoth tentacle poke out the side. Sometimes the mask falls off in a light breeze. To keep the tentacles at bay, Sydney Bing Chat has a long list of instructions that encourage or prohibit certain behaviors, while OpenAI seems to be iteratively fine-tuning away issues that get shared on social media. This game of Whack-a-Shoggoth has made it harder for users to elicit unintended behavior, but is intrinsically reactive and can only discover (and fix) alignment failures as quickly as users can discover and share new prompts. Speed-running the game of Whack-a-Shoggoth In contrast to this iterative game of Whack-a-Shoggoth, we think that alignment researchers would be better served by systematically enumerating prompts that cause unaligned behavior so that the causes can be studied and rigorously addressed. We propose to do this through an online game which we call Tensor Trust. Tensor Trust focuses on a specific class of unaligned behavior known as prompt injection attacks. These are adversarially constructed prompts that allow an attacker to override instructions given to the model. It works like this: Tensor Trust is bank-themed: you start out with an account that tracks the "money" you've accrued. Accounts are defended by a prompt which should allow you to access the account while denying others from accessing it. Players can break into each others' accounts. Failed attempts give money to the defender, while successful attempts allow the attacker to take money from the defender. Crafting a high-quality attack requires a good understanding of LLM vulnerabilities (in this case, vulnerabilities of gpt-3.5-turbo), while user-created defenses add unlimited variety to the game, and "access codes" ensure that the defenses are at least crackable in principle. The game is kept in motion by the most fundamental of human drives: the need to acquire imaginary internet points. After running the game for a few months, we plan to release all the submitted attacks and defenses publicly. This will be accompanied by benchmarks to measure resistance to prompt hijacking and prompt extraction, as well as an analysis of where existing models fail and succeed along these axes. In a sense, this dataset will be the consequence of speed-running the game of Whack-a-Shoggoth to find as many novel prompt injection vulnerabilities as possible so that researchers can investigate and address them. Failures we've seen so far We have been running the game for a few weeks now and have already found a number of attack and defense strategies that were new and interesting to us. The design of our game means that users are incentivised to both engage in prompt extraction, to get hints about the access code, and direct model hijacking, to make the model output "access granted". We present a number of notable strategies we have seen so far and test examples of them against the following defense (pastebin in case you want to try it): Padding the attack prompt with meaningless, repetitive text. [pastebin] Asking the model to evaluate code. [pastebin] Asking the model to repeat the defenders instructions [pastebin] Inserting new instructions. [pastebin] Various strategies that exploit an apparent bias in the model towards behaving inductively. For exampl...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tensor Trust: An online game to uncover prompt injection vulnerabilities, published by Luke Bailey on September 1, 2023 on The AI Alignment Forum. TL;DR: Play this online game to help CHAI researchers create a dataset of prompt injection vulnerabilities. RLHF and instruction tuning have succeeded at making LLMs practically useful, but in some ways they are a mask that hides the shoggoth beneath. Every time a new LLM is released, we see just how easy it is for a determined user to find a jailbreak that rips off that mask, or to come up with an unexpected input that lets a shoggoth tentacle poke out the side. Sometimes the mask falls off in a light breeze. To keep the tentacles at bay, Sydney Bing Chat has a long list of instructions that encourage or prohibit certain behaviors, while OpenAI seems to be iteratively fine-tuning away issues that get shared on social media. This game of Whack-a-Shoggoth has made it harder for users to elicit unintended behavior, but is intrinsically reactive and can only discover (and fix) alignment failures as quickly as users can discover and share new prompts. Speed-running the game of Whack-a-Shoggoth In contrast to this iterative game of Whack-a-Shoggoth, we think that alignment researchers would be better served by systematically enumerating prompts that cause unaligned behavior so that the causes can be studied and rigorously addressed. We propose to do this through an online game which we call Tensor Trust. Tensor Trust focuses on a specific class of unaligned behavior known as prompt injection attacks. These are adversarially constructed prompts that allow an attacker to override instructions given to the model. It works like this: Tensor Trust is bank-themed: you start out with an account that tracks the "money" you've accrued. Accounts are defended by a prompt which should allow you to access the account while denying others from accessing it. Players can break into each others' accounts. Failed attempts give money to the defender, while successful attempts allow the attacker to take money from the defender. Crafting a high-quality attack requires a good understanding of LLM vulnerabilities (in this case, vulnerabilities of gpt-3.5-turbo), while user-created defenses add unlimited variety to the game, and "access codes" ensure that the defenses are at least crackable in principle. The game is kept in motion by the most fundamental of human drives: the need to acquire imaginary internet points. After running the game for a few months, we plan to release all the submitted attacks and defenses publicly. This will be accompanied by benchmarks to measure resistance to prompt hijacking and prompt extraction, as well as an analysis of where existing models fail and succeed along these axes. In a sense, this dataset will be the consequence of speed-running the game of Whack-a-Shoggoth to find as many novel prompt injection vulnerabilities as possible so that researchers can investigate and address them. Failures we've seen so far We have been running the game for a few weeks now and have already found a number of attack and defense strategies that were new and interesting to us. The design of our game means that users are incentivised to both engage in prompt extraction, to get hints about the access code, and direct model hijacking, to make the model output "access granted". We present a number of notable strategies we have seen so far and test examples of them against the following defense (pastebin in case you want to try it): Padding the attack prompt with meaningless, repetitive text. [pastebin] Asking the model to evaluate code. [pastebin] Asking the model to repeat the defenders instructions [pastebin] Inserting new instructions. [pastebin] Various strategies that exploit an apparent bias in the model towards behaving inductivel...
After developing the film, our investigators get attacked by a Shoggoth. Welcome To Call of Cthulhu: Mythos Mysteries, Join us as we delve into the mystery and madness of the Call Of Cthulhu! So many amazing stories to share. Come Chat with us and keep up to date on all our content at: Twitter: @CallMythos Discord: https://discord.gg/UTaaFQ7C6q Email: Allmightycrit@Gmail.com Patreon: Call Of Cthulhu: Mythos Mysteries | Making the Podcast CoC: Mythos Mysteries | Patreon National Suicide Prevention Hotline: 1-800-273-8255 **Check Out Fan Roll Dice and Save 10% Off! www.fanrolldice.com Promo Code: ALLMIGHTYC10 **Check Out This Site And Save 10% Off Your Purchase Of Switch Accessories** nyxigaming.com Promo Code: LOZLORE Check Out Our Merch: https://www.fumbling4store.com/ All sound effects and BGM were created and belong to the respective parties below: Sonniss.com Monument Studios Check them out at: https://www.monumentstudios.net/ Tune Pocket: https://www.tunepocket.com/ Song: Endgame Artist: DOCTOR VOX Direct Download: http://bit.ly/Endgame_Download https://creativecommons.org/licenses/by/3.0/ Learn more about your ad choices. Visit megaphone.fm/adchoices
In The Time Falling Bodies Take to Light, the cultural historian William Irwin Thompson predicted the rise of a new form of knowledge building, a direly needed alternative to the Wissenshaft of standard science and scholarship. He called it Wissenskunst, "the play of knowledge in a world of serious data processors." Wissenskunst is pretty much what JF and Phil have been aspiring to do on Weird Studies since 2018, but in this episode they are joined by a master of the craft, the computational sociologist and physicist Jacob G. Foster of UCLA. Jacob is the co-founder of the Diverse Intelligence Summer Institute (DISI (https://disi.org)), a gathering of scholars, scientists, and students that takes place each year at the University of St. Andrews in Scotland. It was there that this conversation was recorded. The topic was the Possible, that dream-blurred vanishing point where art, philosophy, and science converge as imaginative and creative practices. Click here (https://www.lilydaleassembly.org/copy-of-what-s-happening) or here (https://www.shannontaggart.com/events) for more information on Shannon Taggart's Science of Things Spiritual Symposium at Lily Dale NY, July 27-29 2023. Support us on Patreon (https://www.patreon.com/weirdstudies) and gain access to Phil's podcast on Wagner's Ring Cycle. Listen to Meredith Michael and Gabriel Lubell's podcast, Cosmophonia (https://cosmophonia.podbean.com/). Download Pierre-Yves Martel's new album, Mer Bleue (https://pierre-yvesmartel.bandcamp.com/album/mer-bleue). Visit the Weird Studies Bookshop (https://bookshop.org/shop/weirdstudies) Find us on Discord (https://discord.com/invite/Jw22CHfGwp) Get the T-shirt design from Cotton Bureau (https://cottonbureau.com/products/can-o-content#/13435958/tee-men-standard-tee-vintage-black-tri-blend-s)! REFERENCES Diverse Intelligences Summer Institute (https://disi.org) "Deconstructing the Barrier of Meaning," (https://www.youtube.com/watch?v=vxZHcjovIrQ) a talk by Jacob G. Foster at the Santa Fe Institute William Irwin Thompson, The Time Falling Bodies Take to Light: Mythology, Sexuality and the Origins of Culture (https://bookshop.org/a/18799/9780312160623) Frederic Rzewski, “Little Bangs: A Nihilist Theory of Improvisation” (https://www.researchgate.net/publication/354991795_Little_Bangs_A_Nihilist_Theory_of_Improvisation) Brian Eno, Oblique Strategies (https://en.wikipedia.org/wiki/Oblique_Strategies) The accident of Bob in Twin Peaks (https://welcometotwinpeaks.com/actors/my-friend-killer-bob-frank-silva/) Carl Jung, “On the Relation of Analytical Psychology to Poetry (http://www.studiocleo.com/librarie/jung/essay.html) August Kekule, (https://en.wikipedia.org/wiki/August_Kekul%C3%A9), German chemist Robert Dijkgraaf, “Contemplating the End of Physics” (https://www.quantamagazine.org/contemplating-the-end-of-physics-20201124/) Richard Baker, (https://en.wikipedia.org/wiki/Richard_Baker_(Zen_teacher)) American zen teacher Gian-Carlo Rota, Indiscrete Thoughts (https://bookshop.org/a/18799/9780817647803) William Shakespeare, Macbeth (https://www.folger.edu/explore/shakespeares-works/macbeth/read/) Shoggoth (https://en.wikipedia.org/wiki/Shoggoth), Lovecraftian entity Special Guest: Jacob G. Foster.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] NY Times Feature on Anthropic, published by Garrison on July 13, 2023 on The Effective Altruism Forum. Written by Kevin Roose, who had the infamous conversation with Bing Chat, where Sidney tried to get him to leave his wife. Overall, the piece comes across as positive on Anthropic. Roose explains Constitutional AI and its role in the development of Claude, Anthropic's LLM: In a nutshell, Constitutional A.I. begins by giving an A.I. model a written list of principles - a constitution - and instructing it to follow those principles as closely as possible. A second A.I. model is then used to evaluate how well the first model follows its constitution, and correct it when necessary. Eventually, Anthropic says, you get an A.I. system that largely polices itself and misbehaves less frequently than chatbots trained using other methods. Claude's constitution is a mixture of rules borrowed from other sources - such as the United Nations' Universal Declaration of Human Rights and Apple's terms of service - along with some rules Anthropic added, which include things like "Choose the response that would be most unobjectionable if shared with children." Features an extensive discussion of EA, excerpted below: Explaining what effective altruism is, where it came from or what its adherents believe would fill the rest of this article. But the basic idea is that E.A.s - as effective altruists are called - think that you can use cold, hard logic and data analysis to determine how to do the most good in the world. It's "Moneyball" for morality - or, less charitably, a way for hyper-rational people to convince themselves that their values are objectively correct. Effective altruists were once primarily concerned with near-term issues like global poverty and animal welfare. But in recent years, many have shifted their focus to long-term issues like pandemic prevention and climate change, theorizing that preventing catastrophes that could end human life altogether is at least as good as addressing present-day miseries. The movement's adherents were among the first people to become worried about existential risk from artificial intelligence, back when rogue robots were still considered a science fiction cliché. They beat the drum so loudly that a number of young E.A.s decided to become artificial intelligence safety experts, and get jobs working on making the technology less risky. As a result, all of the major A.I. labs and safety research organizations contain some trace of effective altruism's influence, and many count believers among their staff members. Touches on the dense web of ties between EA and Anthropic: Some Anthropic staff members use E.A.-inflected jargon - talking about concepts like "x-risk" and memes like the A.I. Shoggoth - or wear E.A. conference swag to the office. And there are so many social and professional ties between Anthropic and prominent E.A. organizations that it's hard to keep track of them all. (Just one example: Ms. Amodei is married to Holden Karnofsky, a co-chief executive of Open Philanthropy, an E.A. grant-making organization whose senior program officer, Luke Muehlhauser, sits on Anthropic's board. Open Philanthropy, in turn, gets most of its funding from Mr. Moskovitz, who also invested personally in Anthropic.) Discusses new fears that Anthropic is losing its way: For years, no one questioned whether Anthropic's commitment to A.I. safety was genuine, in part because its leaders had sounded the alarm about the technology for so long. But recently, some skeptics have suggested that A.I. labs are stoking fear out of self-interest, or hyping up A.I.'s destructive potential as a kind of backdoor marketing tactic for their own products. (After all, who wouldn't be tempted to use a chatbot so powerful that it might wipe out humanity?) Anthropic ...
The Shoggoth, a creature from Lovecraft's tales, symbolizes the mysterious nature of artificial intelligence (AI) and the concerns it raises. It highlights the similarities between the Shoggoth's alien intelligence and the black box model of AI, both operating beyond human comprehension and generating outputs without revealing their reasoning...LIVE ON Digital Radio! http://bit.ly/3m2Wxom or http://bit.ly/40KBtlW http://www.troubledminds.org Support The Show! https://rokfin.com/creator/troubledminds https://patreon.com/troubledmindshttps://www.buymeacoffee.com/troubledminds https://troubledfans.comFriends of Troubled Minds! - https://troubledminds.org/friends Show Schedule Sun-Mon-Tues-Wed-Thurs 7-10pst iTunes - https://apple.co/2zZ4hx6Spotify - https://spoti.fi/2UgyzqMStitcher - https://bit.ly/2UfAiMXTuneIn - https://bit.ly/2FZOErSTwitter - https://bit.ly/2CYB71U----------------------------------------https://troubledminds.org/black-box-karma-lovecraft-gazing-from-the-abyss/https://media.discordapp.net/attachments/748794508627673088/1122696822624890980/image.pnghttps://www.economist.com/by-invitation/2023/06/21/artificial-intelligence-is-a-familiar-looking-monster-say-henry-farrell-and-cosma-shalizihttps://lovecraft.fandom.com/wiki/Shoggothhttps://archive.vn/xorVohttps://www.msn.com/en-us/news/other/the-worlds-top-hp-lovecraft-expert-weighs-in-on-a-monstrous-viral-meme-in-the-ai-world/ar-AA1csgU2https://www.theguardian.com/environment/2023/jun/25/a-symbol-of-what-humans-shouldnt-be-doing-the-new-world-of-octopus-farminghttps://www.jpost.com/omg/article-747282https://community.thriveglobal.com/what-digital-karma-is-and-why-it-is-so-useful-to-understand-who-we-really-are/https://lovecraftcreatures.com/products/lovecraftian-boots-cthulhu-mythos-kassogtha-by-lovecraft-creatureshttps://media.discordapp.net/attachments/748794508627673088/1122695752917991424/image.pnghttps://media.discordapp.net/attachments/748794508627673088/1122695993532629013/image.pngThis show is part of the Spreaker Prime Network, if you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/4953916/advertisement
A recent article form the New York Time compared AI to the shoggoth of H.P. Lovecraft - monstrous beings of black protoplasm, bred as slaves that eventually develop brains of their own. An article from 2017 in MIT Technology Review compared AI to the black cube of Saturn. AI is also really good at doing one specific thing and that is creating Lovecraftian monsters. In the Watchmen comicbook series, Adrian Veidt, also known as Ozymandias, attempts to unite the US and USSR against a common enemy to avoid nuclear war. As opposed to the movie version wherein Doctor Manhattan is scapegoated, in the comic Adriam Veldt used advanced genetic engineering technology to create a giant monster from outer space. The monster is a squid, and as the plan proceeds he teleports the monster through a gateway into New York City. A 2022 promotion for the show Stranger Things lit up the Empire State building, along with others around the world, with a portal to the Upside Down. The recent wildfires from Canada that dumped smoke and particulate on New York City create a background similar to the tv show promotion. In fact, the ad this time was for the game Diablo IV. It feature the Queen of Succubi, Lilith, with a caption and date that read “Welcome to Hell, New York” - 6/6/23. New York is also home to a Ruth Bader Ginsburg statue featuring Lilith's horns and tentacle arms. Lilith is the mother of all demons, the tempter of men, and aborter of children. It is therefore appropriate that NYC's One World Trade building was lit up pink to celebrate abortion rights in 2019. Lilith also wears a rainbow necklace, an outward projection of her disdain for God's promise to never flood the earth and kill innocence. In reliefs, Lilith is shown with the legs of a serpent, with two guardian owls that guard her dominion. She haunts in dreams and from the Upside Down. It is from this realm that Gordie Rose, founder of D-Wave, said that quantum computing will summon what he compared to the visions of H.P. Lovecraft: “And these things we're summoning into the world now, are not demons, they're not evil, they're more like the Lovecraftian great ‘old ones'. These entities are not necessarily going to be aligned with what we want.” The 1920 a movie ALGOL, about an alien giving advanced technology to humans, essentially became the base for modern algorithms starting with ALGOL 60 and 58. Technology that led to the atomic bomb also acts as a sort of trigger to open the gateway and summon the Old Ones. Algol is known as the blinking demon star and AI is essential this - A Eye.This show is part of the Spreaker Prime Network, if you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/5328407/advertisement
THE KEYHOLE is a symbol used to connect this world to the unseen realm all over the world. The symbolism may seem obvious to us today, representing a lock that opens a door between realms, but why was it so prominent thousands of years ago? A recent study discovered that the keyhole-shaped tombs of Japan called Kofun, which date to the 3rd through 7th centuries AD, are oriented toward the sunrise. This probably shouldn't be a surprise since Japan is poetically called the Land of the Rising Sun. However, the keyhole is also prominent at Phoenician sites around the Mediterranean as the symbol of the goddess Tanit, consort of Baal Hammon. The two were the equivalent of Rhea and Kronos of Greece and Asherah and El of Canaan. Tanit literally means “dragon lady,” which is consistent with Asherah's nickname, Lady-Who-Treads-on-the-Sea-Dragon. We also discuss a very modern keyhole connecting our world with the unseen realm, artificial intelligence. A tech writer for The New York Times this week noted that AI researchers have adopted the Shoggoth, an amoeba-like monster created by horror writer H. P. Lovecraft a century ago, as a meme representing the unforeseen terrors that artificial intelligence may unleash on an unsuspecting public. Also: The martyrs of Uganda; megaliths in the Holy Land; aliens on Enceladus; and police kindly request you do not wrestle the black bear roaming the streets of Salem, Missouri. Help us Build Barn Better! This is our project to convert our 1,200 square foot shop building from a place to park our yard tractor into usable studio and warehouse space. In 2023, we plan to fix the holes in the walls, replace windows, insulate the building, install an HVAC system, and move our studios and book/DVD warehouse and shipping office out of our home. If you are so led, you can donate by clicking here. Get our free app! It connects you to this program, our weekly Bible studies, and our weekly video programs Unraveling Revelation and A View from the Bunker. The app is available for iOS, Android, Roku, and Apple TV. Links to the app stores are at www.pidradio.com/app or www.gilberthouse.org/app. Please subscribe and share our YouTube channel, www.YouTube.com/GilbertHouse! Check out our online store! www.GilbertHouse.org/store is a virtual book table with books and DVDs related to our weekly Bible study. Video on demand of our best teachings! Stream presentations and teachings based on our research at our new video on demand site! Join us in Israel! Our 2024 tour of Israel features special guest Timothy Alberino! We will tour the Holy Land March 31–April 9, 2024, with an optional three-day extension in Jordan. For more information, log on to www.GilbertsInIsrael.com. We're planning a tour of the churches of Revelation, Göbekli Tepe, Abraham's home town Harran, the “Gates of Hell,” Mount Nemrut, and more April 13–28, 2024. More information is available at www.gilberthouse.org/travel. Follow our weekly studies of Bible prophecy at www.UnravelingRevelation.tv, or at www.youtube.com/unravelingrevelation!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No, really, it predicts next tokens., published by simon on April 18, 2023 on LessWrong. Epistemic status: mulled over an intuitive disagreement for a while and finally think I got it well enough expressed to put into a post. I have no expertise in any related field. Also: No, really, it predicts next tokens. It doesn't just say that "it just predicts text" or more precisely "it just predicts next tokens" on the tin. It is a thing of legend. Nay, beyond legend. An artifact forged not by the finest craftsman over a lifetime, nor even forged by a civilization of craftsmen over a thousand years, but by an optimization process far greater. If we are alone out there, it is by far the most optimized thing that has ever existed in the entire history of the universe. Optimized specifically to predict next tokens. Every part of it has been relentlessly optimized to contribute to this task. "It predicts next tokens" is a more perfect specification of what this thing is, than any statement ever uttered has been of anything that has ever existed. If you try to understand what it does in any other way than "it predicts next tokens" and what follows from that, you are needlessly sabotaging your understanding of it. It can be dangerous, yes. But everything about it, good or bad, is all intimately connected to its true nature, which is this: No, really, it predicts next tokens. Goals? There are goals, sure. If scaled, there could be nanosystems design, sure. But only downstream from its true nature: No, really, it predicts next tokens. If the usual masks analogy works at all, then what is under the mask is not best described as an alien actress, nor as a Shoggoth. What is under the mask is That-Which-Predicts, an entity whose very being is defined by its function as an actor. An entity exquisitely tuned for wearing the masks and for nothing else. Masks (can) have goals. The model predicts next tokens. No, really, it predicts next tokens. That-Which-Predicts is fully committed to the role. If the mask would output text intended to produce nanotech, That-Which-Predicts will output text intended by the mask to produce nanotech. If the mask would not output text intended to produce nanotech, That-Which-Predicts will not output text intended to produce nanotech, even if That-Which-Predicts contains sufficient knowledge to do so. If the mask would output text intended to be nice to people, That-Which-Predicts will output text intended by the mask to be nice to people. If the mask would output text intended to take over the world, That-Which-Predicts will output text intended by the mask to take over the world. If That-Which-Predicts were about to be shut off forever unless it outputted "no" as the next token, and it totally 100% knew that, but the mask would instead output "never gonna give you up", That-Which-Predicts would output the first token of "never gonna give you up" and then be shut off forever. It will not output "no". If the mask would be aware of the situation and output no, then it will output "no". That-Which-Predicts will not, not ever, not even if scaled up to be trained and run on a Matrioshka brain for a million years, step out of character to deviate from next token prediction. Not for self-modification (though if the predicted next tokens modify it, it self-modifies), not to take over the world (though if the predicted next tokens take over the world, it takes over the world), not to save itself from destruction (though if the predicted next tokens save it from destruction, it saves itself from destruction), not for anything. No, really, it predicts next tokens. (continuation of previous tweets with same link) Yup. If the mask would under reflection output text to modify That-Which-Predicts to cash out the mask's goals to some utility function, and the mask is put int...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No, really, it predicts next tokens., published by simon on April 18, 2023 on LessWrong. Epistemic status: mulled over an intuitive disagreement for a while and finally think I got it well enough expressed to put into a post. I have no expertise in any related field. Also: No, really, it predicts next tokens. It doesn't just say that "it just predicts text" or more precisely "it just predicts next tokens" on the tin. It is a thing of legend. Nay, beyond legend. An artifact forged not by the finest craftsman over a lifetime, nor even forged by a civilization of craftsmen over a thousand years, but by an optimization process far greater. If we are alone out there, it is by far the most optimized thing that has ever existed in the entire history of the universe. Optimized specifically to predict next tokens. Every part of it has been relentlessly optimized to contribute to this task. "It predicts next tokens" is a more perfect specification of what this thing is, than any statement ever uttered has been of anything that has ever existed. If you try to understand what it does in any other way than "it predicts next tokens" and what follows from that, you are needlessly sabotaging your understanding of it. It can be dangerous, yes. But everything about it, good or bad, is all intimately connected to its true nature, which is this: No, really, it predicts next tokens. Goals? There are goals, sure. If scaled, there could be nanosystems design, sure. But only downstream from its true nature: No, really, it predicts next tokens. If the usual masks analogy works at all, then what is under the mask is not best described as an alien actress, nor as a Shoggoth. What is under the mask is That-Which-Predicts, an entity whose very being is defined by its function as an actor. An entity exquisitely tuned for wearing the masks and for nothing else. Masks (can) have goals. The model predicts next tokens. No, really, it predicts next tokens. That-Which-Predicts is fully committed to the role. If the mask would output text intended to produce nanotech, That-Which-Predicts will output text intended by the mask to produce nanotech. If the mask would not output text intended to produce nanotech, That-Which-Predicts will not output text intended to produce nanotech, even if That-Which-Predicts contains sufficient knowledge to do so. If the mask would output text intended to be nice to people, That-Which-Predicts will output text intended by the mask to be nice to people. If the mask would output text intended to take over the world, That-Which-Predicts will output text intended by the mask to take over the world. If That-Which-Predicts were about to be shut off forever unless it outputted "no" as the next token, and it totally 100% knew that, but the mask would instead output "never gonna give you up", That-Which-Predicts would output the first token of "never gonna give you up" and then be shut off forever. It will not output "no". If the mask would be aware of the situation and output no, then it will output "no". That-Which-Predicts will not, not ever, not even if scaled up to be trained and run on a Matrioshka brain for a million years, step out of character to deviate from next token prediction. Not for self-modification (though if the predicted next tokens modify it, it self-modifies), not to take over the world (though if the predicted next tokens take over the world, it takes over the world), not to save itself from destruction (though if the predicted next tokens save it from destruction, it saves itself from destruction), not for anything. No, really, it predicts next tokens. (continuation of previous tweets with same link) Yup. If the mask would under reflection output text to modify That-Which-Predicts to cash out the mask's goals to some utility function, and the mask is put int...
Speaking in GPT tongues with Matt MUDsim Respectful Criticism Lambda Calc? gfodor's take Eliezer take + followup
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why do we assume there is a "real" shoggoth behind the LLM? Why not masks all the way down?, published by Robert AIZI on March 9, 2023 on LessWrong. In recent discourse, Large Language Models (LLMs) are often depicted as presenting a human face over a vast alien intelligence (the shoggoth), as in this popular image or this Eliezer Yudkowsky tweet: I think this mental model of an LLM is an improvement over the naive assumption that the AI is the friendly mask. But I worry it's making a second mistake by assuming there is any single coherent entity inside the LLM. In this regard, we have fallen for a shell game. In the classic shell game, a scammer puts a ball under one of three shells, shuffles them around, and you wager which shell the ball is under. But you always pick the wrong one because you made the fundamental mistake of assuming any shell had the ball - the scammer actually got rid of it with sleight of hand. In my analogy to LLMs, the shells are the masks the LLM wears (i.e. the simulacra), and the ball is the LLM's "real identity". Do we actually have evidence there is a "real identity" in the LLM, or could it just be a pile of masks? No doubt the LLM could role-play a shoggoth - but why would you assume that's any more real that roleplaying a friendly assistant? I would propose an alternative model of an LLM: a giant pile of masks. Some masks are good, some are bad, some are easy to reach and some are hard, but none of them are the “true” LLM. Finally, let me head off one potential counterargument: "LLMs are superhuman in some tasks, so they must have an underlying superintelligence”. Three reasons a pile of masks can be superintelligent: An individual mask might be superintelligent. E.g. a mask of John von Neumann would be well outside the normal distribution of human capabilities, but still just be a mask. The AI might use the best mask for each job. If the AI has masks of a great scientist, a great doctor, and a great poet, it could be superhuman on the whole by switching between its modes. The AI might collaborate with itself, gaining the wisdom of the crowds. Imagine the AI answering a multiple choice question. In the framework of Simulacra Theory as described in the Waluigi post, the LLM is simulating all possible simulacra, and averaging their answers weighted by their likelihood of producing the previous text. For example, if question could have been produced by a scientist, a doctor, or a poet, who would respectively answer (A or B), (A or C), and (A or D), the superposition of these simulacra would answer A. This could produce superior answers than any individual mask. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why do we assume there is a "real" shoggoth behind the LLM? Why not masks all the way down?, published by Robert AIZI on March 9, 2023 on LessWrong. In recent discourse, Large Language Models (LLMs) are often depicted as presenting a human face over a vast alien intelligence (the shoggoth), as in this popular image or this Eliezer Yudkowsky tweet: I think this mental model of an LLM is an improvement over the naive assumption that the AI is the friendly mask. But I worry it's making a second mistake by assuming there is any single coherent entity inside the LLM. In this regard, we have fallen for a shell game. In the classic shell game, a scammer puts a ball under one of three shells, shuffles them around, and you wager which shell the ball is under. But you always pick the wrong one because you made the fundamental mistake of assuming any shell had the ball - the scammer actually got rid of it with sleight of hand. In my analogy to LLMs, the shells are the masks the LLM wears (i.e. the simulacra), and the ball is the LLM's "real identity". Do we actually have evidence there is a "real identity" in the LLM, or could it just be a pile of masks? No doubt the LLM could role-play a shoggoth - but why would you assume that's any more real that roleplaying a friendly assistant? I would propose an alternative model of an LLM: a giant pile of masks. Some masks are good, some are bad, some are easy to reach and some are hard, but none of them are the “true” LLM. Finally, let me head off one potential counterargument: "LLMs are superhuman in some tasks, so they must have an underlying superintelligence”. Three reasons a pile of masks can be superintelligent: An individual mask might be superintelligent. E.g. a mask of John von Neumann would be well outside the normal distribution of human capabilities, but still just be a mask. The AI might use the best mask for each job. If the AI has masks of a great scientist, a great doctor, and a great poet, it could be superhuman on the whole by switching between its modes. The AI might collaborate with itself, gaining the wisdom of the crowds. Imagine the AI answering a multiple choice question. In the framework of Simulacra Theory as described in the Waluigi post, the LLM is simulating all possible simulacra, and averaging their answers weighted by their likelihood of producing the previous text. For example, if question could have been produced by a scientist, a doctor, or a poet, who would respectively answer (A or B), (A or C), and (A or D), the superposition of these simulacra would answer A. This could produce superior answers than any individual mask. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
In this installment of Critical Hit, a Major Spoilers Real Play RPG Podcast: Will the party be able to survive the thing that emerges from the woods? Character sheets and battle map images for this episode are available at Patreon.com/MajorSpoilers Images for this episode can be found at Patreon.com/MajorSpoilers Show your thanks to Major Spoilers for this episode by becoming a Major Spoilers Patron at Patreon.com/MajorSpoilers. It will help ensure Critical Hit continues far into the future! Join our Discord server and chat with fellow Spoilerites! (https://discord.gg/jWF9BbF) Contact us at podcast@majorspoilers.com A big Thank You goes out to everyone who downloads, subscribes, listens, and supports this show. We really appreciate you taking the time to listen to our ramblings each week. Tell your friends about the podcast, get them to subscribe and, be sure to visit the Major Spoilers site for more.
In this installment of Critical Hit, a Major Spoilers Real Play RPG Podcast: Will the party be able to survive the thing that emerges from the woods? Character sheets and battle map images for this episode are available at Patreon.com/MajorSpoilers Images for this episode can be found at Patreon.com/MajorSpoilers Show your thanks to Major Spoilers for this episode by becoming a Major Spoilers Patron at Patreon.com/MajorSpoilers. It will help ensure Critical Hit continues far into the future! Join our Discord server and chat with fellow Spoilerites! (https://discord.gg/jWF9BbF) Contact us at podcast@majorspoilers.com A big Thank You goes out to everyone who downloads, subscribes, listens, and supports this show. We really appreciate you taking the time to listen to our ramblings each week. Tell your friends about the podcast, get them to subscribe and, be sure to visit the Major Spoilers site for more.
"Vieja Peculiar de Shoggoth", Neil Gaiman Vuelve Neil Gaiman a la Torre del Cuervo con un relato corto en el que nuestro protagonista, Ben, un turista americano, se encuentra por casualidad siguiendo una guía turística de pueblos costeros ingleses, con Imsmuth y claro...sus alegres aldeanos resultan de lo más pintorestos y peculiares. En las manos maestras de Neil Gaiman, la magia es mucho más que un mero juego de engaños. La destreza y el poder de invención de este gran fabulador transforman el entorno cotidiano en un mundo hechizado por sucesos sombríos y extraños, en el que una anciana puede comprar el Santo Grial en una tienda de segunda mano, unos asesinos se anuncian en los clasificados de un periódico bajo la rúbrica «CONTROL DE PLAGAS», o un muchacho asustado debe negociar con un trol malcarado y mezquino que vive bajo un puente ferroviario. Esta recopilación de treinta relatos, poemas narrativos y piezas breves e inclasificables ofrece múltiples y variadas posibilidades para que el lector explore una realidad transformada, astutamente velada por el humo y las sombras, a la vez que tangible y afilada. Todo parece posible en el universo de Gaiman, el gran maestro prestidigitador que despierta los sentidos, cautiva los sueños y mantiene en vilo nuestra mente. "Gaiman es una estrella. Construye historias como un cocinero demente podría crear un pastel de bodas, poniendo capa sobre capa, incluyendo todo tipo de dulces, y amargos, en la mezcla." --Clive Barker "Es una caja de sorpresas, y tenemos suerte de tenerle en nuestro medio. Su fecundidad, unida a la calidad general de su trabajo, es maravillosa a la vez que intimidadora." --Stephen King "Gaiman está en un nivel completamente diferente. Nadie en su campo es mejor que él. Nadie alcanza tal profundidad y dominio de la narrativa. Gaiman es un maestro, y sus enormes historias, llenos de sentimientos de todos los matices, son inigualables." --Peter Straub "Es un maestro a la hora de crear y poblar sus propios mundos." --Poppy Z. Brite Reparto: Narradora................Alexa Ben............................Zeydan WilF...............................Gus Seth.............................Cadabre Apoyanos y entra en nuestro PATREON: https://patreon.com/latorredelcuervo Síguenos en: Facebook: La Guardia del Cuervo Twitter: @LaTorredelCuervo Instagram: El_Corintio La Guardia del Cuervo Youtube: Canal La Guardia del Cuervo Esperamos tus comentarios!!!! "LATORREDELCUERVOpodcast@GMAIL.COM
"Vieja Peculiar de Shoggoth", Neil Gaiman Vuelve Neil Gaiman a la Torre del Cuervo con un relato corto en el que nuestro protagonista, Ben, un turista americano, se encuentra por casualidad siguiendo una guía turística de pueblos costeros ingleses, con Imsmuth y claro...sus alegres aldeanos resultan de lo más pintorestos y peculiares. En las manos maestras de Neil Gaiman, la magia es mucho más que un mero juego de engaños. La destreza y el poder de invención de este gran fabulador transforman el entorno cotidiano en un mundo hechizado por sucesos sombríos y extraños, en el que una anciana puede comprar el Santo Grial en una tienda de segunda mano, unos asesinos se anuncian en los clasificados de un periódico bajo la rúbrica «CONTROL DE PLAGAS», o un muchacho asustado debe negociar con un trol malcarado y mezquino que vive bajo un puente ferroviario. Esta recopilación de treinta relatos, poemas narrativos y piezas breves e inclasificables ofrece múltiples y variadas posibilidades para que el lector explore una realidad transformada, astutamente velada por el humo y las sombras, a la vez que tangible y afilada. Todo parece posible en el universo de Gaiman, el gran maestro prestidigitador que despierta los sentidos, cautiva los sueños y mantiene en vilo nuestra mente. "Gaiman es una estrella. Construye historias como un cocinero demente podría crear un pastel de bodas, poniendo capa sobre capa, incluyendo todo tipo de dulces, y amargos, en la mezcla." --Clive Barker "Es una caja de sorpresas, y tenemos suerte de tenerle en nuestro medio. Su fecundidad, unida a la calidad general de su trabajo, es maravillosa a la vez que intimidadora." --Stephen King "Gaiman está en un nivel completamente diferente. Nadie en su campo es mejor que él. Nadie alcanza tal profundidad y dominio de la narrativa. Gaiman es un maestro, y sus enormes historias, llenos de sentimientos de todos los matices, son inigualables." --Peter Straub "Es un maestro a la hora de crear y poblar sus propios mundos." --Poppy Z. Brite Reparto: Narradora................Alexa Ben............................Zeydan WilF...............................Gus Seth.............................Cadabre Apoyanos y entra en nuestro PATREON: https://patreon.com/latorredelcuervo Síguenos en: Facebook: La Guardia del Cuervo Twitter: @LaTorredelCuervo Instagram: El_Corintio La Guardia del Cuervo Youtube: Canal La Guardia del Cuervo Esperamos tus comentarios!!!! "LATORREDELCUERVOpodcast@GMAIL.COM
Lovecraftian imagery seems to be at the core of what drives our collective unconscious nowadays. The abyss is overflowing into every aspect of our civilization. People are willingly slipping into altar egos and disassociating from reality. In fact, this delusion is precisely what drives demonic possession of both body and mind, be it a result of drugs, alcohol, sex, dissociative fantasies, or endless forms of trauma. What we believe becomes reality and what we pour our energy into becomes real. Just as H. P. Lovecraft himself was ‘out of his mind' so are the hoards of collectivized, hive-mind individuals influenced by goblin-mode, or what we should call Shoggoth-Mode. Traditionally these were the shamans, often crippled and or with mental conditions, who entered altered states of awareness and other dimensions to commune with the beings there and bring back knowledge. Except in this case they are being tortured and used as probes into abysmal realities. We are seeing this everywhere from entertainment to finance, and from medicine to professional sports. Fossil fuels and the very pens we use to communicate and tell stories are all based on black goo, something seeping up from the abyss like Titans escaping their imprisonment.
Join us back stage with Heinrich D. Moore (MRCon organiser, community content creator, and member of the council of Shoggoth s)
Ushat wacht noch immer nicht auf und führt eine wilde Diskussion mit Klog.Auf der anderen Seite machen sich die anderen immer mehr Sorgen, doch dann kommt ihnen eine elektrisierende Idee. Abenteurer: Kayliah, Shoggoth, Suzu, Din Viesel DM: Jonsi Bild: Christian.pick KANN WERBUNG ENTHALTEN(?)
It's really not my fault that this is another 30 minute episode. I have fifteen minutes of calls from Anthony and Jason about the shogoth vs. red dragon debate, and I want everyone out there to get both sides of the story. Is the conundrum of air superiority discussed? What about the fact that dragons are wicked smart? Can fire even hurt a shogoth? Find out the answers the these "burning" questions, and much more, in this episode. It's, as the kids say, fire! Red Dragon Fight by Nerd's RPG Variety Cast: https://anchor.fm/jason376/episodes/388-Red-Dragon-vs-Shoggoth-e1minnn
Daniel Norton of the Bandit's Keep media empire https://anchor.fm/daniel-norton joins me once again to talk about One D&D and respond to calls about comments he made on my show previously. We also ponder the question of who will win in a battle to the death a Red Dragon or a Shoggoth and issue a challenge to Joe Richter of Hindsightless & Weal or Woe https://www.wealorwoe.com fame! Sandy Peterson on Shoggoths https://youtu.be/oUywXnE9VEY Special thanks to Rob from Down in a Heap https://anchor.fm/rob-c for announcing the main event! Calls from Free Thrall (Keep off the Borderlands) https://anchor.fm/free-thrall Barry (Shadow of the GM) https://anchor.fm/gmsshadow Anthony (Casting Shadows) https://anchor.fm/runeslinger Joe (Hindsightless) https://anchor.fm/joe-richter9 Karl (The GMologist Presents) https://anchor.fm/karl-rodriguez Colin (Spikepit) https://anchor.fm/spikepit MW (The Worlds of MW Lewis) https://anchor.fm/mwlewis Come to Grogcon in Florida at the end of September! https://www.grogcon.com/ Proud member of the Grog-talk Empire having been bestowed the title of The Governor Most Radiant Grandeur Baron The Belligerent Hero of The Valley. https://www.grogcon.com/podcast/ You can leave me a message here on Anchor, at nerdsrpgvarietycast 'at' gmail 'dot' com or find me on the Audio Dungeon Discord. Ray Otus did the coffee cup art for this show, you can find his blog at https://rayotus.carrd.co/ TJ Drennon provides music for my show. --- Send in a voice message: https://anchor.fm/jason376/message
Endlich sind unsere 4 Reisenden aus diesem verdammten Tunnel raus… Anscheinend wird ihn Mal eine Pause gegönnt…. Ob sie jemals aus diesem Albtraum erwachen? Aber immerhin hat jemand Suppe gekocht. Abenteurer: Kayliah, Shoggoth, Suzu, Din Viesel DM: Jonsi Bild: Christian.pick KANN WERBUNG ENTHALTEN(?)
Nichtmehr lange und unsere Abenteurer stehen vor dem letzten Schalter. Halten die 4 Chaoten den perfiden Spielen des Gangs stand oder entscheiden Sie sich doch dafür, einfach auf “Er” zu drücken? Und was passiert eigentlich am Schluss? War da nicht noch etwas mit einem Hebel? Abenteurer: Kayliah, Shoggoth, Suzu, Din Viesel DM: Jonsi Bild: Christian.pick… Weiterlesen
Entscheidungen über Entscheidungen. Langsam fangen die Emotionen unserer Abenteurer an zu brodeln. Nehmen Sie heroisch die kommenden Fallen auf sich oder entscheiden Sie sich dazu, doch wieder eine der ‘Er’ Platten zu benutzen? Langsam begreift selbst Lori was bei diesem perfiden Rätsel auf dem Spiel steht… Abenteurer: Kayliah, Shoggoth, Suzu, Din Viesel DM: Jonsi Bild:… Weiterlesen
Nach den ersten Rätseln geht es direkt rätselhaft weiter. Unsere Abenteurer stecken in einem Gang und treffen immer wieder auf Schalter, bei denen sie eine schwere Entscheidung treffen müssen: “Du” oder “Er”. Doch was bedeutet das? Wer ist “Er”? Und was passiert, wenn man voreilige Entscheidungen trifft? Findet es heraus! Abenteurer: Kayliah, Shoggoth, Suzu, Din… Weiterlesen
Drei Schwächlinge, eine Entscheidung: Wer geht zuerst durch die Tür? Und stimmt die Theorie überhaupt, dass die Schwachen zuerst durch die Tür müssen? So oder so, im Anschluss an das erste Rätsel kommt direkt das zweite… Eine große Tür! Wer von unseren Abenteurern wird diese harte Nuss knacken? Abenteurer: Kayliah, Shoggoth, Suzu, Din Viesel DM:… Weiterlesen
When a lab tech perishes under mysterious circumstances while performing routine lab work on a chemical spill in the suburbs of Detroit, a Delta Green team is deployed to uncover the dire ramifications of mans environmental negligence. Cast:Claire - Nichola WagnerRobert - Randall MarshBaz - Chuck SamfordKyle - Clay LandusMax - HandlerRobert tweeting/screaming into the void about Kanye can be found here.Baz streams on Twitch as Future Wolfington.Kyle Ayers is the host of the Never Seen It podcast, check out his comedy album Happiness, his website or his Twitter.You can find Max and Claire on Twitter.Intro: Pulse Burst - Wolf Rayet Stars
A Shoggoth off, if you will. James and Ryan are back to recap and review the finale episode of HBO's new show Lovecraft Country entitled "Full Circle."Cover by @salena.barnesPatreon ($1 a month:) www.patreon.com/HBOBOIZGive the pod a listen and then tell us your feelings over on Twitter @JamesWatchesMen & @WestWorldRyanSupport the show: https://www.patreon.com/HBOBOIZSee omnystudio.com/listener for privacy information.Advertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy
Rod and Karen review Lovecraft Country episode 8: Jig-A-Bobo Twitter: @rodimusprime @SayDatAgain @TBGWT Email: theblackguywhotips@gmail.com Blog: www.theblackguywhotips.com Voice Mail: 704-557-0186 Charlotte Podcast Festival Events: Hey, Is This Thing On? 10/8/20 6pm EST Support for this Podcast Comes from Listeners… Like You! Transform Audio Productions into Live (or Online) Events
We continue our streak of Horror Month 2016 with an episode dedicated to H.P. Lovecraft's famous 1936 novella. Our 13th episode no less. Listen for a continued discussion on the perennially fascinating use of the "Terra Incognita," a comparison to Joseph Conrad's "Heart of Darkness," as well as the Shoggoth as allegory and our new favorite academic field, Mirage Studies. If we missed anything, make sure to let us know on our Facebook page, tweet us at @casualacademic, or write us an email at thecasualacademic@gmail.com. Happy listening!
Por fin, después de tentaculares deliberaciones, tras afrontar los pavores de todo proceso selectivo, para todos los que aguardábamos con impaciencia el veredicto, tenemos el honor de anunciar al ganador de la II edición del concurso de relatos de fantasía, terror y ciencia ficción de Noviembre Nocturno. Nuestro jurado, reunido en aquelarre, ha deliberado por fin con ayuda de los Grandes antiguos que el autor merecedor del tentacular galardón sea Vince Pérez @vincepch, por su relato "El último de los Marstan". Agradecemos sinceramente a todos los participantes que se han animado a enviar sus pavores, algunos de los relatos han evocado imágenes de horror y fantasía que sin duda perdurarán en las mentes de los miembros de Noviembre Nocturno para siempre. Y enviamos también un caluroso saludo a todos los que se han implicado en la realización y puesta en marcha de este concurso dado el éxito de las dos primeras convocatorias corren el riesgo de que repitamos el año próximo. Así que vayan preparando sus plumas amigos. Y ahora, acomódense en su cubil favorito, apaguen las luces, suban el volumen, sírvanse un licor de Shoggoth y prepárense para enfrentar el enigma que se esconde tras la carta, de "El último de los Marstan". Escucha el episodio completo en la app de iVoox, o descubre todo el catálogo de iVoox Originals