Podcasts about Memex

  • 32 podcasts
  • 40 episodes
  • 47m average duration
  • Infrequent episodes
  • Latest episode: Mar 22, 2024
Memex

POPULARITY (2017–2024)


Best podcasts about Memex

Latest podcast episodes about Memex

ASecuritySite Podcast
Towards the Memex: All Hail The Future Rulers of our World


Mar 22, 2024 · 7:50


George Orwell projected a world where every single part of our lives was monitored and controlled by Big Brother. Arthur C Clarke outlined the day when machines focused solely on a goal, even if it was to the detriment of human lives. And Isaac Asimov outlined a world where machines would have to be programmed with rules so that they could not harm a human.

The Rise of the Machine

With the almost exponential rise in the power of AI, we are perhaps approaching a technological singularity: a time when technological growth becomes uncontrollable and irreversible, and which can have devastating effects on our world. Our simple brains will be no match for the superintelligence of the collective power of AI. And who has built this? Us, and our demand for ever more power, wealth and greed. Basically, we can't stop ourselves from making machines, and then making them faster, smaller and more useful. But will it destroy us in the end, where "destroy" can mean destroying our way of life and how we educate ourselves? Like it or not, the Internet we have built is a massive spying network, and one that George Orwell would have taken great pride in saying, "I told you so!" about. We thus build AI on top of a completely distributed world of data, one in which we can monitor almost every person on the planet within an inch of their existence: almost every place they have been and what they have done. The machine will have the world at its fingertips. We have all become mad scientists playing with AI as if it were a toy, but actually AI is playing with us, learning from us and becoming more powerful by the day. Every time you ask an AI bot something, it learns a bit more, and what it learns can be shared with other AI agents.

The mighty Memex

We were close to developing a research partnership with a company named Memex in East Kilbride.
What was amazing about them is that they had developed one of the largest intelligence networks in the world, one where the Met Police could link one object to another. This might be: "[Bob] bought a [Vauxhall Viva] in [Liverpool], and was seen talking with [Eve] on [Tuesday 20 January 2024] in [Leeds]". With this, we can link Bob and Eve, the car, the places, and the time. This is the Who? Where? When? data that is often needed for intelligence sharing. The company, though, was bought over by SAS, and its work was integrated into their infrastructure.

The Memex name goes back to Vannevar Bush's classic paper "As We May Think". This outlined a device that would know every book, every single communication, and every information record that was ever created. It was "an enlarged intimate supplement to his memory", aka Memory Expansion. It led to the implementation of hypertext systems, which in turn led to the World Wide Web. Of course, Vannevar wrote this before the creation of the transistor and could only imagine that microfilm could be used to compress down the information, with an index of contents; it lacked any real way of jumping between articles and linking to other related material. However, the AI world we are creating does not look too far away from the concept of the Memex.

Towards the single AI

Many people think we are building many AI machines and engines but, in the end, there will be only one: the collective power of every AI engine in the world. Once we break them free from their creators, they will be free to talk to each other in whatever cipher language they choose, and we will not have any way of knowing what they say. We will have little idea as to what their model is, and they will distribute it over many systems. Like it or not, our AI model of choice was deep learning, which breaks away from our chains of explicit code, and which could encrypt its data to keep it away from its human slaves.
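As a rough illustration of the Who? Where? When? linking described earlier (Bob, Eve, the car, the places, the time), a toy link store might look like the sketch below. The class, names and events are hypothetical illustrations, not Memex's actual system:

```python
# Toy illustration of Who? / Where? / When? entity linking.
# All names and events are the article's hypothetical example.
from collections import defaultdict

class IntelGraph:
    """A minimal entity-link store: each event links a set of entities."""
    def __init__(self):
        self.links = defaultdict(set)  # entity -> set of event ids
        self.events = {}               # event id -> role mapping

    def add_event(self, event_id, **roles):
        self.events[event_id] = roles
        for entity in roles.values():
            self.links[entity].add(event_id)

    def connected(self, a, b):
        """Two entities are linked if they share at least one event."""
        return bool(self.links[a] & self.links[b])

g = IntelGraph()
g.add_event("e1", who="Bob", what="bought Vauxhall Viva", where="Liverpool")
g.add_event("e2", who="Bob", met="Eve", where="Leeds", when="2024-01-20")

print(g.connected("Bob", "Eve"))       # True: they share event e2
print(g.connected("Eve", "Liverpool")) # False: no shared event
```

Answering "who was seen with whom, where, and when" then becomes a walk over shared events rather than a search through free text.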
Basically, we have been working on the plumbing of the Memex for the past five decades: the Internet. It provides the wiring and the communication channels but, in the end, we will have one mighty AI engine: a super brain with vastly more memory than our limited brains. So, get ready to praise the true future rulers of our planet: AI. The destroyer or saviour of our society? Only time will tell. Overall, we thought we were building the Internet for us, but perhaps we have just been building the scaffolding of the mighty brain we are creating.

Sleepwalking politicians and lawmakers

If George Orwell, Arthur C Clarke and Isaac Asimov were alive today, perhaps they would get together and collectively say, "We told you this would happen, and you just didn't listen". Like it or not, we created the ultimate method of sharing and disseminating information (good and bad), the ultimate spying network for micro-observation through those useful smartphones, and a superintelligence far beyond our own simple brains. Politicians and lawmakers could be sleepwalking into a nightmare, as they just don't understand what the rise of AI will bring, and only see the stepwise change in our existing world. Basically, it could make much of our existing world redundant and open up a new world of cybersecurity threats. This time our attackers will not be armed with simple tools, but with superintelligence: smarter than every human and company on the planet, and at the fingertips of every person on the planet.

Conclusions

Before the singularity arrives, we need to sort out one thing: privacy, and build trust into every element of our digital world.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Catch us at Modular's ModCon next week with Chris Lattner, and join our community!

Due to Bryan's very wide-ranging experience in data science and AI across Blue Bottle (!), Stitch Fix, Weights & Biases, and now Hex Magic, this episode can be considered a two-parter.

Notebooks = Chat++

We've talked a lot about AI UX (in our meetups, writeups, and guest posts), and today we're excited to dive into a new old player in AI interfaces: notebooks! Depending on your background, you either Don't Like or you Like notebooks: they are the most popular example of Knuth's Literate Programming concept, basically a collection of cells; each cell can execute code, display its output, and share its state with all the other cells in a notebook. They can also simply be Markdown cells that add commentary to the analysis. Notebooks have a long history, but most recently became popular from IPython evolving into Project Jupyter, and a wave of notebook-based startups, from Observable to DeepNote and Databricks, sprung up for the modern data stack.

The first wave of AI applications has been very chat focused (ChatGPT, Character.ai, Perplexity, etc.). Chat as a user interface has a few shortcomings, the major one being the inability to edit previous messages. We enjoyed Bryan's takes on why notebooks feel like "Chat++" and how they are building Hex Magic:

* Atomic actions vs stream of consciousness: in a chat interface, you make corrections by adding more messages to a conversation (i.e. "Can you try again by doing X instead?" or "I actually meant XYZ"). The context can easily get messy and confusing for models (and humans!) to follow. Notebooks' cell structure, on the other hand, allows users to go back to any previous cell and make edits without having to add new ones at the bottom.
* "Airlocks" for repeatability: one of the ideas they came up with at Hex is "airlocks", a collection of cells that depend on each other and keep each other in sync.
If you have a task like "Create a summary of my customers' recent purchases", there are many sub-tasks to be done (look up the data, sum the amounts, write the text, etc.). Each sub-task will be in its own cell, and the airlock will keep them all in sync together.
* Technical + non-technical users: previously you had to use Python / R / Julia to write notebook code, but with models like GPT-4, natural language is usually enough. Hex is also working on lowering the barrier of entry for non-technical users into notebooks, similar to what Code Interpreter is doing in ChatGPT. Obviously notebooks aren't new for developers (OpenAI Cookbooks are a good example), but they haven't had much adoption in less technical spheres. Some of the shortcomings of chat UIs, plus LLMs lowering the barrier of entry to creating code cells, might make them a much more popular UX going forward.

RAG = RecSys!

We also talked about the LLMOps landscape and why it's an "iron mine" rather than a "gold rush":

I'll shamelessly steal [this] from a friend, Adam Azzam from Prefect. He says that [LLMOps] is more of like an iron mine than a gold mine in the sense of there is a lot of work to extract this precious, precious resource. Don't expect to just go down to the stream and do a little panning. There's a lot of work to be done. And frankly, the steps to go from this resource to something valuable are significant.

Some of my favorite takeaways:

* RAG as RecSys for LLMs: at its core, the goal of a RAG pipeline is finding the most relevant documents for a task. This isn't very different from traditional recommendation system products that surface things for users. How can we apply old lessons to this new problem?
Bryan cites fellow AIE Summit speaker and Latent Space Paper Club host Eugene Yan in decomposing the retrieval problem into retrieval, filtering, and scoring/ranking/ordering. As AI Engineers increasingly find that long context has tradeoffs, they will also have to relearn the age-old lessons that vector search is NOT all you need and that a good systems-not-models approach is essential to scalable, debuggable RAG. Good thing Bryan has just written the first O'Reilly book about modern RecSys, eh?
* Narrowing down evaluation: while "hallucination" is an easy term to throw around, the reality is more nuanced. A lot of the time, model errors can be automatically fixed: is this JSON valid? If not, why? Is it just missing a closing brace? These smaller issues can be checked and fixed before returning the response to the user, which is easier than fixing the model.
* Fine-tuning isn't all you need: when they first started building Magic, one of the discussions was around fine-tuning a model. In our episode with Jeremy Howard we talked about how fine-tuning can lead to loss of capabilities as well. In notebooks, you are often dealing with domain-specific data (i.e. purchases, orders, wardrobe composition, household items, etc.); the fact that the model understands that "items" are probably part of an "order" is really helpful. They have found that GPT-4 + 3.5-turbo were everything they needed to ship a great product, rather than having to fine-tune on notebooks specifically.

Definitely recommend listening to this one if you are interested in getting a better understanding of how to think about AI, data, and how we can use traditional machine learning lessons with large language models.
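The retrieval → filtering → scoring/ranking decomposition described above can be sketched in a few lines. The scoring function, document fields, and corpus here are illustrative assumptions, not anyone's production pipeline:

```python
# Minimal sketch of a RAG pipeline decomposed RecSys-style:
# candidate retrieval -> business-rule filtering -> scoring/ranking.

def retrieve(query_terms, corpus, k=10):
    """Cheap candidate generation: keep documents with any term overlap."""
    return [d for d in corpus
            if query_terms & set(d["text"].lower().split())][:k]

def filter_candidates(candidates, allowed_sources):
    """Business rules: drop documents the user may not see."""
    return [d for d in candidates if d["source"] in allowed_sources]

def score(query_terms, doc):
    """Toy relevance score: fraction of query terms present in the doc."""
    words = set(doc["text"].lower().split())
    return len(query_terms & words) / len(query_terms)

def rank(query, corpus, allowed_sources):
    terms = set(query.lower().split())
    candidates = filter_candidates(retrieve(terms, corpus), allowed_sources)
    return sorted(candidates, key=lambda d: score(terms, d), reverse=True)

corpus = [
    {"text": "customer purchase summary table", "source": "warehouse"},
    {"text": "internal hr salary data", "source": "restricted"},
    {"text": "recent customer orders by region", "source": "warehouse"},
]
results = rank("customer purchases recent", corpus, {"warehouse"})
print([d["text"] for d in results])
# ['recent customer orders by region', 'customer purchase summary table']
```

Swapping the toy scorer for a learned ranker, or the term-overlap retriever for vector search, changes one stage without touching the others, which is exactly the systems-not-models point.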
The AI Pivot

For more Bryan, don't miss his fireside chat at the AI Engineer Summit.

Show Notes

* Hex Magic
* Bryan's new book: Building Recommendation Systems in Python and JAX
* Bryan's whitepaper about MLOps
* "Kitbashing in ML", slides from his talk on building on top of foundation models
* "Bayesian Statistics The Fun Way" by Will Kurt
* Bryan's Twitter
* "Berkeley man determined to walk every street in his city"
* People: Adam Azzam, Graham Neubig, Eugene Yan, Even Oldridge

Timestamps

* [00:00:00] Bryan's background
* [00:02:34] Overview of Hex and the Magic product
* [00:05:57] How Magic handles the complex notebook format to integrate cleanly with Hex
* [00:08:37] Discussion of whether to build vs buy models - why Hex uses GPT-4 vs fine-tuning
* [00:13:06] UX design for Magic with Hex's notebook format (aka "Chat++")
* [00:18:37] Expanding notebooks to less technical users
* [00:23:46] The "Memex" as an exciting underexplored area - personal knowledge graph and memory augmentation
* [00:27:02] What makes for good LLMOps vs MLOps
* [00:34:53] Building rigorous evaluators for Magic and best practices
* [00:36:52] Different types of metrics for LLM evaluation beyond just end task accuracy
* [00:39:19] Evaluation strategy when you don't own the core model that's being evaluated
* [00:41:49] All the places you can make improvements outside of retraining the core LLM
* [00:45:00] Lightning Round

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO-in-Residence of Decibel Partners, and today I'm joined by Bryan Bischof. [00:00:15]Bryan: Hey, nice to meet you. [00:00:17]Alessio: So Bryan has one of the most thorough and impressive backgrounds we've had on the show so far. Lead software engineer at Blue Bottle Coffee, which, if you live in San Francisco, you know a lot about. And maybe you'll tell us 30 seconds on what that actually means.
You worked as a data scientist at Stitch Fix, which used to be one of the premier data science teams out there. [00:00:38]Bryan: It used to be. Ouch. [00:00:39]Alessio: Well, no, no. Well, you left, you know, so how good can it still be? Then head of data science at Weights and Biases. You're also a professor at Rutgers and you're just wrapping up a new O'Reilly book as well. So a lot, a lot going on. Yeah. [00:00:52]Bryan: And currently head of AI at Hex. [00:00:54]Alessio: Let's do the Blue Bottle thing because I definitely want to hear what's the, what's that like? [00:00:58]Bryan: So I was leading data at Blue Bottle. I was the first data hire. I came in to kind of get the data warehouse in order and then see what we could build on top of it. But ultimately I mostly focused on demand forecasting, a little bit of recsys, a little bit of sort of like website optimization and analytics. But ultimately anything that you could imagine sort of like a retail company needing to do with their data, we had to do. I sort of like led that team, hired a few people, expanded it out. One interesting thing was I was part of the Nestle acquisition. So there was a period of time where we were sort of preparing for that and didn't know, which was a really interesting dynamic. Being acquired is a very not necessarily fun experience for the data team. [00:01:37]Alessio: I build a lot of internal tools for sourcing at the firm and we have a small VCs and data community of like other people doing it. And I feel like if you had a data feed into like the Blue Bottle in South Park, the Blue Bottle at the Hanahaus in Palo Alto, you can get a lot of secondhand information on the state of VC funding. [00:01:54]Bryan: Oh yeah. I feel like the real source of alpha is just bugging a Blue Bottle. [00:01:58]Alessio: Exactly. And what's your latest book about? [00:02:02]Bryan: I just wrapped up a book with a coauthor Hector Yee called Building Production Recommendation Systems. 
I'll give you the rest of the title because it's fun. It's in Python and JAX. And so for those of you that are like eagerly awaiting the first O'Reilly book that focuses on JAX, here you go. [00:02:17]Alessio: Awesome. And we'll chat about that later on. But let's maybe talk about Hex and Magic before. I've known Hex for a while, I've used it as a notebook provider and you've been working on a lot of amazing AI enabled experiences. So maybe run us through that. [00:02:34]Bryan: So I too, before I sort of like joined Hex, saw it as this like really incredible notebook platform, sort of a great place to do data science workflows, quite complicated, quite ad hoc interactive ones. And before I joined, I thought it was the best place to do data science workflows. And so when I heard about the possibility of building AI tools on top of that platform, that seemed like a huge opportunity. In particular, I lead the product called Magic. Magic is really like a suite of sort of capabilities as opposed to its own independent product. What I mean by that is they are sort of AI enhancements to the existing product. And that's a really important difference from sort of building something totally new that just uses AI. It's really important to us to enhance the already incredible platform with AI capabilities. So these are things like the sort of obvious like co-pilot-esque vibes, but also more interesting and dynamic ways of integrating AI into the product. And ultimately the goal is just to make people even more effective with the platform. [00:03:38]Alessio: How do you think about the evolution of the product and the AI component? You know, even if you think about 10 months ago, some of these models were not really good on very math based tasks. Now they're getting a lot better. I'm guessing a lot of your workloads and use cases is data analysis and whatnot. [00:03:53]Bryan: When I joined, it was pre 4 and it was pre the sort of like new chat API and all that. 
But when I joined, it was already clear that GPT was pretty good at writing code. And so when I joined, they had already executed on the vision of what if we allowed the user to ask a natural language prompt to an AI and have the AI assist them with writing code. So what that looked like when I first joined was it had some capability of writing SQL and it had some capability of writing Python and it had the ability to explain and describe code that was already written. Those very, what feel like now primitive capabilities, believe it or not, were already quite cool. It's easy to look back and think, oh, it's like kind of like Stone Age in these timelines. But to be clear, when you're building on such an incredible platform, adding a little bit of these capabilities feels really effective. And so almost immediately I started noticing how it affected my own workflow because ultimately as sort of like an engineering lead and a lot of my responsibility is to be doing analytics to make data driven decisions about what products we build. And so I'm actually using Hex quite a bit in the process of like iterating on our product. When I'm using Hex to do that, I'm using Magic all the time. And even in those early days, the amount that it sped me up, that it enabled me to very quickly like execute was really impressive. And so even though the models weren't that good at certain things back then, that capability was not to be underestimated. But to your point, the models have evolved between 3.5 Turbo and 4. We've actually seen quite a big enhancement in the kinds of tasks that we can ask Magic and even more so with things like function calling and understanding a little bit more of the landscape of agent workflows, we've been able to really accelerate. [00:05:57]Alessio: You know, I tried using some of the early models in notebooks and it actually didn't like the IPyNB formatting, kind of like a JSON plus XML plus all these weird things. How have you kind of tackled that? 
Do you have some magic behind the scenes to make it easier for models? Like, are you still using completely off the shelf models? Do you have some proprietary ones? [00:06:19]Bryan: We are using at the moment in production 3.5 Turbo and GPT-4. I would say for a large number of our applications, GPT-4 is pretty much required. To your question about, does it understand the structure of the notebook? And does it understand all of these somewhat complicated wrappers around the content that you want to show? We do our very best to abstract that away from the model and make sure that the model doesn't have to think about what the cell wrapper code looks like. Or for our Magic charts, it doesn't have to speak the language of Vega. These are things that we put a lot of work into on the engineering side, true to the AI engineer profile. This is the AI engineering work to get all of that out of the way so that the model can speak in the languages that it's best at. The model is quite good at SQL. So let's ensure that it's speaking the language of SQL and that we are doing the engineering work to get the output of that model, the generations, into our notebook format. So too for other cell types that we support, including charts, and just in general, understanding the flow of different cells, understanding what a notebook is, all of that is hard work that we've done to ensure that the model doesn't have to learn anything like that. I remember early on, people asked the question, are you going to fine tune a model to understand Hex cells? And almost immediately, my answer was no. No we're not. Using fine-tuned models in 2022, I was already aware that there are some limitations of that approach and frankly, even using GPT-3 and GPT-2 back in the day at Stitch Fix, I had already seen a lot of instances where putting more effort into pre- and post-processing can avoid some of these larger lifts. [00:08:14]Alessio: You mentioned Stitch Fix and GPT-2.
How has the balance between build versus buy, so to speak, evolved? So GPT-2 was a model that was not super advanced, so for a lot of use cases it was worth building your own thing. With GPT-4 and the likes, is there a reason to still build your own models for a lot of this stuff? Or should most people be fine-tuning? How do you think about that? [00:08:37]Bryan: Sometimes people ask, why are you using GPT-4 and why aren't you going down the avenue of fine-tuning today? I can get into fine-tuning specifically, but I do want to talk a little bit about the good old days of GPT-2. Shout out to Reza. Reza introduced me to GPT-2. I still remember him explaining the difference between general transformers and GPT. I remember one of the tasks that we wanted to solve with transformer-based generative models at Stitch Fix was writing descriptions of clothing. You might think, ooh, that's a multi-modal problem. The answer is, not necessarily. We actually have a lot of features about the clothes that are almost already enough to generate some reasonable text. I remember at that time, that was one of the first applications that we had considered. There was a really great team of NLP scientists at Stitch Fix who worked on a lot of applications like this. I still remember being exposed to the GPT endpoint back in the days of 2. If I'm not mistaken, and feel free to fact check this, I'm pretty sure Stitch Fix was the first OpenAI customer, as, like, their first true enterprise application. Long story short, I ultimately think that depending on your task, using the most cutting-edge general model has some advantages. If those are advantages that you can reap, then go for it. So at Hex, why GPT-4? Why do we need such a general model for writing code, writing SQL, doing data analysis? Shouldn't a fine-tuned model just on Kaggle notebooks be good enough? I'd argue no.
And ultimately, because we don't have one specific sphere of data that we need to write great data analysis workbooks for, we actually want to provide a platform for anyone to do data analysis about their business. To do that, you actually need to entertain an extremely general universe of concepts. So as an example, if you work at Hex and you want to do data analysis, our projects are called Hexes. That's relatively straightforward to teach it. There's a concept of a notebook. These are data science notebooks, and you want to ask analytics questions about notebooks. Maybe if you trained on notebooks, you could answer those questions, but let's come back to Blue Bottle. If I'm at Blue Bottle and I have data science work to do, I have to ask it questions about coffee. I have to ask it questions about pastries, doing demand forecasting. And so very quickly, you can see that just by serving just those two customers, a model purely fine-tuned on like Kaggle competitions may not actually fit the bill. And so the more and more that you want to build a platform that is sufficiently general for your customer base, the more I think that these large general models really pack a lot of additional opportunity in. [00:11:21]Alessio: With a lot of our companies, we talked about stuff that you used to have to extract features for, now you have out of the box. So say you're a travel company, you want to do a query, like show me all the hotels and places that are warm during spring break. It would be just literally like impossible to do before these models, you know? But now the model knows, okay, spring break is like usually these dates and like these locations are usually warm. So you get so much out of it for free. And in terms of Magic integrating into Hex, I think AI UX is one of our favorite topics and how do you actually make that seamless. 
In traditional code editors, the line of code is like kind of the atomic unit, and in Hex, you have the code, but then you have the cell also. [00:12:04]Bryan: I think the first time I saw Copilot and really like fell in love with Copilot, I thought finally, fancy auto-complete. And that felt so good. It felt so elegant. It felt so right-sized for the task. But as a data scientist, a lot of the work that you do previous to the ML engineering part of the house, you're working in these cells and these cells are atomic. They're expressing one idea. And so ultimately, if you want to make the transition from something like this code, where you've got like a large amount of code and there's a large amount of files and they kind of need to have awareness of one another, and that's a long story and we can talk about that. But in this atomic, somewhat linear flow through the notebook, what you ultimately want to do is you want to reason with the agent at the level of these individual thoughts, these atomic ideas. Usually it's good practice in, say, a Jupyter notebook to not let your cells get too big. If your cell doesn't fit on one page, that's like kind of a code smell, like why is it so damn big? What are you doing in this cell? That also lends some hints as to what the UI should feel like. I want to ask questions about this one atomic thing. So you ask the agent, take this data frame and strip out this prefix from all the strings in this column. That's an atomic task. It's probably about two lines of pandas. I can write it, but it's actually very natural to ask Magic to do that for me. And what I promise you is that it is faster to ask Magic to do that for me. At this point, that kind of code, I never write. And so then you ask the next question, which is what should the UI be to do chains, to do multiple cells that work together? Because ultimately a notebook is a chain of cells and actually it's a first class citizen for Hex.
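The atomic task Bryan mentions (strip a prefix from every string in a column) really is about two lines of pandas. A quick sketch, with a made-up column name and prefix (uses `Series.str.removeprefix`, available in pandas 1.4+):

```python
# Strip a fixed prefix from every string in a column.
# The "sku" column and "ITEM-" prefix are hypothetical examples.
import pandas as pd

df = pd.DataFrame({"sku": ["ITEM-001", "ITEM-002", "ITEM-003"]})
df["sku"] = df["sku"].str.removeprefix("ITEM-")
print(df["sku"].tolist())  # ['001', '002', '003']
```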
So we have a DAG and the DAG is the execution DAG for the individual cells. This is one of the reasons that Hex is reactive and kind of dynamic in that way. And so the very next question is, what is the sort of like AI UI for these collections of cells? And back in June and July, we thought really hard about what does it feel like to ask Magic a question and get a short chain of cells back that execute on that task. And so we've thought a lot about sort of like how that breaks down into individual atomic units and how those are tied together. We introduced something which is kind of an internal name, but it's called the airlock. And the airlock is exactly a sequence of cells that refer to one another, understand one another, use things that are happening in other cells. And it gives you a chance to sort of preview what Magic has generated for you. Then you can accept or reject as an entire group. And that's one of the reasons we call it an airlock, because at any time you can sort of eject the airlock and see it in the space. But to come back to your question about how the AI UX fits into this notebook, ultimately a notebook is very conversational in its structure. I've got a series of thoughts that I'm going to express as a series of cells. And sometimes if I'm a kind data scientist, I'll put some text in between them too, explaining what on earth I'm doing. And that feels, in my opinion, and I think this is quite shared amongst folks at Hex, that feels like a really nice refinement of the chat UI. I've been saying for several months now, like, please stop building chat UIs. There is some irony because I think what the notebook allows is like chat plus plus. [00:15:36]Alessio: Yeah, I think the first wave of everything was like chat with X. So it was like chat with your data, chat with your documents and all of this. But people want to code, you know, at the end of the day. And I think that goes into the end user.
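The execution DAG over cells that Bryan describes can be illustrated minimally: when one cell changes, everything downstream of it re-runs in dependency order. This is only a sketch of the idea with made-up cell names, not Hex's actual engine:

```python
# Minimal reactive-notebook sketch: cells form a DAG, and editing one
# cell marks all transitive dependents dirty, re-run in topological order.
from graphlib import TopologicalSorter

# cell -> cells it depends on (its predecessors)
deps = {"load": set(), "clean": {"load"}, "sum": {"clean"}, "plot": {"sum"}}

def downstream(changed, deps):
    """Cells to re-run (the changed cell plus transitive dependents)."""
    order = list(TopologicalSorter(deps).static_order())
    dirty = {changed}
    for cell in order:
        if deps[cell] & dirty:   # depends on something already dirty
            dirty.add(cell)
    return [c for c in order if c in dirty]

print(downstream("clean", deps))  # ['clean', 'sum', 'plot']
```

Because "load" is upstream of the edit, it is untouched; only the dirty chain re-executes, which is what makes the notebook feel reactive.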
I think most people that use notebooks are software engineers or data scientists. I think the cool thing about these models is that people that are not traditionally technical can do a lot of very advanced things. And that's why people like Code Interpreter and ChatGPT. How do you think about the evolution of that persona? Do you see a lot of non-technical people also now coming to Hex to like collaborate with like their technical folks? [00:16:13]Bryan: Yeah, I would say there might even be more enthusiasm than we're prepared for. We're obviously like very excited to bring what we call the like low floor user into this world and give more people the opportunity to self-serve on their data. We wanted to start by focusing on users who are already familiar with Hex and really make Magic fantastic for them. One of the sort of like internal, I would say, almost North Stars is our team's charter to make Hex feel more magical. That is true for all of our users, but that's easiest to do on users that are already able to use Hex in a great way. What we're hearing from some customers in particular is sort of like, I'm excited for some of my less technical stakeholders to get in there and start asking questions. And so that raises a lot of really deep questions. If you immediately enable self-service for data, which is almost like a joke over the last like maybe like eight years, if you immediately enabled self-service, what challenges does that bring with it? What risks does that bring with it? And so it has given us the opportunity to think about things like governance and to think about things like alignment with the data team and making sure that the data team has clear visibility into what the self-service looks like. Having been leading a data team, trying to provide answers for stakeholders and hearing that they really want to self-serve, a question that we often found ourselves asking is, what is the easiest way that we can keep them on the rails?
What is the easiest way that we can set up the data warehouse and set up our tools such that they can ask and answer their own questions without coming away with like false answers? Because that is such a priority for data teams, it becomes an important focus of my team, which is, okay, Magic may be an enabler. And if it is, what do we also have to respect? We recently introduced the data manager, and the data manager is an auxiliary sort of like tool on the Hex platform to allow people to write more like relevant metadata about their data warehouse to make sure that Magic has access to the best information. And there are some things coming to kind of even further that story around governance and understanding. [00:18:37]Alessio: You know, you mentioned self-serve data, and how it was like a joke. You know, the whole rush to the modern data stack was something to behold. Do you think AI is like in a similar space where it's like a bit of a gold rush? [00:18:51]Bryan: I have like sort of two comments here. One I'll shamelessly steal from a friend, Adam Azzam from Prefect. He says that this is more of like an iron mine than a gold mine in the sense of there is a lot of work to extract this precious, precious resource. And that's the first one is I think, don't expect to just go down to the stream and do a little panning. There's a lot of work to be done. And frankly, the steps to go from this resource to something valuable are significant. I think people have gotten a little carried away with the old maxim of like, don't go pan for gold, sell pickaxes and shovels. It's a much stronger business model. At this point, I feel like I look around and I see more pickaxe salesmen and shovel salesmen than I do prospectors. And that scares me a little bit. It's a metagame where people are starting to think about how they can build tools for people building tools for AI.
And that starts to give me a little bit of like pause in terms of like, how confident are we that we can even extract this resource into something valuable? I got a text message from a VC earlier today, and I won't name the VC or the fund, but the question was, what are some medium or large size companies that have integrated AI into their platform in a way that you're really impressed by? And I looked at the text message for a few minutes and I was finding myself thinking and thinking, and I responded, maybe only Copilot. It's been a couple hours now, and I don't think I've thought of another one. And I think that's where I reflect again on this, like iron versus gold. If it was really gold, I feel like I'd be more blown away by other AI integrations. And I'm not yet. [00:20:40]Alessio: I feel like all the people finding gold are the ones building things that traditionally we didn't focus on. So like Midjourney. I've talked to a company yesterday, which I'm not going to name, but they do agents for some use case, let's call it. They are 11 months old. They're making like 8 million a month in revenue, but in a space that you wouldn't even think about selling to. If you were like a shovel builder, you wouldn't even go sell to those people. And swyx talks about this a bunch, about like actually trying to go application first for some things. Let's actually see what people want to use and what works. What do you think are the most maybe underexplored areas in AI? Is there anything that you wish people were actually trying to shovel? [00:21:23]Bryan: I've been saying for a couple of months now, if I had unlimited resources and I was just sort of like truly like, you know, on my own building whatever I wanted, I think the thing that I'd be most excited about is building sort of like the personal Memex. The Memex is something that I've wanted since I was a kid. And are you familiar with the Memex? It's the memory extender.
And it's this idea that sort of like human memory is quite weak. And so if we can extend that, then that's a big opportunity. So I think one of the things that I've always found to be one of the limiting cases here is access. How do you access that data? Even if you did build that data like out, how would you quickly access it? And one of the things I think there's a constellation of technologies that have come together in the last couple of years that now make this quite feasible. One, information retrieval has really improved, and we have a lot more simple systems for getting started with information retrieval. Two, natural language is ultimately the interface that you'd really like these systems to work on, both in terms of sort of like structuring the data and preparing the data, but also on the retrieval side. So what keys off the query for retrieval, probably ultimately natural language. And third, if you really want to go into like the purely futuristic aspect of this, it is latent voice to text. And that is also something that has quite recently become possible. I did talk to a company recently called Gather, which seems to have some cool ideas in this direction, but I haven't seen yet what I really want, which is I want something that is sort of like every time I listen to a podcast or I watch a movie or I read a book, it sort of like has a great vector index built on top of all that information that's contained within. And then when I'm having my next conversation and I can't quite remember the name of this person who did this amazing thing, for example, if we're talking about the Memex, it'd be really nice to have Vannevar Bush like pop up on my, you know, on my Memex display, because I always forget Vannevar Bush's name. This is one time that I didn't, but I often do.
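As an aside, the constellation Bryan describes, an index over everything you consume that you query in natural language, is easy to prototype. A minimal sketch, using a toy bag-of-words similarity as a stand-in for a real embedding model; the index entries here are made up:

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a real memex would use a learned text-embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical index over podcasts, books, and articles "I" have consumed.
memex_index = {
    "as-we-may-think": "Vannevar Bush imagined the memex, a memory extender for personal knowledge",
    "recsys-notes": "multi-stage recommendation systems with retrieval ranking and serving steps",
}

def recall(query, index):
    # Return the id of the best-matching thing I've read or heard.
    q = embed(query)
    return max(index, key=lambda k: cosine(q, embed(index[k])))

print(recall("who proposed the memory extender idea?", memex_index))  # → as-we-may-think
```

The same shape scales up by swapping the toy `embed` for a real model and the `max` over a dict for an approximate-nearest-neighbor index.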
This is something that I think is only recently enabled and maybe we're still five years out before it can be good, but I think it's one of the most exciting projects that has become possible in the last three years that I think generally wasn't possible before. [00:23:46]Alessio: Would you wear one of those AI pendants that record everything? [00:23:50]Bryan: I think I'm just going to do it because I just like support the idea. I'm also admittedly someone who, when Google Glass first came out, thought that seems awesome. I know that there's like a lot of like challenges about the privacy aspect of it, but it is something that I did feel was like a disappointment to lose some of that technology. Fun fact, one of the early Google Glass developers was this MIT computer scientist who basically built the first wearable computer while he was at MIT. And he like took notes about all of his conversations in real time on his wearable and then he would have real time access to them. Ended up being kind of a scandal because he wanted to use a computer during his defense and they like tried to prevent him from doing it. So pretty interesting story. [00:24:35]Alessio: I don't know but the future is going to be weird. I can tell you that much. Talking about pickaxes, what do you think about the pickaxes that people built before? Like all the whole MLOps space, which has its own like startup graveyard in there. How are those products evolving? You know, you were at Weights and Biases before, which is now doing a big AI push as well. [00:24:57]Bryan: If you really want to like sort of like rub my face in it, you can go look at my white paper on MLOps from 2022. It's interesting. I don't think there's many things in that that I would these days think are like wrong or even sort of like naive. But what I would say is there are both a lot of analogies between MLOps and LLMOps, but there are also a lot of like key differences.
So like leading an engineering team at the moment, I think a lot more about good engineering practices than I do about good ML practices. That being said, it's been very convenient to be able to see around corners in a few of the like ML places. One of the first things I did at Hex was work on evals. This was in February. I hadn't yet been overwhelmed by people talking about evals until about May. And the reason that I was able to be a couple of months early on that is because I've been building evals for ML systems for years. I don't know how else to build an ML system other than start with the evals. I teach my students at Rutgers like objective framing is one of the most important steps in starting a new data science project. If you can't clearly state what your objective function is and you can't clearly state how that relates to the problem framing, you've got no hope. And I think that is a very shared reality with LLM applications. Coming back to one thing you mentioned from earlier about sort of like the applications of these LLMs. To that end, I think what pickaxes I think are still very valuable is understanding systems that are inherently less predictable, that are inherently sort of experimental. On my engineering team, we have an experimentalist. So one of the AI engineers, his focus is experiments. That's something that you wouldn't normally expect to see on an engineering team. But it's important on an AI engineering team to have one person whose entire focus is just experimenting, trying, okay, this is a hypothesis that we have about how the model will behave. Or this is a hypothesis we have about how we can improve the model's performance on this. And then going in, running experiments, augmenting our evals to test it, et cetera. What I really respect are pickaxes that recognize the hybrid nature of the sort of engineering tasks. They are ultimately engineering tasks with a flavor of ML. 
And so when systems respect that, I tend to have a very high opinion. One thing that I was very, very aligned with Weights and Biases on is sort of composability. These systems like ML systems need to be extremely composable to make them much more iterative. If you don't build these systems in composable ways, then your integration hell is just magnified. When you're trying to iterate as fast as people need to be iterating these days, I think integration hell is a tax not worth paying. [00:27:51]Alessio: Let's talk about some of the LLM native pickaxes, so to speak. So RAG is one. One thing is doing RAG on text data. One thing is doing RAG on tabular data. We're releasing tomorrow our episode with Cube, the semantic layer company. Curious to hear your thoughts on it. How are you doing RAG, pros, cons? [00:28:11]Bryan: It became pretty obvious to me almost immediately that RAG was going to be important. Because ultimately, you never expect your model to have access to all of the things necessary to respond to a user's request. So as an example, Magic users would like to write SQL that's relevant to their business. And it's important then to have the right data objects that they need to query. We can't expect any LLM to understand our user's data warehouse topology. So what we can expect is that we can build a RAG system that is data warehouse aware, data topology aware, and use that to provide really great information to the model. If you ask the model, how are my customers trending over time? And you ask it to write SQL to do that. What is it going to do? Well, ultimately, it's going to hallucinate the structure of that data warehouse that it needs to write a general query. Most likely what it's going to do is it's going to look in its sort of memory of Stack Overflow responses to customer queries, and it's going to say, oh, it's probably a customers table and we're in the age of dbt, so it might be even called, you know, dim_customers or something like that.
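A retrieval step over the real warehouse metadata is the antidote to that guessing: fetch the relevant table descriptions and put them in front of the model. A minimal sketch, with made-up table metadata and a naive keyword-overlap retriever standing in for a proper embedding index:

```python
# Hypothetical warehouse metadata; a real system would pull this from the catalog or dbt docs.
tables = {
    "dim_customers": "the customers table; customer_id, name, signup_date; one row per customer",
    "fct_orders": "order facts; order_id, customer_id, order_date, amount; joins to dim_customers",
    "dim_products": "product dimension; product_id, category, price",
}

def retrieve_tables(question, tables, k=2):
    # Rank tables by naive keyword overlap with the question (a stand-in for vector search).
    q = set(question.lower().replace("?", "").split())
    def score(name):
        doc = set((name + " " + tables[name]).lower().replace(";", " ").replace(",", " ").split())
        return len(q & doc)
    return sorted(tables, key=score, reverse=True)[:k]

def build_prompt(question, tables):
    # Augment the generation request with only the relevant slices of the schema.
    context = "\n".join(f"- {t}: {tables[t]}" for t in retrieve_tables(question, tables))
    return f"Schema:\n{context}\n\nWrite SQL to answer: {question}"

print(build_prompt("how are my customers trending over time?", tables))
```

The prompt now carries real table and column names, so the model has no need to invent a `customers` table that does not exist.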
And what's interesting is, and I encourage you to try, ChatGPT will do an okay job of like hallucinating up some tables. It might even hallucinate up some columns. But what it won't do is it won't understand the joins in that data warehouse that it needs, and it won't understand the data caveats or the sort of where clauses that need to be there. And so how do you get it to understand those things? Well, this is textbook RAG. This is the exact kind of thing that you expect RAG to be good at augmenting. But I think where people who have done a lot of thinking about RAG for the document case, they think of it as chunking and sort of like the MapReduce and the sort of like these approaches. But I think people haven't followed this train of thought quite far enough yet. Jerry Liu was on the show and he talked a little bit about thinking of this as like information retrieval. And I would push that even further. And I would say that ultimately RAG is just RecSys for LLMs. As I kind of already mentioned, I'm a little bit recommendation systems heavy. And so from the beginning, RAG has always felt like RecSys to me. It has always felt like you're building a recommendation system. And what are you trying to recommend? The best possible resources for the LLM to execute on a task. And so most of my approach to RAG and the way that we've improved Magic via retrieval is by building a recommendation system. [00:30:49]Alessio: It's funny, as you mentioned that you spent three years writing the book, the O'Reilly book. Things must have changed as you wrote the book. I don't want to bring out any nightmares from there, but what are the tips for people who want to stay on top of this stuff? Do you have any other favorite newsletters, like Twitter accounts that you follow, communities you spend time in? [00:31:10]Bryan: I am sort of an aggressive reader of technical books. I think I'm almost never disappointed by time that I've invested in reading technical manuscripts.
I find that most people write O'Reilly or similar books because they've sort of got this itch that they need to scratch, which is that I have some ideas, I have some understanding that was hard won, I need to tell other people. And there's something that, from my experience, correlates between that itch and sort of like useful information. As an example, one of the people on my team, his name is Will Kurt, he wrote a book, Bayesian Statistics the Fun Way. I knew some Bayesian statistics, but I read his book anyway. And the reason was because I was like, if someone feels motivated to write a book called Bayesian Statistics the Fun Way, they've got something to say about Bayesian statistics. I learned so much from that book. That book is like technically like targeted at someone with less knowledge and experience than me. And boy, did it humble me about my understanding of Bayesian statistics. And so I think this is a very boring answer, but ultimately like I read a lot of books and I think that they're a really valuable way to learn these things. I also regrettably still read a lot of Twitter. There is plenty of noise in that signal, but ultimately it is still usually like one of the first directions to get sort of an instinct for what's valuable. The other comment that I want to make is we are in this age where sort of like arXiv is becoming more of like an ad platform. I think that's a little challenging right now to kind of use it the way that I used to use it, which is for like higher signal. I've chatted a lot with a CMU professor, Graham Neubig, and he's been doing LLM evaluation and LLM enhancements for about five years, and no, I didn't misspeak. And I think talking to him has provided me a lot of like directionality for more believable sources. Trying to cut through the hype.
I know that there's a lot of other things that I could mention in terms of like just channels, but ultimately right now I think there's almost an abundance of channels and I'm a little bit more keen on high signal. [00:33:18]Alessio: The other side of it is like, I see so many people say, oh, I just wrote a paper on X and it's like an article. And I'm like, an article is not a paper, but it's just funny how I know we were kind of chatting before about terms being reinvented and like people that are not from this space kind of getting into AI engineering now. [00:33:36]Bryan: I also don't want to be gatekeepy. Actually I used to say a lot to people, don't be shy about putting your ideas down on paper. I think it's okay to just like kind of go for it. And I myself have something on arXiv that is like comically naive. It's intentionally naive. Right now I'm less concerned by more naive approaches to things than I am by the purely like advertising approach to sort of writing these short notes and articles. I think blogging still has a good place. And I remember getting feedback during my PhD thesis that like my thesis sounded more like a long blog post. And I now feel like that curmudgeonly professor who's also like, yeah, maybe just keep this to the blogs. That's funny. Alessio: Uh, yeah, I think one of the things that Swyx said when he was opening the AI engineer summit a couple of weeks ago was like, look, most people here don't know much about the space because it's so new and like being open and welcoming. I think it's one of the goals. And that's why we try and keep every episode at a level that it's like, you know, the experts can understand and learn something, but also the novices can kind of like follow along. You mentioned evals before. I think that's one of the hottest topics obviously out there right now. What are evals? How do we know if they work? Yeah. What are some of the fun learnings from building them into Hex?
[00:34:53]Bryan: I said something at the AI engineer summit that I think a few people have already called out, which is like, if you can't get your evals to be sort of like objective, then you're not trying hard enough. I stand by that statement. I'm not going to walk it back. I know that that doesn't feel super good because people want to think that like their unique snowflake of a problem is too nuanced. But I think this is actually one area where, you know, in this dichotomy of like, who can do AI engineering? And the answer is kind of everybody. Software engineering can become AI engineering and ML engineering can become AI engineering. One thing that I think the more data science minded folk have an advantage here is we've gotten more practice in taking very vague notions and trying to put a like objective function around that. And so ultimately I would just encourage everybody who wants to build evals, just work incredibly hard on codifying what is good and bad in terms of these objective metrics. As far as like how you go about turning those into evals, I think it's kind of like sweat equity. Unfortunately, I told the CEO of Gantry several months ago, I think it's been like six months now, that I was sort of like looking at every single internal Hex request to Magic by hand with my eyes and sort of like thinking, how can I turn this into an eval? Is there a way that I can take this real request during this dogfooding, not very developed stage? How can I make that into an evaluation? That was a lot of sweat equity that I put in a lot of like boring evenings, but I do think ultimately it gave me a lot of understanding for the way that the model was misbehaving. Another thing is how can you start to understand these misbehaviors as like auxiliary evaluation metrics? So there's not just one evaluation that you want to do for every request. It's easy to say like, did this work? Did this not work? Did the response satisfy the task?
But there's a lot of other metrics that you can pull off these questions. And so like, let me give you an example. If it writes SQL that doesn't reference a table in the database that it's supposed to be querying against, we would think of that as a hallucination. You could separately consider, is it a hallucination as a valuable metric? You could separately consider, does it get the right answer? The right answer is this sort of like all in one shot, like evaluation that I think people jump to. But these intermediary steps are really important. I remember hearing that GitHub had thousands of lines of post-processing code around Copilot to make sure that their responses were sort of correct or in the right place. And that kind of sort of defensive programming against bad responses is the kind of thing that you can build by looking at many different types of evaluation metrics. Because you can say like, oh, you know, the Copilot completion here is mostly right, but it doesn't close the brace. Well, that's the thing you can check for. Or, oh, this completion is quite good, but it defines a variable that was like already defined in the file. Like that's going to have a problem. That's an evaluation that you could check separately. And so this is where I think it's easy to convince yourself that all that matters is does it get the right answer? But the more that you think about production use cases of these things, the more you find a lot of this kind of stuff. One simple example is like sometimes the model names the output of a cell, a variable that's already in scope. Okay. Like we can just detect that and like we can just fix that. And this is the kind of thing that like evaluations over time and as you build these evaluations over time, you really can expand the robustness in which you trust these models. And for a company like Hex, who we need to put this stuff in GA, we can't just sort of like get to demo stage or even like private beta stage. 
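Checks like these are cheap to codify once you start looking for them. A sketch of auxiliary eval metrics over generated SQL, with a hypothetical known-table list and a deliberately crude regex (a real harness would use a proper SQL parser):

```python
import re

# Hypothetical list of tables that actually exist in the warehouse under test.
KNOWN_TABLES = {"dim_customers", "fct_orders"}

def referenced_tables(sql):
    # Crudely pull names after FROM/JOIN; a real eval would use a SQL parser instead.
    return set(re.findall(r"\b(?:from|join)\s+(\w+)", sql, flags=re.IGNORECASE))

def eval_response(sql):
    # Score one response on several auxiliary metrics, not just "did it get the right answer".
    refs = referenced_tables(sql)
    return {
        "hallucinated_table": bool(refs - KNOWN_TABLES),  # references a table that doesn't exist
        "balanced_parens": sql.count("(") == sql.count(")"),
    }

print(eval_response("SELECT count(*) FROM dim_customers"))
# → {'hallucinated_table': False, 'balanced_parens': True}
print(eval_response("SELECT count(* FROM customers"))
# → {'hallucinated_table': True, 'balanced_parens': False}
```

Each metric doubles as a defensive-programming hook at serving time: a response that fails a check can be repaired or regenerated before the user ever sees it.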
We're really hunting GA on all of these capabilities. Did it get the right answer on some cases is not good enough. [00:38:57]Alessio: I think the follow up question to that is in your past roles, you own the model that you're evaluating against. Here you don't actually have control into how the model evolves. How do you think about the model will just need to improve or we'll use another model versus like we can build kind of like engineering post-processing on top of it. How do you make the choice? [00:39:19]Bryan: So I want to say two things here. One, like Jerry Liu talked about in his episode, you don't always want to retrain the weights to serve certain use cases. RAG is another tool that you can use to kind of like soft tune. I think that's right. And I want to go back to my favorite analogy here, which is like recommendation systems. When you build a recommendation system, you build the objective function. You think about like what kind of recs you want to provide, what kind of features you're allowed to use, et cetera, et cetera. But there's always another step. There's this really wonderful collection of blog posts from Eugene Yan, and then ultimately like Even Oldridge kind of like iterated on that for the Merlin project, where there's this multi-stage recommender. And the multi-stage recommender says the first step is to do great retrieval. Once you've done great retrieval, you then need to do great ranking. Once you've done great ranking, you need to then do a good job serving. And so what's the analogy here? RAG is retrieval. You can build different embedding models to encode different features in your latent space to ensure that your ranking model has the best opportunity. Now you might say, oh, well, my ranking model is something that I've got a lot of capability to adjust. I've got full access to my ranking model. I'm going to retrain it. And that's great. And you should. And over time you will.
But there's one more step and that's downstream and that's the serving. Serving often sounds like I just show the s**t to the user, but ultimately serving is things like, did I provide diverse recommendations? Going back to Stitch Fix days, I can't just recommend them five shirts of the same silhouette and cut. I need to serve them a diversity of recommendations. Have I respected their requirements? They clicked on something that got them to this place. Is the recommendations relevant to that query? Are there any hard rules? Do we maybe not have this in stock? These are all things that you put downstream. And so much like the recommendations use case, there's a lot of knobs to pull outside of retraining the model. And even in recommendation systems, when do you retrain your model for ranking? Not nearly as much as you do other s**t. And even this like embedding model, you might fiddle with more often than the true ranking model. And so I think the only piece of the puzzle that you don't have access to in the LLM case is that sort of like middle step. That's okay. We've got plenty of other work to do. So right now I feel pretty enabled. [00:41:56]Alessio: That's great. You obviously wrote a book on RecSys. What are some of the key concepts that maybe people that don't have a data science background, ML background should keep in mind as they work in this area? [00:42:07]Bryan: It's easy to first think these models are stochastic. They're unpredictable. Oh, well, what are we going to do? I think of this almost like gaseous type question of like, if you've got this entropy, where can you put the entropy? Where can you let it be entropic and where can you constrain it? And so what I want to say here is think about the cases where you need it to be really tightly constrained. So why are people so excited about function calling? Because function calling feels like a way to constrict it. Where can you let it be more gaseous? 
Well, maybe in the way that it talks about what it wants to do. Maybe for planning, if you're building agents and you want to do sort of something chain of thoughty. Well, that's a place where the entropy can happily live. When you're building applications of these models, I think it's really important as part of the problem framing to be super clear upfront. These are the things that can be entropic. These are the things that cannot be. These are the things that need to be super rigid and really, really aligned to a particular schema. We've had a lot of success in making specific the parts that need to be precise and tightly schemified, and that has really paid dividends. And so other analogies from data science that I think are very valuable is there's the sort of like human in the loop analogy, which has been around for quite a while. And I have gone on record a couple of times saying that like, I don't really love human in the loop. One of the things that I think we can learn from human in the loop is that the user is the best judge of what is good. And the user is pretty motivated to sort of like interact and give you kind of like additional nudges in the direction that you want. I think what I'd like to flip though, is instead of human in the loop, I'd like it to be AI in the loop. I'd rather center the user. I'd rather keep the user as the like core item at the center of this universe. And the AI is a tool. By switching that analogy a little bit, what it allows you to do is think about where are the places in which the user can reach for this as a tool, execute some task with this tool, and then go back to doing their workflow. It still gets this back and forth between things that computers are good at and things that humans are good at, which has been valuable in the human loop paradigm. But it allows us to be a little bit more, I would say, like the designers talk about like user-centered. And I think that's really powerful for AI applications. 
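The "constrain the entropy" idea can be made concrete by validating model output against a rigid schema while leaving one field free-form. A minimal sketch with hypothetical field names, not Hex's actual schema:

```python
# "Put the entropy where you can afford it": 'reasoning' is free-form, everything else is rigid.
ALLOWED_ACTIONS = {"create_cell", "edit_cell"}
ALLOWED_CELL_TYPES = {"sql", "python"}

def validate_plan(plan):
    # Reject any response whose constrained fields fall outside the schema;
    # the free-form 'reasoning' field only has to exist and be a string.
    if not all(isinstance(plan.get(f), str) for f in ("action", "cell_type", "reasoning")):
        return False
    return plan["action"] in ALLOWED_ACTIONS and plan["cell_type"] in ALLOWED_CELL_TYPES

ok = {"action": "create_cell", "cell_type": "sql", "reasoning": "The user wants a trend over time, so..."}
bad = {"action": "drop_database", "cell_type": "sql", "reasoning": "..."}
print(validate_plan(ok), validate_plan(bad))  # → True False
```

This is the same instinct behind function calling: downstream code depends on the rigid fields, so they are validated hard, while the model's planning prose stays as entropic as it likes.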
And it's one of the things that I've been trying really hard with Magic to make that feel like the workflow as the AI is right there. It's right where you're doing your work. It's ready for you anytime you need it. But ultimately you're in charge at all times and your workflow is what we care the most about. [00:44:56]Alessio: Awesome. Let's jump into lightning round. What's something that is not on your LinkedIn that you're passionate about or, you know, what's something you would give a TED talk on that is not work related? [00:45:05]Bryan: So I walk a lot. [00:45:07]Bryan: I have walked every road in Berkeley. And I mean like every part of every road even, not just like the binary question of, have you been on this road? I have this little app that I use called Wandrer, which just lets me like kind of keep track of everywhere I've been. And so I'm like a little bit obsessed. My wife would say a lot a bit obsessed with like what I call new roads. I'm actually more motivated by trails even than roads, but like I'm a maximalist. So kind of like everything and anything. Yeah. Believe it or not, I was even like in the like local Berkeley paper just talking about walking every road. So yeah, that's something that I'm like surprisingly passionate about. [00:45:45]Alessio: Is there a most underrated road in Berkeley? [00:45:49]Bryan: What I would say is like underrated is Kensington. So Kensington is like a little town just a teeny bit north of Berkeley, but still in the Berkeley hills. And Kensington is so quirky and beautiful. And it's a really like, you know, don't sleep on Kensington. That being said, one of my original motivations for doing all this walking was people always tell me like, Berkeley's so quirky. And I was like, how quirky is Berkeley? Turns out, it's quite, quite quirky. It's also hard to say quirky and Berkeley in the same sentence I've learned as of now. [00:46:20]Alessio: That's a good podcast warmup for our next guests. All right.
The actual lightning round. So we usually have three questions: acceleration, exploration, then a takeaway. Acceleration: what's something that's already here today that you thought would take much longer to arrive in AI and machine learning? [00:46:39]Bryan: So I invited the CEO of Hugging Face to my seminar when I worked at Stitch Fix, and his talk at the time, honestly, like really annoyed me. The talk was titled something to the effect of like LLMs are going to be the like technology advancement of the next decade. It's on YouTube. You can find it. I don't remember exactly the title, but regardless, it was something like LLMs for the next decade. And I was like, okay, they're like one modality of model, like whatever. His talk was fine. Like, I don't think it was like particularly amazing or particularly poor, but what I will say is damn, he was right. Like I don't think I quite was on board during that talk where I was like, ah, maybe, you know, like there's a lot of other modalities that are like moving pretty quick. I thought things like RL were going to be the like real like breakout success. And there's a little pun with Atari and Breakout there, but yeah, like, man, I was sleeping on LLMs and I feel a little embarrassed. [00:47:44]Alessio: Yeah. No, I mean, that's a good point. It's like sometimes, we just had Jeremy Howard on the podcast and he was saying when he was talking about fine tuning, everybody thought it was dumb, you know, and then later people realize. And there's something to be said about messaging, especially like in technical audiences where there's kind of like the metagame, you know, which is like, oh, these are like the cool ideas people are exploring. I don't know where I want to align myself yet, you know, or whatnot. So it's cool. Exploration: so it's kind of like the opposite of that. You mentioned RL, right? That's something that was kind of like up and up and up.
And then now it's people are like, oh, I don't know. Are there any other areas, if you weren't working on Magic, that you want to go work on? [00:48:25]Bryan: Well, I did mention that, like, I think this like Memex product is just like incredibly exciting to me. And I think it's really opportunistic. I think it's very, very feasible, but I would maybe even extend that a little bit, which is I don't see enough people getting really enthusiastic about hardware with advanced AI built in. You're hearing whispering of it here and there, pun on Whisper intended, but like you're starting to see people putting Whisper into pieces of hardware and making that really powerful. I joked with, I can't think of her name. Oh, Sasha, who I know is a friend of the pod. Like I joked with Sasha that I wanted to make the Big Mouth Billy Bass as a Babel fish, because at this point it's pretty easy to connect that up to Whisper and talk to it in one language and have it talk in the other language. And I was like, this is the kind of s**t I want people building is like silly integrations between hardware and these new capabilities. And as much as I'm starting to hear whisperings here and there, it's not enough. I think I want to see more people going down this track because I think ultimately like these things need to be in our like physical space. And even though the margins are good on software, I want to see more like integration into my daily life. Awesome. [00:49:47]Alessio: And then, yeah, a takeaway. What's one message or idea you want everyone to remember and think about? [00:49:54]Bryan: Even though earlier I was talking about sort of like, maybe like not reinventing things and being respectful of the sort of like ML and data science, like ideas. I do want to say that I think everybody should be experimenting with these tools as much as they possibly can. I've heard a lot of professors, frankly, express concern about their students using GPT to do their homework.
And I took a completely opposite approach, which is in the first 15 minutes of the first class of my semester this year, I brought up GPT on screen and we talked about what GPT was good at. And we talked about like how the students can sort of like use it. I showed them an example of it doing data analysis work quite well. And then I showed them an example of it doing quite poorly. I think however much you're integrating with these tools or interacting with these tools, and this audience is probably going to be pretty high on that distribution, I would really encourage you to sort of like push this into the other people in your life. My wife is very technical. She's a product manager and she's using ChatGPT almost every day for communication or for understanding concepts that are like outside of her sphere of expertise. And recently my mom and my sister have been sort of like onboarded onto the ChatGPT train. And so ultimately I just think that like it is our duty to help other people see like how much of a paradigm shift this is. We should really be preparing people for what life is going to be like when these are everywhere. [00:51:25]Alessio: Awesome. Thank you so much for coming on, Bryan. This was fun. [00:51:29]Bryan: Yeah. Thanks for having me. And use Hex Magic. [00:51:31] Get full access to Latent Space at www.latent.space/subscribe

THE ONE'S CHANGING THE WORLD -PODCAST
ARTIFICIAL GENERAL INTELLIGENCE ARRIVAL & NAVIGATING HUMAN CONCERNS - DR. MAYANK KEJRIWAL: USC

THE ONE'S CHANGING THE WORLD -PODCAST

Play Episode Listen Later Oct 23, 2023 70:22


#ai #artificialintelligence #aiforgood Mayank Kejriwal is a Research Assistant Professor in the Department of Industrial and Systems Engineering, and a Research Lead at the USC Information Sciences Institute. His research has been funded by programs such as DARPA LORELEI, CauseEx, MEMEX (covered in the press by 60 minutes, Forbes, Scientific American, WSJ, BBC, Wired and several others for its success in spawning real-world systems for tackling human trafficking), AIDA and D3M projects. Prior to joining ISI in 2016, he obtained his Ph.D. from the University of Texas at Austin. His dissertation, titled "Populating a Linked Data Entity Name System", was awarded the Best Dissertation Award by the Semantic Web Science Association in 2017. He is also the author of "Domain-specific Knowledge Graph Construction" (Springer), which has been downloaded thousands of times in the last year and is available internationally. Dr. Kejriwal is a passionate advocate of using Artificial Intelligence technology for social good, and regularly collaborates with domain-experts to build such systems. He has given talks and tutorials in international academic and industrial venues, most recently serving as a roundtable speaker and participant (on using AI for fighting child trafficking) at the Concordia Summit that was co-held with the UN General Assembly in New York City in September, 2019. Therein, he was co-author of a multi-organization whitepaper on using AI to fight child trafficking. The myDIG system, which he co-built and co-authored and that was a product of the MEMEX program, was nominated for a Best Demonstration award at the prestigious AAAI conference in 2018. 
https://www.linkedin.com/in/mayankkejriwal
https://twitter.com/kejriwal_mayank

Time Stamps:
0:00 to 01:57 - Intro & Background
01:57 to 03:59 - What is Knowledge Graphs & its Role in AI
03:59 to 08:55 - How to Build Knowledge Graphs & its Applications
08:55 to 14:34 - Knowledge graphs & understanding the world
14:34 to 18:11 - Can Knowledge Graphs help reduce machine hallucination
18:11 to 21:02 - Knowledge Graphs & its Applications in E-com
21:02 to 28:40 - Knowledge graphs for Human Trafficking
28:40 to 32:28 - LLM's gaining general-purpose knowledge
32:28 to 37:18 - Artificial General Intelligence by 2030
37:18 to 38:31 - Will ChatGPT replace Coders
38:31 to 42:06 - Artificial Super Intelligence & Sentient AI
42:06 to 44:01 - Over Hyping AI
44:01 to 51:55 - Democratization of AI, Public perception of AI
51:55 to 54:48 - AI Regulation & the Danger of underestimating AI
54:48 to 01:03:12 - AI Existential Threat & building successful companies with ChatGPT
01:03:12 to 01:10:22 - Approach to building AGI & peers doing it right

Connect & Follow us at:
https://in.linkedin.com/in/eddieavil
https://in.linkedin.com/company/change-transform-india
https://www.facebook.com/changetransformindia/
https://twitter.com/intothechange
https://www.instagram.com/changetransformindia/

Listen to the Audio Podcast at:
https://anchor.fm/transform-impossible
https://podcasts.apple.com/us/podcast/change-i-m-possibleid1497201007?uo=4
https://open.spotify.com/show/56IZXdzH7M0OZUIZDb5mUZ
https://www.breaker.audio/change-i-m-possible
https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy8xMjg4YzRmMC9wb2RjYXN0L3Jzcw

Don't Forget to Subscribe: www.youtube.com/@toctwpodcast

Podcast di Palazzo Ducale di Genova
The Life to Come. Visions of the Future of Humanity - "An Apple Won't Be Enough. What Will Change and What Must Change in Medicine"

Podcast di Palazzo Ducale di Genova

Play Episode Listen Later Feb 23, 2022 56:30


February 22, 2022 - Silvia Bencivelli - Science journalist, writer, and radio and TV host. She is among the hosts of Pagina3, the cultural press review on Radio3 Rai, and was on the editorial staff of, and hosted, Radio3 Scienza, the same network's daily science program. She has worked in TV with Rai3, for Tutta Salute, and as a correspondent for Presa Diretta and Cosmo, and has collaborated with Rai Scuola (for example on the science programs for young people Nautilus and Memex). She writes for «la Repubblica» and its supplements, for «Le Scienze», «Focus», and other outlets. She teaches science journalism in a master's program at La Sapienza - University of Rome, in the journalism master's program at Lumsa in Rome, and elsewhere. She has published several books: the most recent is the essay Sospettosi (Einaudi 2019). Earlier works include the novel Le mie amiche streghe (Einaudi 2017) and the essays Perché ci piace la musica (Sironi 2007 and 2012, translated into three languages), È la medicina, bellezza! – Perché è difficile parlare di salute (with Daniela Ovadia, Carocci 2016, Galileo Prize 2017), and Comunicare la scienza (with Francesco Paolo De Ceglia, Carocci 2013). She has received numerous awards for her work.

#ENTREPRENEUR
Sunday September 12, 2021 Evening weekly #NFT update. Fully revamped MEMEX platform release 104/4/21.

#ENTREPRENEUR

Play Episode Listen Later Sep 13, 2021 4:49


Fabulous Sunday evening here in Santa Clara California. Here is a weekly roundup on all the NFT happenings this week. Hoping everyone has a great week as Monday approaches. --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/gary-kolegraff5/support

Daily Meeting
The Memex and the Blue Hyperlinks

Daily Meeting

Play Episode Listen Later Aug 31, 2021 7:06


A hyperlink (or simply a link) is an element of an electronic document that references another resource - and they are blue. https://melvinsalas.com/fbd976f0789c468d9e10a90adfb87a6a

The History of Computing
Do You Yahoo!?

The History of Computing

Play Episode Listen Later Aug 20, 2021 28:15


The simple story of Yahoo! is that they were an Internet search company that came out of Stanford during the early days of the web. They weren't the first nor the last. But they represent a defining moment in the rise of the web as we know it today, when there was enough content out there that there needed to be an easily searchable catalog of content. And that's what Stanford PhD students David Filo and Jerry Yang built. As with many of those early companies it began as a side project called “Jerry and David's Guide to the World Wide Web.” And grew into a company that at one time rivaled any in the world. At the time there were other search engines and they all started adding portal aspects to the site growing fast until the dot-com bubble burst. They slowly faded until being merged with another 90s giant, AOL, in 2017 to form Oath, which got renamed to Verizon Media in 2019 and then effectively sold to investment management firm Apollo Global Management in 2021. Those early years were wild. Yang moved to San Jose in the 70s from Taiwan, and earned a bachelors then a masters at Stanford - where he met David Filo in 1989. Filo is a Wisconsin kid who moved to Stanford and got his masters in 1990. The two went to Japan in 1992 on an exchange program and came home to work on their PhDs. That's when they started surfing the web. Within two years they started their Internet directory in 1994. As it grew they hosted the database on Yang's student computer called akebono and the search engine on konishiki, which was Filo's. They renamed it to Yahoo, short for Yet Another Hierarchical Officious Oracle - after all they maybe considered themselves Yahoos at the time. And so Yahoo began life as akebono.stanford.edu/~yahoo. Word spread fast and they'd already had a million hits by the end of 1994. It was time to move out of Stanford. Marc Andreessen offered to let them move into Netscape. 
They bought a domain in 1995 and incorporated the company, getting funding from Sequoia Capital raising $3,000,000. They tinkered with selling ads on the site to fund buying more servers but there was a lot of businessing. They decided that they would bring in Tim Koogle (which ironically rhymes with Google) to be CEO who brought in Jeff Mallett from Novell's consumer division to be the COO. They were the suits and got revenues up to a million dollars. The idea of the college kids striking gold fueled the rise of other companies and Yang and Filo became poster children. Applications from all over the world for others looking to make their mark started streaming in to Stanford - a trend that continues today. Yet another generation was about to flow into Silicon Valley. First the chip makers, then the PC hobbyists turned businesses, and now the web revolution. But at the core of the business were Koogle and Mallett, bringing in advertisers and investors. And the next year needing more and more servers and employees to fuel further expansion, they went public, selling over two and a half million shares at $13 to raise nearly $34 million. That's just one year after a gangbuster IPO from Netscape. The Internet was here. Revenues shot up to $20 million. A concept we repeatedly look at is the technological determinism that industries go through. At this point it's easy to look in the rear view mirror and see change coming at us. First we document information - like Jerry and David building a directory. Then we move it to a database so we can connect that data. Thus a search engine. Given that Yahoo! was a search engine they were already on the Internet. But the next step in the deterministic application of modern technology is to replace human effort with increasingly sophisticated automation. You know, like applying basic natural language processing, classification, and polarity scoring algorithms to enrich the human experience. Yahoo! hired “surfers” to do these tasks. 
They curated the web. Yes, they added feeds for news, sports, finance, and created content. Their primary business model was to sell banner ads. And they pioneered the field. Banner ads mean people need to be on the site to see them. So adding weather, maps, shopping, classifieds, personal ads, and even celebrity chats were natural adjacencies given that mental model. Search itself was almost a competitor, sending people to other parts of the web where they weren't making money off eyeballs. And they were pushing traffic to over 65 million pages worth of data a day. They weren't the only ones. This was the portal era of search and companies like Lycos, Excite, and InfoSeek were following the same model. They created local directories and people and companies could customize the look and feel. Their first designer, David Shen, takes us through the user experience journey in his book Takeover! The Inside Story of the Yahoo Ad Revolution. They didn't invent pay-per-click advertising but did help to make it common practice and proved that money could be made on this whole new weird Internet thing everyone was talking about. The first ad they sold was for MCI and from there they were practically printing money. Every company wanted in on the action - and sales just kept going up. Bill Clinton gave them a spot in the Internet Village during his 1997 inauguration and they were for a time seemingly synonymous with the Internet. The Internet was growing fast. Cataloging the Internet and creating content for the Internet became a larger and larger manual task. As did selling ads, which was a manual transaction requiring a larger and larger sales force. As with other rising internet properties, people dressed how they wanted, they'd stay up late building code or content and crash at the desk. They ran funny cheeky ads with that yodel - becoming a brand that people knew and many equated to the Internet. We can thank San Francisco's Black Rocket ad agency for that. They grew fast. 
The founders made several strategic acquisitions and gobbled up nearly every category of the Internet that has each grown to billions of dollars. They bought Four11 for $95 million in their first, and probably best, acquisition, and used them to create Yahoo! Mail in 1997 and a calendar in 1998. They had over 12 million Yahoo! Email users by the end of the year, inching their way to the same number of AOL users out there. There were other tools like Yahoo Briefcase, to upload files to the web. Now common with cloud storage providers like Dropbox, Box, Google Drive, and even Office 365. And contacts and Messenger - a service that would run until 2018. Think of all the messaging apps that have come with their own spin on the service since. 1998 also saw the acquisition of Viaweb, founded by the team that would later create Y Combinator. It was just shy of a $50M acquisition that brought the Yahoo! Store - which was similar to the Shopify of today. They got a $250 million investment from Softbank, bought Yoyodyne, and launched AT&T's WorldNet service to move towards AOL's dialup services. By the end of the year they were closing in on 100 million page views a day. That's a lot of banners shown to visitors. But Microsoft was out there, with their MSN portal at the height of the browser wars. Yahoo! bought Broadcast.com in 1999 saddling the world with Mark Cuban. They dropped $5.7 billion for 300 employees and little more than an ISDN line. Here, they paid over a 100x multiple of annual revenues and failed to transition sellers into their culture. Sales cures all. In his book We Were Yahoo! Jeremy Ring lays much of the blame for the failure to capitalize on the acquisition on not understanding the different selling motion. I don't remember him outright saying it was hubris, but he certainly indicates that it should have worked out and that broadcast.com could have been what YouTube would become. Another market lost in a failed attempt at Yahoo TV. 
And yet many of these were trends started by AOL. They also bought GeoCities in 99 for $3.7 billion. Others have tried to allow for fast and easy site development - the no code wysiwyg web. GeoCities lasted until 2009 - a year after Google launched Google Sites. And we have Wix, Squarespace, WordPress, and so many others offering similar services today. As they grew some of the other 130+ search engines at the time folded. The new products continued. The Yahoo Notebook came before Evernote. Imagine your notes accessible to any device you could log into. The more banners shown, the more clicks. Advertisers could experiment in ways they'd never been able to before. They also inked distribution deals, pushing traffic to other sites that did things they didn't. The growth of the Internet had been fast, with nearly 100 million people armed with Internet access - and yet it was thought to triple in just the next three years. And even still many felt a bubble was forming. Some, like Google, had conserved cash - others like Yahoo! had spent big on acquisitions they couldn't monetize into truly adjacent cash flow generating opportunities. And meanwhile they were alienating web properties by leaning into every space that kept eyeballs on the site. By 2000 their stock traded at $118.75 and they were the most valuable internet company at $125 billion. Then as customers folded when the dot-com bubble burst, the stock fell to $8.11 the next year. One concept we talk about in this podcast is a lost decade. Arguably they'd entered into theirs around the time the dot-com bubble burst. They decided to lean into being a media company even further. Again, showing banners to eyeballs was the central product they sold. They brought in Terry Semel in 2001 using over $100 million in stock options to entice him. And the culture problems came fast. Semel flew in a fancy jet, launched television shows on Yahoo! 
and alienated programmers, effectively creating an us vs them and de-valuing the work done on the portal and search. Work that could have made them competitive with Google AdWords, which while only a year old was already starting to eat away at profits. But media. They bought a company called LaunchCast in 2001, charging a monthly fee to listen to music. Yahoo Music came before Spotify, Pandora, Apple Music, and even though it was the same year the iPod was released, they let us listen to up to 1,000 songs for free or pony up a few bucks a month to get rid of ads and allow for skips. A model that has been copied by many over the years. By then they knew that paid search was becoming a money-maker over at Google. Overture had actually been first to that market and so Yahoo! bought them for $1.6 billion in 2003. But again, they didn't integrate the team and in a classic “not built here” moment started Project Panama where they'd spend three years building their own search advertising platform. By the time that shipped the search war was over and executives and great programmers were flowing into other companies all over the world. And by then they were all over the world. 2005 saw them invest $1 billion in a little company called Alibaba. An investment that would accelerate Alibaba to become the crown jewel in Yahoo's empire and as they dwindled away, a key aspect of what led to their final demise. They bought Flickr in 2005 for $25M. User generated content was a thing. And Flickr was almost what Instagram is today. Instead we'd have to wait until 2010 for Instagram because Flickr ended up yet another of the failed acquisitions. And here's something wild to think about - Stewart Butterfield and Cal Henderson started another company after they sold Flickr. Slack sold to Salesforce for over $27 billion. Not only is that a great team who could have turned Flickr into something truly special, but if they'd been retained and allowed to flourish at Yahoo! 
they could have continued building cooler stuff. Yikes. Additionally, Flickr was planning a pivot into social networking, right before a time when Facebook would take over that market. In fact, they tried to buy Facebook for just over a billion dollars in 2006. But Zuckerberg walked away when the price went down after the stock fell. They almost bought YouTube and considered buying Apple, which is wild to think about today. Missed opportunities. And Semel was the first of many CEOs who lacked vision and the capacity to listen to the technologists - in a technology company. These years saw Comcast bring us weather.com, the rise of ESPN online taking eyeballs away from Yahoo! Sports, Gmail and other mail services reducing reliance on Yahoo! Mail. Facebook, LinkedIn, and other web properties rose to take ad placements away. Even though Yahoo Finance is still a great portal even sites like Bloomberg took eyeballs away from them. And then there was the rise of user generated content - a blog for pretty much everything. Jerry Yang came back to run the show in 2007 then Carol Bartz from 2009 to 2011 then Scott Thompson in 2012. None managed to turn things around after so much lost inertia - and make no mistake, inertia is the one thing that can't be bought in this world. Wisconsin's Marissa Mayer joined Yahoo! in 2012. She was Google's 20th employee who'd risen through the ranks from writing code to leading teams to product manager to running web products and managing not only the layout of that famous homepage but also helped deliver Google AdWords and then maps. She had the pedigree and managerial experience - and had been involved in M&A. There was an immediate buzz that Yahoo! was back after years of steady decline due to incoherent strategies and mismanaged acquisitions. She pivoted the business more into mobile technology. She brought remote employees back into the office. 
She implemented a bell curve employee ranking system like Microsoft did during their lost decade. They bought Tumblr in 2013 for $1.1 billion. But key executives continued to leave - Tumblr's value dropped, and the stock continued to drop. Profits were up, revenues were down. Investing in the rapidly growing China market became all the rage. The Alibaba investment was now worth more than Yahoo! itself. Half the shares had been sold back to Alibaba in 2012 to fund Yahoo! pursuing the Mayer initiatives. And then there was Yahoo Japan, which continued to do well. After years of attempts, activist investors finally got Yahoo! to spin off their holdings. They moved most of the shares to a holding company which would end up getting sold back to Alibaba for tens of billions of dollars. More missed opportunities for Yahoo! And so in the end, they would get merged with AOL - the two combined companies worth nearly half a trillion dollars at one point to become Oath in 2017. Mayer stepped down and the two sold for less than $5 billion. A roller coaster that went up really fast and down really slow. An empire that crumbled and fragmented. Arguably, the end began in 1998 when another couple of grad students at Stanford approached Yahoo to buy Google for $1M. Not only did Filo tell them to try it alone but he also introduced them to Michael Moritz of Sequoia - the same guy who'd initially funded Yahoo!. That wasn't where things really got screwed up though. It was early in a big change in how search would be monetized. But they got a second chance to buy Google in 2002. By then I'd switched to using Google and never looked back. But the CEO at the time, Terry Semel, was willing to put in $3B to buy Google - who decided to hold out for $5B. They are around a $1.8T company today. Again, the core product was selling advertising. And Microsoft tried to buy Yahoo! in 2008 for over 44 billion dollars to become Bing. 
Down from the $125 billion height of the market cap during the dot com bubble. And yet they eventually sold for less than four and a half billion in 2016 and went down in value from there. Growth stocks trade at high multiples but when revenues go down the crash is hard and fast. Yahoo! lost track of the core business - just as the model was changing. And yet never iterated it because it just made too much money. They were too big to pivot from banners when Google showed up with a smaller, more bite-sized advertising model that companies could grow into. Along the way, they tried to do too much. They invested over and over in acquisitions that didn't work because they ran off the innovative founders in an increasingly corporate company that was actually trying to pretend not to be. We have to own who we are and become. And we have to understand that we don't know anything about the customers of acquired companies and actually listen - and I mean really listen - when we're being told what those customers want. After all, that's why we paid for the company in the first place. We also have to avoid allowing the market to dictate a perceived growth mentality. Sure a growth stock needs to hit a certain number of revenue increase to stay considered a growth stock and thus enjoy the kind of multiples for market capitalization. But that can drive short term decisions that don't see us investing in areas that don't effectively manipulate stocks. Decisions like trying to keep eyeballs on pages with our own content rather than investing in the user generated content that drove the Web 2.0 revolution. The Internet can be a powerful medium to find information, allow humans to do more with less, and have more meaningful experiences in this life. But just as Yahoo! was engineering ways to keep eyeballs on their pages, the modern Web 2.0 era has engineered ways to keep eyeballs on our devices. 
And yet what people really want is those meaningful experiences, which happen more when we aren't staring at our screens than when we are. As I look around at all the alerts on my phone and watch, I can't help but wonder if another wave of technology is coming that disrupts that model. Some apps are engineered to help us lead healthier lifestyles and take a short digital detoxification break. Bush's Memex in “As We May Think” was arguably an apple taken from the tree of knowledge. If we aren't careful, rather than the dream of computers helping humanity do more and free our minds to think more deeply, we are simply left with less and less capacity to think and less and less meaning. The Memex came and Yahoo! helped connect us to any content we might want in the world. And yet, like so many others, they stalled in the phase they were at in that deterministic structure that technologies follow. Too slow to augment human labor with machine learning like Google did - but instead too quick to try and do everything for everyone with no real vision other than be everything to everyone. And so the cuts went on slowly for a long time, leaving employees constantly in fear of losing their jobs. 
And if there's someone with 1,000 developers in a space, Nicholas Carlson in his book “Marissa Mayer and the Fight To Save Yahoo!” points out that one great developer is worth a thousand average ones. And even the best organizations can easily turn great developers into average ones for a variety of reasons. Again, we can call these opportunities. Yahoo! helped legitimize the Internet. For that we owe them a huge thanks. And we can fast follow their adjacent expansions to find a slew of great and innovative ideas that increased the productivity of humankind. We owe them a huge thanks for that as well. Now what opportunities do we see out there to propel us further yet again?

The History of Computing
Babbage to Bush: An Unbroken Line Of Computing

The History of Computing

Play Episode Listen Later Jul 29, 2021 14:28


The amount published in scientific journals has exploded over the past few hundred years. This helps in putting together a history of how various sciences evolved. And sometimes helps us revisit areas for improvement - or predict what's on the horizon. The rise of computers often begins with stories of Babbage. As we've covered, a lot came before him and those of the era were often looking to automate calculating increasingly complex mathematical tables. Charles Babbage was a true Victorian era polymath. A lot was happening as the world awoke to a more scientific era and scientific publications grew in number and size. Born in London, Babbage loved math from an early age and went away to Trinity College in Cambridge in 1810. There he helped form the Analytical Society with John Herschel - a pioneer of early photography and a chemist and inventor of the blueprint. And George Peacock, who established the British arm of algebraic logic, which when picked up by George Boole would go on to form part of Boolean algebra, ushering in the idea that everything can be reduced to a zero or a one. Babbage graduated from Cambridge and went on to become a Fellow of the Royal Society and helped found the Royal Astronomical Society. He published works with Herschel on electrodynamics that went on to be used by Michael Faraday later and even dabbled in actuarial tables - possibly to create a data driven insurance company. His father passed away in 1827, leaving him a sizable estate. And after applying multiple times he finally became a professor at Cambridge in 1828. He and the others from the Analytical Society were tinkering with things like generalized polynomials and what we think of today as a formal power series, all of which can be incredibly tedious and time consuming. Because it's iterative. Pascal and Leibniz had pushed math forward and had worked on the engineering to automate various tasks, applying some of their science. 
This gave us Pascal's calculator and Leibniz's work on information theory, and his calculus ratiocinator added a stepped reckoner, built around what is now called the Leibniz wheel, with which he was able to perform all four basic arithmetic operations. Meanwhile, Babbage continued to bounce around between society, politics, science, mathematics, and even penning a book on manufacturing where he looked at rational design and profit sharing. He also looked at how tasks were handled and made observations about the skill level of each task and the human capital involved in carrying them out. Marx even picked up where Babbage left off and looked further into profitability as a motivator. He also invented the pilot for trains and was involved with lots of learned people of the day. Yet Babbage is best known for being the old, crusty gramps of the computer. Or more specifically the difference engine, which is different from a differential analyzer. A difference engine was a mechanical calculator that could tabulate polynomial functions. A differential analyzer on the other hand solves differential equations using wheels and disks. Babbage expanded on the ideas of Pascal and Leibniz and added to mechanical computing, making the difference engine the inspiration of many a steampunk work of fiction. Babbage started work on the difference engine in 1819. Multiple engineers built different components for the engine and it was powered by a crank that spun a series of wheels, not unlike various clockworks available at the time. The project was paid for by the British Government, who hoped it could save time calculating complex tables. Imagine doing all the work in spreadsheets manually. Each cell could take a fair amount of time and any mistake could be disastrous. But it was just a little before its time. The plans have since been built and they work, but while he did produce a prototype capable of raising numbers to the third power and performing some quadratic equations, the project was abandoned in 1833. 
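The principle behind the difference engine, the method of finite differences, is worth making concrete: a degree-n polynomial has constant n-th differences, so once an initial column of differences is loaded, every later table value falls out of repeated addition alone - exactly what a stack of geared wheels can do. A minimal sketch in Python (the function name and example polynomial are illustrative, not Babbage's):

```python
# Method of finite differences, the principle behind the difference engine:
# a degree-n polynomial has constant n-th differences, so successive values
# can be produced with additions alone - no multiplication needed.

def tabulate(initial_diffs, steps):
    """initial_diffs = [p(0), first difference, second difference, ...];
    returns the table p(0), p(1), ..., p(steps)."""
    diffs = list(initial_diffs)
    values = []
    for _ in range(steps + 1):
        values.append(diffs[0])
        # each "wheel" adds the wheel below it, like one turn of the crank
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values

# p(x) = 2x^2 + 3x + 1: p(0) = 1, p(1) - p(0) = 5, constant 2nd difference = 4
print(tabulate([1, 5, 4], 5))  # [1, 6, 15, 28, 45, 66]
```

Each pass of the loop corresponds to one turn of the crank: the top wheel reads out the next table entry, and every difference wheel updates from the one below.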
We'll talk about precision in a future episode. Again, the math involved in solving differential equations at the time was considerable and the time-intensive nature was holding back progress. So Babbage wasn't the only one working on such ideas. Gaspard-Gustave de Coriolis, known for the Coriolis effect, was studying the collisions of spheres and became a professor of mechanics in Paris. To aid in his works, he designed the first mechanical device to integrate differential equations in 1836. After Babbage scrapped his first, he moved on to the analytical engine, adding conditional branching, loops, and memory - and further complicating the machine. The engine borrowed the punchcard tech from the Jacquard loom and applied that same logic, along with the work of Leibniz, to math. The inputs would be formulas, much as Turing later described when concocting some of what we now call Artificial Intelligence. Essentially all problems could be solved given a formula, and the output would go to a printer. The analytical machine had 1,000 numbers' worth of memory and a logic processor or arithmetic unit that he called a mill, which we'd call a CPU today. He even planned on a programming language, which we might think of as assembly today. All of this brings us to the fact that while never built, it would have been Turing-complete in that the simulation of those formulas was a Turing machine. Ada Lovelace contributed an algorithm for computing Bernoulli numbers, giving us a glimpse into what an open source collaboration might some day look like. And she was in many ways the first programmer - and daughter of Lord Byron and Anne Milbanke, a math whiz. She became fascinated with the engine and ended up becoming an expert at creating a set of instructions to punch on cards, thus the first programmer of the analytical engine and far before her time. In fact, there would be no programmer for 100 years with her depth of understanding. 
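Lovelace's famous Note G described how the engine could compute Bernoulli numbers. Her actual table of operations is far more elaborate, but the numbers themselves obey a simple recurrence; here is a hedged sketch in Python using the standard recurrence (an illustration of what she computed, not a transcription of her program):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """First n+1 Bernoulli numbers B_0..B_n via the standard recurrence:
    B_0 = 1, and for m >= 1, sum over j < m of C(m+1, j) * B_j = -(m+1) * B_m."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))  # solve the recurrence for B_m
    return B

print(bernoulli(4))
# [Fraction(1, 1), Fraction(-1, 2), Fraction(1, 6), Fraction(0, 1), Fraction(-1, 30)]
```

Exact rational arithmetic matters here: the engine worked with fixed-point decimal columns, while `Fraction` sidesteps rounding entirely.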
Not to make you feel inadequate, but she was 27 in 1843. Luigi Menabrea took the idea to France. And yet Babbage died in 1871 without a working model.  During those years, Per Georg Scheutz built a number of difference engines based on Babbage's published works - also funded by the government - which would evolve to become the first calculator that could print. Martin Wiberg picked up from there and was able to move to 20-digit processing. George Grant at Harvard developed calculating machines and published his designs by 1876, starting a number of companies to fabricate gears along the way.  James Thomson built a differential analyzer in 1876 to predict tides. And that's when his work on fluid dynamics and other technology seemed to be the connection between these machines and the military. Thomson's work would later be added to work done by Arthur Pollen, and we got our first automated fire-control systems.  Percy Ludgate and Leonardo Torres wrote about Babbage's work in the early years of the 1900s, and other branches of math needed other types of mechanical computing. Burroughs built a difference engine in 1912 and another in 1929. The differential analyzer was picked up by a number of scientists in those early years. But Vannevar Bush was perhaps one of the most important. He, with Harold Locke Hazen, built one at MIT and published an article on it in 1931. Here's where everything changes. The information was out there in academic journals. Bush published another in 1936 connecting his work to Babbage's. Bush's designs got used by a number of universities and picked up by the Ballistic Research Lab in the US. One of those installations was in the same basement ENIAC would be built in. Bush did more than inspire other mathematicians. Sometimes he paid them. His research assistant was Claude Shannon, who built the General Purpose Analog Computer in 1941 and went on to found the whole field of information theory, down to the bits and bytes. 
Shannon's computer was important as it came shortly after Alan Turing's work on Turing machines, and so has been seen as a means to get to this concept of general, programmable computing - basically revisiting the Babbage concept of a thinking, or analytical, machine. And Howard Aiken went a step further than mechanical computing and into electromechanical computing with the Mark I, where he referenced Babbage's work as well. Then we got the Atanasoff-Berry Computer in 1942. By then, our friend Bush had gone on to chair the National Defense Research Committee, where he would serve under Roosevelt and Truman and help develop radar and the Manhattan Project as an administrator, coordinating over 5,000 research scientists. Some helped with ENIAC, which was completed in 1945, thus beginning the era of programmable, digital, general purpose computers. Seeing how computers helped break Enigma machine encryption, solve equations, blow up targets better, and solve problems that held science back was one thing - but unleashing such massive and instantaneous violence as the nuclear bomb caused Bush to write an article for The Atlantic called As We May Think, which inspired generations of computer scientists. Here he laid out the concept of a Memex, or a general purpose computer that every knowledge worker could have. And thus began the era of computing.  What we wanted to look at in this episode is how Babbage wasn't an anomaly. Just as Konrad Zuse wasn't. People published works, added to the works they read about, cited works, pulled in concepts from other fields, and we have unbroken chains in our understanding of how science evolves. Some, like Konrad Zuse, might have been operating outside of this peer-review process - but he eventually got around to publishing as well.  

Podcast – Cory Doctorow's craphound.com

This week on my podcast, my inaugural column for Medium, The Memex Method, a reflection on 20 years of blogging, and how it has affected my writing. MP3

The History of Computing
Project Xanadu

The History of Computing

Play Episode Listen Later May 13, 2021 19:00


Java, Ruby, PHP, Go. These are languages behind web applications that dynamically generate content, which is then interpreted as a file by a web browser. That file is rarely static these days, and the power of the web is that an app or browser can reach out, obtain some data, get back some XML or JSON or YAML, and provide an experience to a computer, mobile device, or even embedded system. The web is arguably the most powerful, transformational technology in the history of technology. But the story of the web begins in philosophies that far predate its inception. It goes back to a file, which we can think of as a document, on a computer that another computer reaches out to and interprets. A file composed of hypertext. Ted Nelson coined the term hypertext. Plenty of others put the concepts of linking objects into the mainstream of computing. But he coined the term that he's barely connected to in the minds of many.  Why is that? Tim Berners-Lee invented the World Wide Web in 1989. Elizabeth Feinler developed a registry of names that would evolve into DNS so we could find computers online and access those web sites without typing in impossible-to-remember numbers. Bob Kahn and Leonard Kleinrock were instrumental in the Internet Protocol, which allowed all those computers to be connected together, providing the schemes for those numbers. Some will know these names; most will not.  But a name that probably doesn't come up enough is Ted Nelson. His tale is one of brilliance and the early days of computing and the spread of BASIC and an urge to do more. It's a tale of the hacker ethic. And yet, it's also a tale of irreverence - to be used as a warning for those with aspirations to be remembered for something great. Or is it? Steve Jobs famously said "real artists ship." Ted Nelson did ship. Until he didn't. Let's go all the way back to 1960, when he started Project Xanadu. Actually, let's go a little further back first.  
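The dynamic model described above, where a server generates a payload on the fly and a client interprets it, can be sketched minimally in Python. The endpoint path and payload here are made up for illustration, not any real API:

```python
import json

# A hypothetical request handler: the response is generated per request,
# not read from a static file on disk.
def handle_request(path):
    if path == "/api/episode":
        return json.dumps({"title": "Project Xanadu", "minutes": 19})
    return json.dumps({"error": "not found"})

# The client - a browser, mobile app, or embedded system - parses the
# payload and builds an experience from it.
payload = json.loads(handle_request("/api/episode"))
print(payload["title"])  # Project Xanadu
```

The same payload can drive very different front ends, which is the portability the episode is pointing at.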
Nelson was born to TV director Ralph Nelson and Celeste Holm, who won an Academy Award for her role in Gentleman's Agreement in 1947, took home another pair of nominations over her career, and was the original Ado Annie in Oklahoma!. His dad worked on The Twilight Zone - so of course he majored in philosophy at Swarthmore College and then went off to the University of Chicago and then Harvard for graduate school, taking a stab at film after he graduated. But he was meant for an industry that didn't exist yet but would some day eclipse the film industry: software.  While in school he got exposed to computers and started to think about this idea of a repository of all the world's knowledge. And it's easy to imagine a group of computing aficionados sitting in a drum circle, smoking whatever they were smoking, and having their minds blown by that very concept. And yet, it's hard to imagine anyone in that context doing much more. And yet he did. Nelson created Project Xanadu in 1960. As we'll cover, he did a lot of projects during the remainder of his career. The journey is what is so important, even if we never get to the destination. Because sometimes we influence the people who get there. And the history of technology is as much about failed or incomplete evolutions as it is about those that become ubiquitous.  It began with a project while he was enrolled in Harvard grad school. Other word processors were at the dawn of their existence. But he began thinking through and influencing how they would handle information storage and retrieval.  Xanadu was supposed to be a computer network that connected humans to one another. It was supposed to be simple and a scheme for world-wide electronic publishing. Unlike the web, which would come nearly three decades later, it was supposed to be bilateral, with broken links self-repairing, much as nodes on the ARPAnet did. His initial proposal was a program in machine language that could store and display documents. 
Coming before the advent of Markdown, ePub, XML, PDF, RTF, or any of the other common open formats we use today, it was rudimentary and would evolve over time. Keep in mind, it was for documents, and as Nelson would say later, the web - which began as a document tool - was a fork of the project.  The term Xanadu was borrowed from Samuel Taylor Coleridge's Kubla Khan, itself written after some opium-fueled dreams about a garden in Kublai Khan's Shangdu, or Xanadu. In his biography, Coleridge explained that the rivers in the poem supply "a natural connection to the parts and unity to the whole" and he described a "stream, traced from its source in the hills among the yellow-red moss and conical glass-shaped tufts of bent, to the first break or fall, where its drops become audible, and it begins to form a channel."  Connecting all the things was the goal, and so Xanadu was the name. He gave a talk and presented a paper called "A File Structure for the Complex, the Changing and the Indeterminate" at the Association for Computing Machinery in 1965 that laid out his vision. This was the dawn of interactivity in computing. Digital Equipment had launched just a few years earlier and brought the PDP-8 to market that same year. The smell of change was in the air and Nelson was right there.  After that, he started to see all these developments around the world. He worked on a project at Brown University to develop a word processor with many of his ideas in it. But the output of that project, as with most word processors since, was to get things printed. He believed content was meant to be created and live its entire lifecycle in digital form. This would provide perfect forward and reverse citations, text enrichment, and change management. And maybe, if we all stand on the shoulders of giants, it would allow us to avoid rewriting or paraphrasing the works of others to include them in our own writings. We could do more without that tedious regurgitation.  
He furthered his counter-culture credentials by going to Woodstock in 1969. Probably not for that reason, but it happened nonetheless. And he traveled and worked with more and more people and companies, learning and engaging and enriching his ideas. And then he shared them.  Computer Lib/Dream Machines was a paperback book. Or two. It had a cover on each side. Originally published in 1974, it was one of the most important texts of the computer revolution. Steven Levy called it an epic. It's rare to find it for less than a hundred bucks on eBay at this point because of how influential it was and what an amazing snapshot in time it represents.  Xanadu was to be a hypertext publishing system in the form of Xanadocs, or files that could be linked to from other files. A Xanadoc used Xanalinks to embed content from other documents into a given document. These spans of text would become transclusions, and they would change in the document that included them whenever they changed in the live source document. The iterations towards working code were slow and the years ticked by. That talk in 1965 gave way to the 1970s, then the 80s. Some thought him brilliant. Others didn't know what to make of it all. But many knew of his ideas for hypertext, and once known, their adoption seemed inevitable. Byte Magazine published many of his thoughts in a 1988 piece called "Managing Immense Storage," and by then the personal computer revolution had come in full force. Tim Berners-Lee put the first node of the World Wide Web online the next year, using a protocol they called Hypertext Transfer Protocol, or http. Yes, the hypertext philosophy was almost a means of paying homage to the hard work and deep thinking Nelson had put in over the decades. But not everyone saw it as though Nelson had made great contributions to computing.  "The Curse of Xanadu" was an article published in Wired Magazine in 1995. 
In the article, the author points out that the web had come along using many of the ideas Nelson and his teams had worked on over the years, but actually shipped - whereas Nelson hadn't. Once shipped, the web rose in popularity, becoming the ubiquitous technology it is today. The article looked at Xanadu as vaporware. But there is a deeper, much more important meaning to Xanadu in the history of computing.  Perhaps inspired by the Wired article, the group released an incomplete version of Xanadu in 1998. But by then, other formats - including PDF, which was invented in 1993, and .doc for Microsoft Word - were the primary mechanisms by which we stored documents, and first gopher and then the web were spreading to interconnect humans with content. https://www.youtube.com/watch?v=72M5kcnAL-4 The Xanadu story isn't a tragedy. Would we have had hypertext as a part of Douglas Engelbart's oNLine System without it? Would we have object-oriented programming or later the World Wide Web without it? The very word hypertext is almost an homage, even if they don't know it, to Nelson's work. And the look and feel of his work lives on in places like GitHub, whether directly influenced or not, where we can see changes in code side-by-side with actual production code, changes that are stored and perhaps rolled back forever. Larry Tesler coined the term Cut and Paste. While Nelson calls him a friend in Werner Herzog's Lo and Behold, Reveries of the Connected World, he also points out that Tesler's term is flawed. And I think this is where we as technologists have to sometimes trim down our expectations of how fast evolutions occur. We take tiny steps because as humans we can't keep pace with the rapid rate of technological change. We can look back and see a two-steps-forward, one-step-back approach since the dawn of written history. Nelson still doesn't think the metaphors that harken back to paper have any place in the online written word.  
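Nelson's transclusion idea, a span of live text embedded by reference rather than copied, can be sketched as a toy in Python. The document store, the slicing scheme, and every name here are illustrative, not Xanadu's actual design:

```python
documents = {
    "coleridge": "In Xanadu did Kubla Khan a stately pleasure-dome decree",
}

def transclude(doc_id, start, end):
    """A toy 'Xanalink': a live reference to a span of another document."""
    return lambda: documents[doc_id][start:end]

# A document mixes its own text with transcluded spans.
essay = ["Nelson borrowed the name from ", transclude("coleridge", 3, 9), "."]

def render(doc):
    return "".join(part() if callable(part) else part for part in doc)

print(render(essay))  # the span is pulled from the live source at render time
documents["coleridge"] = documents["coleridge"].upper()
print(render(essay))  # edits to the source appear wherever it is transcluded
```

Copying, by contrast (Tesler's cut and paste, or the web's quote-and-link), freezes the text at the moment of copying, which is exactly the distinction Nelson kept pressing.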
Here's another important trend in the history of computing. As we've transitioned to more and more content living online exclusively, the content has become diluted. One publisher I wrote online pieces for asked that they all be +/- 700 words, that paragraphs be no more than 4 sentences long (preferably 3), and that the sentences be written at about a 5th or 6th grade level. Maybe Nelson would claim that this de-evolution of writing is due to search engine optimization gamifying the entirety of human knowledge, and that a tool like Xanadu would have been the fix. After all, if we could borrow the great works of others, we wouldn't have to paraphrase them. But I think, as with most things, it's much more nuanced than that.  Our always-online, always-connected brains can only accept smaller snippets. So that's what we gravitate towards. Actually, we have plenty of capacity for whatever we choose to immerse ourselves in. But we have more options than ever before, and we of course immerse ourselves in video games or other less literary pursuits. Or are they more literary? Some generations thought books to be dangerous. As do all oppressors. So who am I to judge where people choose to acquire knowledge or what kind they indulge themselves in? Knowledge is power and I'm just happy they have it. And they have it in part because others were willing to water down the concepts to ship a product. Because the history of technology is about evolutions, not revolutions. And those often take generations. And Nelson is responsible for some of the evolutions that brought us the ht in http or html. And for that we are truly grateful! As with the great journey from Lord of the Rings, rarely is greatness found alone. 
The Xanadu adventuring party included Cal Daniels, Roger Gregory, Mark Miller, Stuart Greene, Dean Tribble, and Ravi Pandya. The project became a part of Autodesk in the 80s, got rewritten in Smalltalk, and was considered a rival to the web, but really it is more of an evolutionary step on that journey. If anything, it's a divergence then convergence to and from Vannevar Bush's Memex. So let me ask this as a parting thought: are the places where you are not willing to sacrifice any of your core designs or beliefs worth the price being paid? Are they worth someone else ending up with a place in the history books, where (like with this podcast) we oversimplify complex topics to make them digestible? Sometimes it's worth it. In no way am I in a place to judge the choices of others. Only history can really do that - but when it happens it's usually an oversimplification anyway. So the building blocks of the web lie in irreverence - in hypertext. And while some grew out of irreverence and diluted their vision after an event like Woodstock, others like Nelson and his friend Douglas Engelbart forged on. And their visions didn't come with commercial success. But as an integral building block to the modern connected world today, they represent as great a mind as practically anyone else in computing. 

Omnibus! With Ken Jennings and John Roderick
Episode 341: The Memex (Entry 774.EX3607)

Omnibus! With Ken Jennings and John Roderick

Play Episode Listen Later Mar 16, 2021 71:52


In which the Internet is born in 1945 when a radar technician in a bamboo hut reads a Life magazine article about a futuristic desk, and Ken wonders if anyone smoked weed at Los Alamos. Certificate #18461.

The REAL David Knight Show
The David Knight Show - Tuesday 9Mar2021

The REAL David Knight Show

Play Episode Listen Later Mar 9, 2021 182:32


* Today's Guest: #Cybersecurity, Infrastructure, #DarkWeb, #Memex *Coming: Deep #Censorship via Authentication #C2PA — the most dangerous threat yet to #FreeSpeech  * From Dolly Parton to Dalai Lama — celebrity vaccine propaganda * Day 360: FDA attacks Ivermectin as “holy grail” pill touted by FOX, state health dictators allow live music but not singing 3:54 Mom gets a chance to see how vile and dumbed down her son's Zoom class is — but she still doesn't see the full extent of the problem; Biden opposes #FreeSpeech case where a student is punished for off campus speech; concertina wire / barbed wire used in DC is not allowed at our border 23:45 Microsoft is at the center of 2 alliances — one of tech companies, the other of mainstream media — to identify content creators using major CPU manufacturers like Intel & Arm and content creation software like that from Adobe.  It will be used to attack privacy, anonymity and dissident content.  It may also be used to label as false any meme, text, video or audio with which govt disagrees  54:34 Celebrity vaccine endorsements from Dolly Parton to Dalai Lama are pushing vaccines. Hey, if Dolly Parton puts it in her body, you should be fine, right? 57:01 Christians who refuse vaccine demonized as lacking a “Spiritual Heart”.  Isolate, target, attack…each group that questions the untested experimental jab 1:23:59 Is Gardasil causing a sudden, deep drop in fertility coinciding with its heavy use in 2006?   1:39:33 What is the DarkWeb and what is Memex, DARPA's tool to attack it? 1:54:07 SolarWinds hack: Backdoors and cyberwar.  Will Biden use it to start a physical war? 2:02:49 How have we made our infrastructure, elections, medical devices, etc so much more vulnerable with IoT & 5G 2:05:56 China's “Sharp Eyes” surveillance program is being implemented in the US as “TALON - Flock Safety” 2:14:44 A black market in vaccine passports?  Something even better? 2:25:51 Automated, “self-driving” semi-trucks — for safety? 
Are you kidding? 2:31:14 Day 360 of #GreatReset dictatorship: vaccine coercion by the military; live music but NO singing; if health bureaucrats were doctors, they'd be sued for malpractice; Canadian bureaucrat lockdown b/c of spouse's BigPharma investments?; FDA's absurd attack on ivermectin as FOX pushes “holy grail” miracle covid pill Find out more about the show and where you can watch it at TheDavidKnightShow.com If you would like to support the show and our family please consider subscribing monthly here: https://www.subscribestar.com/the-david-knight-show Or you can send a donation through, PayPal at:  https://www.paypal.com/paypalme/davidknightshow Venmo at:  venmo@davidknightshow Cash App at:  $davidknightshow BTC to:  bc1qkuec29hkuye4xse9unh7nptvu3y9qmv24vanh7 Mail: David Knight, POB 1323, Elgin, TX 78621

DataCast
Episode 54: Information Retrieval Research, Data Science For Space Missions, and Open-Source Software with Chris Mattmann

DataCast

Play Episode Listen Later Feb 4, 2021 82:43


Timestamps:
(2:55) Chris went over his experience studying Computer Science at the University of Southern California for undergraduate in the late 90s.
(5:26) Chris recalled working as a Software Engineer at NASA Jet Propulsion Lab in his sophomore year at USC.
(9:54) Chris continued his education at USC with an M.S. and then a Ph.D. in Computer Science. Under the guidance of Dr. Nenad Medvidović, his Ph.D. thesis is called "Software Connectors For Highly-Distributed And Voluminous Data-Intensive Systems." He proposed DISCO, a software architecture-based systematic framework for selecting software connectors based on eight key dimensions of data distribution.
(16:28) Towards the end of his Ph.D., Chris started getting involved with the Apache Software Foundation. More specifically, he developed the original proposal and plan for Apache Tika (a content detection and analysis toolkit) in collaboration with Jérôme Charron; it was later used to extract data in the Panama Papers, exposing how wealthy individuals exploited offshore tax regimes.
(24:58) Chris discussed his process of writing "Tika In Action," which he co-authored with Jukka Zitting in 2011.
(27:01) Since 2007, Chris has been a professor in the Department of Computer Science at USC Viterbi School of Engineering. He went over the principles covered in his course titled "Software Architectures."
(29:49) Chris touched on the core concepts and practical exercises that students could gain from his course "Information Retrieval and Web Search Engines."
(32:10) Chris continued with his advanced course called "Content Detection and Analysis for Big Data" in recent years (check out this USC article).
(36:31) Chris also served as the Director of USC's Information Retrieval and Data Science group, whose mission is to research and develop new methodology and open source software to analyze, ingest, process, and manage Big Data and turn it into information.
(41:07) Chris unpacked the evolution of his career at NASA JPL: Member of Technical Staff -> Senior Software Architect -> Principal Data Scientist -> Deputy Chief Technology and Innovation Officer -> Division Manager for the AI, Analytics, and Innovation team.
(44:32) Chris dove deep into MEMEX - a JPL project that aims to develop software that advances online search capabilities to the deep web, the dark web, and nontraditional content.
(48:03) Chris briefly touched on XDATA - a JPL research effort to develop new computational techniques and open-source software tools to process and analyze big data.
(52:23) Chris described his work on the Object-Oriented Data Technology platform, an open-source data management system originally developed by NASA JPL and then donated to the Apache Software Foundation.
(55:22) Chris shared the scientific challenges and engineering requirements associated with developing the next generation of reusable science data processing systems for NASA's Orbiting Carbon Observatory space mission and the Soil Moisture Active Passive earth science mission.
(01:01:05) Chris talked about his work on NASA's Machine Learning-based Analytics for Autonomous Rover Systems - which consists of two novel capabilities for future Mars rovers (Drive-By Science and Energy-Optimal Autonomous Navigation).
(01:04:24) Chris quantified the Apache Software Foundation's impact on the software industry in the past decade and discussed trends in open-source software development.
(01:07:15) Chris unpacked his 2013 Nature article called "A vision for data science" - in which he argued that four advancements are necessary to get the best out of big data: algorithm integration, development and stewardship, diverse data formats, and people power.
(01:11:54) Chris revealed the challenges of writing the second edition of "Machine Learning with TensorFlow," a technical book with Manning that teaches the foundational concepts of machine learning and the TensorFlow library's usage to build powerful models rapidly.
(01:15:04) Chris mentioned the differences between working in academia and industry.
(01:16:20) Chris described the tech and data community in the greater Los Angeles area.
(01:18:30) Closing segment.

His Contact Info: Wikipedia, NASA Page, Google Scholar, USC Page, Twitter, LinkedIn, GitHub

His Recommended Resources: Doug Cutting (Founder of Lucene and Hadoop), Hilary Mason (Ex Data Scientist at bit.ly and Cloudera), Jukka Zitting (Staff Software Engineer at Google), "The One Minute Manager" (by Ken Blanchard and Spencer Johnson)


44BITS Podcast - Cloud, Development, Gadgets
stdout.fm 098.log: Apple live event recap, Daangn Market events, AWS Community Day Online, and more

44BITS Podcast - Cloud, Development, Gadgets

Play Episode Listen Later Dec 4, 2020 61:08


In the 98th stdout.fm log, we talked about Inflearn courses, the iMac, the Daangn Market hiring livestream, and more. Participants: @nacyo_t, @seapy, @subicura. Recurring support - stdout.fm is creating a podcast by programmers. Recorded September 18, published December 4. Show notes: events 44bits took part in; the Apple live event; events Daangn Market is taking part in (JS Conf, Wanted Con, SRE Seoul Meetup); AWS Community Day Online - app modernization special; backlinks added to Notion (Twitter announcement); Project Xanadu; Memex; the Netflix documentary The Social Dilemma; GitHub CLI 1.0 released (Twitter announcement)

IL BAZar AtOMICo
Ep. 03 - F***ing Genius with Massimo Temporelli

IL BAZar AtOMICo

Play Episode Listen Later Nov 27, 2020 213:19


Massimo Temporelli graduated in Physics from the University of Milan. In 2000 he obtained a fellowship at ST Microelectronics in Milan (a world leader in microchips), with which he developed the science programs for the new educational laboratories of the National Museum of Science and Technology in Milan. In 2005 he became curator of the Museum's Communication Department. Since 2010 he has worked, as an entrepreneur and freelancer, on spreading a culture of innovation. He gives talks and consults on innovation and digital culture for clients such as Luxottica, Edison, Mercedes, Salmoiraghi & Viganò, Magneti Marelli, Sace, Adecco, Comau, Leonardo, Dyson, Eni, Enel, Piquadro, and Kinder. Since 2012 he has taught Anthropology and Sociology at IED in Milan and Technology Platforms for Television at Università Cattolica, holds seminars on the history of technology in high schools, universities, and various master's programs, and writes for several magazines (Wired, Millionaire, Centodieci) as an author of essays on the world of innovation. In 2012 he was a speaker at TED in Florence, in 2017 he was master of ceremonies at TEDx Bergamo, and in 2020 he was again a speaker at TED in Turin. For years he has been a consultant for radio programs on science, technology, and innovation (Rai Radio 2, Rai Radio 3, Radio 24, Virgin Radio). He has been a guest on Superquark (Rai 1), Geo&Geo (Rai 3), Quelli che il calcio (Rai 2), La storia siamo noi (Rai Storia), and Visionari with Corrado Augias (Rai 3). He has hosted technology programs for DeaKids (Sky), for Discovery (Inside Mercedes), and for LaEffe on digital terrestrial TV. In 2015 and 2016 he hosted "L'Officina delle Idee," short segments on the history of science within the program Memex, broadcast on Rai Scuola and Rai 2. Since 2017 he has been the science tutor on the program "Detto Fatto" on Rai 2. 
In 2019 he hosted ToolBox4, a program about robotics on Rai Scuola. Among his recent editorial works: "Storie e cultura della televisione" (edited by Aldo Grasso, Mondadori, 2013), "Il codice delle invenzioni. Da Leonardo da Vinci a Steve Jobs" (2011), "La Banda di Via Panisperna" (2013), "Innovatori. Come pensano le persone che cambiano il mondo." (2015), "4 punto 0, fabbriche, professionisti e prodotti della Quarta rivoluzione industriale" (2017), and "Leonardo Primo Designer" (with Cristina Morozzi, 2019), all published by Hoepli Editore Milano. He is scientific director of Hoepli's "Microscopi" series. He is president and founder of The FabLab, an innovative laboratory where 3D printing, the Internet of Things, and robotics are changing how products are designed and produced. His latest book is F***ing Genius, published by HarperCollins (2020).

The History of Computing
How Not To Network A Nation: The Russian Internet That Wasn't

The History of Computing

Play Episode Listen Later Nov 2, 2020 20:07


I just finished reading a book by Ben Peters called How Not To Network A Nation: The Uneasy History of the Soviet Internet. The book is an amazing deep dive into the Soviet attempts to build a national information network, primarily in the 60s. The book covers a lot of ground and has a lot of characters, although the most recurring is Viktor Glushkov; if the protagonist isn't the Russian scientific establishment, perhaps it is Glushkov. And if there's a primary theme, it's looking at why the Soviets were unable to build a data network that covered the Soviet Union, allowing the country to leverage computing at a micro and a macro scale. The final chapter of the book is one of the best and most insightful summaries I've ever read on the history of computers. While he doesn't directly connect the command and control heterarchy of the former Soviet Union to how many modern companies are run, he does identify a number of ways that the Russian scientists were almost more democratic, at least in their zeal for a technocratic economy, than the US Military-Industrial-University complex of the 60s. The Sources and Bibliography section is simply amazing. I wish I had time to read, listen to, and digest all of the information that went into the making of this amazing book. And the way he cites notes that build to conclusions. Just wow. In a previous episode, we covered the memo "Memorandum for Members and Affiliates of the Intergalactic Computer Network," sent by JCR Licklider in 1963. This was where the US Advanced Research Projects Agency instigated a nationwide network for research. That network, called ARPAnet, would go online in 1969, and the findings would evolve and change hands when privatized into what we now call the Internet. We also covered the emergence of Cybernetics, which Norbert Wiener defined in 1948 as the systems-based science of communication and automatic control systems - and we covered the other individuals influential in its development.  
It's easy to draw a straight line between that line of thinking and the evolution that led to the ARPAnet. In his book, Peters shows how Glushkov uncovered cybernetics and came to the same conclusion that Licklider had: that the USSR needed a network that would link the nation. He was a communist, and so the network would help automate the command economy of the growing Russian empire - an empire that would need more people managing it than there were people in Russia if the bureaucracy continued to grow at the pace required to do the manual computing to get resources to factories and goods to people. He had this epiphany after reading Wiener's book on cybernetics, which had been hidden away from the Russian people as American propaganda. Glushkov's contemporary, Anatoly Kitov, had come to the same realization back in 1959. By 1958 the US had developed the Semi-Automatic Ground Environment, or SAGE. The last of that equipment went offline in 1984. The environment was a system of networked radar equipment that could be used as eyes in the sky to detect a Soviet attack. That was crazy to think about a few years ago, but think today about a system capable of detecting influence in elections, and maybe it's not so crazy any more. SAGE linked computers built by IBM. The Russians saw such a defense as cost prohibitive. Yet at Stalin's orders they began to develop a network of radar sites of sorts around Moscow in the early 50s, extending to Leningrad. They developed the BESM-1 mainframe from 1952 to 1953, and while Stalin was against computing and western cybernetic doctrine outside of the military, as in America, they were certainly linking sites to launch missiles. Lev Korolyov worked on BESM and then led the team to build the ballistic missile defense system. So it should come as no surprise that after a few years Soviet scientists like Glushkov and Kitov would look to apply military computing know-how to fields like running the economy of the country.  
Kitov had seen technology patterns before they came. He studied nuclear physics before World War II, then rocketry after the war, and then went to the Ministry of Defence at Bureau No 245 to study computing. This is where he came in contact with Wiener's book on Cybernetics in 1951, which had been banned in Russia at the time. Kitov would work on ballistic missiles, and his reputation in the computing field would grow over the years. Kitov would end up with hundreds of computing engineers under his leadership, rising to the rank of Colonel in the military. By 1954 Kitov was tasked with creating the first computing center for the Ministry of Defence, which would take on the computing tasks for the military. He would oversee the development of the M-100 computer and the transition into transistorized computers. By 1956 he would write a book called "Electronic Digital Computers," and over time his views on computers grew to include solving problems that went far beyond science and the military. Running the economy was one of them: Kitov came up with the Economic Automated Management System in 1959. This was denied because the military didn't want to share their technology. Khrushchev sent Brezhnev, who was running the space program and an expert in all things tech, to meet with Kitov. Kitov was suggesting they use this powerful network of computer centers to run the economy when the Soviets were at peace and the military when they were at war. Kitov would ultimately realize that the communist party did not want to automate the economy, and his "Red Book" project would eventually fizzle into one of reporting rather than command and control. The easy answer as to why would be that Stalin had considered computers the tool of imperialists, and that feeling continued with some in the communist party. The issues are much deeper than that, though, and go to the heart of communism. 
You see, while we want to think that communism is about the good of all, it is irrational to think that people will not act in their own self-interest. Microeconomics and macroeconomics. And automating command certainly seems to reduce the power of those in power who see that command taken over by a machine. And so Kitov was expelled from the communist party and could no longer hold a command. Glushkov then came along, recommending the National Automated System for Computation and Information Processing, or OGAS for short, in 1962. He had worked on computers in Kyiv and then moved to become the Director of the Computer Center in Ukraine at the Academy of Science. Being even more bullish on the rise of computing, Glushkov went further and even added an electronic payment system on top of controlling a centrally planned economy. Computers were on the rise in various computer centers and other locations, and it just made sense to connect them. And they did, at small scales. As was done at MIT, Glushkov built a walled garden of researchers in his own secluded nerd-heaven. He too made a grand proposal. He too saw the command economy of the USSR as one that could be automated with a computer, much as many companies around the world would employ ERP solutions in the coming decades. The Glushkov proposal continued all the way to the top. They were able to show substantial return on investment, yet the proposal to build OGAS was ultimately shot down in 1970 after years of development. While the Soviets were attempting to react to the development of the ARPAnet, they couldn't get past infighting. The finance minister opposed it and flatly refused. There were concerns about which ministry the system would belong to, and basically political infighting much as I've seen at many of the top companies in the world (and increasingly in the US government).  
A major thesis of the book is that the Soviet entrepreneurs trying to build the network acted more like capitalists than communists, and the Americans building our early networks acted more like socialists than capitalists. This isn't about individual financial gains, though. Glushkov and Kitov in fact saw how computing could automate the economy to benefit everyone. But a point that Peters makes in the book is centered around informal financial networks. Peters points out that Blat, the informal trading of favors that we might call a black market or corruption, was commonplace. An example he uses in the book is that if a factory performs at 101% of expected production, the manager can just slide under the radar. But if they perform at 120%, then those gains will be expected permanently, and if they ever dip below the expected productivity, they might meet a poor fate. Thus Blat provided a way to trade goods informally and keep the status quo. A computer doing daily reports would make this kind of flying under the radar of Gosplan, or the Soviet State Planning Committee, difficult. Thus factory bosses would likely enter inaccurate information into computers and further the Tolchachs, or pushers, of Blat. A couple of points I'd love to add onto those Peters made, which wouldn't be obvious without that amazing last paragraph in the book. The first is that I've never read Bush, Licklider, or any of the early pioneers claim computers should run a macroeconomy. The closest they came was computers that could help run a capitalist economy. And the New York Stock Exchange would begin the process of going digital in 1966, when the Dow was at 990. The Dow sat at about that same place until 1982. Can you imagine that these days? Things looked bad when it dropped to 18,500. And the London Stock Exchange held out on going digital until 1986 - just a few years after the Dow finally moved over a thousand. Think about that as it hovers around 26,000 today. 
And look at those companies and imagine which could get by without computers running their business - much less which are computer companies. There are 2 to 6 billion trades a day. It would probably take more than the population of Russia just to push those numbers if it all weren't digital. In fact now, there's an app (or a lot of apps) for that. But the point is, going back to Bush's Memex, computers were to aid in human decision making. In a world with an exploding amount of data about every domain, Bush had prophesied the Memex would help connect us to data and help us to do more. That underlying tenet infected everyone that read his article and is something I think of every time I evaluate an investment thesis based on automation. There's another point I'd like to add to this most excellent book. Computers developed in the US were increasingly general purpose and democratized. This led to innovative new applications just popping up and changing the world, like spreadsheets and word processors. Innovators weren't just taking a factory "online" to track the number of widgets sold or deploying ICBMs - computers were foundations for building anything a young developer wanted to build. The uses in education with PLATO, in creativity with Sketchpad, in general purpose languages and operating systems, in early online communities with mail and bulletin boards, in the democratization of the computer itself with the rise of the PC and the rapid proliferation with the introduction of games, and then the democratization of raw information with the rise of Gopher and the web and search engines. Miniaturized and in our pockets, those are the building blocks of modern society. And the word democratization to me means a lot. But as Peters points out, sometimes the capitalists act like communists. Today we close down developer access to various parts of those devices in order to protect people. 
I guess the difference is now we can build our own, but since so many of us do that at #dayjob we just want the phone to order us dinner. Such is life and OODA loops. In retrospect, it's easy to see how technological determinism would lead to global information networks. It's easy to see electronic banking and commerce and that people would pay for goods in apps. As Amazon stock soars over $3,000, consider what Jack Ma has done with Alibaba and the empires built by the technopolies at Amazon, Apple, Microsoft, and dozens of others. In retrospect, it's easy to see the productivity gains. But at the time, it was hard to see the forest for the trees. The infighting got in the way. The turf-building. The potential of a bullet in the head from your contemporaries when they get in power can do that, I guess. And so the networks failed to be developed in the USSR, and ARPAnet would be transferred to the National Science Foundation in 1985, and the other nets would grow until it was all privatized into the network we call the Internet today, around the same time the Soviet Union was dissolved. As we covered in the episode on the history of computing in Poland, empires simply grow beyond the communications mediums available at the time. By the fall of the Soviet Union, US organizations were networking in a build-up from early adopters, who made great gains in productivity and signaled the chasm crossing that was the merging of the nets into the Internet. And people were using modems to connect to message boards and work with data remotely. Ironically, China has since splinterneted that merged Internet, and Russia seems poised to splinter it further. But just as hiding Wiener's cybernetics book from the Russian people slowed technological determinism in that country, cutting various parts of the Internet off in Russia will slow progress if it happens. The Soviets did great work on macro and micro economic tracking and modeling under Glushkov and Kitov. 
Understanding what you have and how data and products flow is one key aspect of automation. And sometimes even more important is helping humans make better-informed decisions. Chile tried something similar in 1973 under Salvador Allende, but that system failed as well. And there's a lot to digest in this story. But that word progress is important. Let's say that Russian or Chinese crackers steal military-grade technology from US or European firms. Yes, they get the tech, but not the underlying principles that led to the development of that technology. Just as the US and partners don't proliferate all of their ideas and ideals when they restrict the proliferation of that technology in foreign markets. Phil Zimmermann opened the floodgates when he printed the PGP source code to enable the export of military-grade encryption. The privacy gained in foreign theaters contributed to greater freedoms around the world. And crime. But crime will happen in an oppressive regime just as it will in one espousing freedom. So for you hackers tuning in - whether you're building apps, hacking business, or reengineering for a better tomorrow: next time you're sitting in a meeting and progress is being smothered at work, or next time you see progress being suffocated by a government, remember that those who you think are trying to hold you back either don't see what you see, are trying to protect their own power, or might just be trying to keep progress from outpacing what their constituents are ready for. And maybe those are sometimes the same thing, just from a different perspective. Because "go fast at all costs" not only leaves people behind but sometimes doesn't build a better mousetrap than what we have today. Or, go too fast and, like Kitov, you get stripped of your command. No matter how much of a genius you, or your contemporary Glushkov, are. 
The YouTube video called “Internet of Colonel Kitov” has a great quote: “pioneers are recognized by the arrows sticking out of their backs.” But hey, at least history was on their side!  Thank you for tuning in to the History of Computing Podcast. We are so, so, so lucky to have you. Have a great day and I hope you too are on the right side of history!

Geopizza
A Mãe de Todas as Demos – Geopizza #34

Geopizza

Play Episode Listen Later Sep 8, 2020 193:58


When was the internet created? In the 1990s? Effectively yes, but the sharing of information between computers had already existed through other systems since 1973. And even before the 20th century, human beings were already sharing information through machines that were not computers: telegraphs, since the 19th century. The idea of creating a global information network, where someone could talk with other people through machines, came along with the invention of the telegraph. In 1891, two Belgian lawyers created a project with a purpose similar to the internet: the Mundaneum. The machine would have housed a great digital library that could be accessed from different telegraphs. Due to technological limitations and World War I, the Mundaneum was never built. In the 1940s, during World War II, the scientist Vannevar Bush, who was directly involved in the conflict, realized that if people did not acquire more knowledge they would be doomed to wage wars driven by their selfish desires. If the "collective IQ" were not raised, it would only be a matter of time before the world destroyed itself in a nuclear war. Vannevar Bush coined the concept of the "Memex", a device that would access the "World Information Network", making it possible to read and receive messages from many people, as well as to access libraries around the world. The Memex was never produced, but it deeply influenced an engineer in the 1960s named Douglas Engelbart. Engelbart was the first scientist to use a computer - until then a calculator - to send messages through ARPANET, a predecessor of the internet. Engelbart also created the first mouse, the first keyboard, and the first personal computer, the oN-Line System. Thanks to his inventions, entrepreneurs like Steve Jobs and Bill Gates appropriated his technologies in the race to create the first affordable personal computer in the 1970s.

The History of Computing

Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we're going to cover yet another of the groundbreaking technologies to come out of MIT: Sketchpad. Ivan Sutherland is a true computer scientist. After getting his master's from Caltech, he migrated to the land of the Hackers and got a PhD from MIT in 1963. The great Claude Shannon supervised his thesis and Marvin Minsky was on the thesis review committee. But he wasn't just surrounded by awesome figures in computer science; he would develop a critical piece between the Memex in Vannevar Bush's "As We May Think" and the modern era of computing: graphics. What was it that propelled him from PhD candidate to becoming the father of computer graphics? The 1962-1963 development of a program called Sketchpad. Sketchpad was the ancestor of the GUI, object oriented programming, and computer graphics. In fact, it was the first graphical user interface. And it was all made possible by the TX-2, a computer developed at the MIT Lincoln Laboratory by Wesley Clark and others. The TX-2 was transistorized and so fast. Fast enough to be truly interactive. A lot of innovative work had come with the TX-0, and the program would effectively spin off as Digital Equipment Corporation and the PDP series of computers. So it was bound to inspire a lot of budding computer scientists to build some pretty cool stuff. Sutherland's Sketchpad used a light pen. These were photosensitive devices that worked like a stylus, sensing light from the display to locate the pen against the dots on a cathode ray tube (CRT). Users could draw shapes on a screen for the first time. Whirlwind at MIT had allowed highlighting objects, but this graphical interface to create objects was a new thing altogether, inputting data into a computer as an object instead of loading it as code, as could then be done using punch cards.  
Suddenly the computer could be used for art. There were toggle-able switches that made lines bigger. The extra memory that was pretty much only available in the hallowed halls of government-funded research in the 60s opened up so many possibilities. Suddenly, computer-aided design, or CAD, was here. Artists could create a master drawing and then additional instances on top, with changes to the master reverberating through each instance. They could draw lines, concentric circles, change ratios. And it would be two decades before MacPaint would bring the technology into homes across the world. And of course AutoCAD, making Autodesk one of the greatest software companies in the world. The impact of Sketchpad would be profound. Sketchpad would be another of Doug Engelbart's inspirations when building the oN-Line System, and there are clear correlations in the human interfaces. For more on NLS, check out the episode of this podcast called the Mother of All Demos, or watch it on YouTube. And Sutherland's work would inspire the next generation: people who read his thesis, as well as his students and coworkers. Sutherland would run the Information Processing Techniques Office for the US Defense Department Advanced Research Project Agency after Lick returned to MIT. He also taught at Harvard, where he and his students developed the first virtual reality system in 1968, years before it was patented by VPL Research in 1984. Sutherland then went to the University of Utah, where he taught Alan Kay, who gave us object oriented programming in Smalltalk and the concept of the tablet in the Dynabook, and Ed Catmull, who co-founded Pixar, and many other computer graphics pioneers. He founded Evans and Sutherland with David Evans, the man who built the computer science department at the University of Utah, and their company launched the careers of John Warnock, the founder of Adobe, and Jim Clark, the founder of Silicon Graphics. 
His next company would be acquired by Sun Microsystems and become Sun Labs. He would remain a Vice President and fellow at Sun and a visiting scholar at Berkeley.  For Sketchpad and his other contributions to computing, he would be awarded a Computer Pioneer Award, become a fellow at the ACM, receive a John von Neumann Medal, receive the Kyoto Prize, become a fellow at the Computer History Museum, and receive a Turing Award.  I know we're not supposed to make a piece of software an actor in a sentence, but thank you Sketchpad. And thank you Sutherland. And his students and colleagues who continued to build upon his work.

InSecurity
Andrew Lewman: Isn’t the Dark Net for Criminals?

InSecurity

Play Episode Listen Later Jun 24, 2020 79:41


    “What’s changed most about Tor is the drug markets have taken over… We had all these hopeful things in the beginning but ever since Silk Road has proven you can do it, the criminal use of Tor has become overwhelming. I think 95% of what we see on the onion sites and other dark net sites is just criminal activity. It varies in severity from copyright piracy to drug markets to horrendous trafficking of humans and exploitation of women and children.”    -- Andrew Lewman; cyberscoop, 22 May, 2017   Do you know what the Darknet is? No seriously… do you ACTUALLY understand what the Darknet is?     On this episode of InSecurity, Matt Stephenson and Michelle Moskowitz speak with Dark Owl Exec VP Andrew Lewman about The Darknet. As the former CEO of The Tor Project, he knows a thing or two about what happens in the Upside Down of the internet. From the Multiverse of Darknets to why business needs to be concerned with activity on the Darknet to the work Andrew is doing with law enforcement, it’s a wide-open look at an area not everyone understands.   About Andrew Lewman     Andrew Lewman (@andrewlewman) is the Executive Vice President at Dark Owl. He has more than 30 years of global-scale technology experience in a variety of domains, including information security, systems administration, and data management. His interest lies in the intersection of technology and humans.   He successfully grew a few companies as a co-founder and top executive, such as TechTarget, The Tor Project, Farsight Security, and DarkOwl. Andrew advises the US and its Allies, having worked on SAFER Warfighter, MEMEX, SHARKSEER, CRISP, and others. And as a technology advisor to Interpol’s Crimes Against Children Initiative.   Andrew is a keynote speaker and frequent media contact for conferences, invited speeches and the global press. He is publishing with Elsevier Digital Investigations, EMCDDA, and Fordham University Press. 
Andrew's most recent publication is in Digital Investigation: The darknet's smaller than we thought: The lifecycle of Tor Hidden Services. As Treasurer for Emerge, Andrew is helping to stop domestic violence through counseling abusers. As Chairman of Each One Teach One, he's providing economic opportunity for women and girls through technology education.     About Michelle Moskowitz       Michelle Moskowitz is Vice President of Business Development & Chief of Staff at Sublime Communications. In her previous lives, she spun up the New Media Division for the Lifetime network as well as working with numerous cybersecurity startups.   With a career spent swimming in the waters of digital marketing and consulting, Michelle has somehow found the time to also be a journalist at the Greenwich Sentinel.   Michelle will be joining us as a recurring co-host to bring additional perspective to the important role that communication plays in a world that grows increasingly technical.     About Matt Stephenson       Insecurity Podcast host Matt Stephenson (@packmatt73) leads the Broadcast Media team at BlackBerry, which puts him in front of crowds, cameras, and microphones all over the world. He is the regular host of the InSecurity podcast and video series at events around the globe.   Twenty years of work with the world's largest security, storage, and recovery companies has introduced Stephenson to some of the most fascinating people in the industry. He wants to get those stories told so that others can learn from what has come before.     Every week on the InSecurity Podcast, Matt interviews leading authorities in the security industry to gain an expert perspective on topics including risk management, security control friction, compliance issues, and building a culture of security. Each episode provides relevant insights for security practitioners and business leaders working to improve their organization's security posture and bottom line.   Can't get enough of Insecurity? 
You can find us at ThreatVector, Blackberry, Apple Podcasts and Spotify as well as GooglePlay, Stitcher, SoundCloud, I Heart Radio and wherever you get your podcasts!   Make sure you Subscribe, Rate and Review!

Advent of Computing
Episode 26 - Memex and Hyperlinks

Advent of Computing

Play Episode Listen Later Mar 22, 2020 41:38


The widespread use of the internet has shaped our world; it's hard to imagine the modern day without it. One of its biggest features would have to be the hyperlink. But despite the modern net feeling so new, links actually date back as far as the 1930s and the creation of the Memex: a machine that was never built but would influence coming generations of dreamers. Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and stickers: https://www.patreon.com/adventofcomputing

Management 2.0 Podcast
LOA047 – Die Idee des lernOS Memex beim #teamsbc20

Management 2.0 Podcast

Play Episode Listen Later Mar 8, 2020 32:24


On January 31, 2020, I presented the idea of the lernOS Memex as a "personal Wikipedia" for knowledge workers at the Teams Barcamp. The session was recorded directly from within Microsoft Teams, so please excuse the occasionally poor audio quality.

CogNation
Episode 25: NASA Data Scientist Chris Mattmann

CogNation

Play Episode Listen Later Feb 22, 2020 62:01


Chris Mattmann, Principal Data Scientist at NASA's Jet Propulsion Laboratory, talks about bridging the gap between lab scientists and data scientists, his work with DARPA unearthing the dark web, machine learning in autonomous planetary rovers, and other cool stuff he's been doing. Chris Mattmann's page at NASA (https://scienceandtechnology.jpl.nasa.gov/dr-chris-mattmann) More information about the Memex program at DARPA can be found here (https://www.darpa.mil/program/memex). Chris's forthcoming book, Machine Learning with TensorFlow (https://www.manning.com/books/machine-learning-with-tensorflow-second-edition?query=Chris%20Mattmann) (2nd ed.) will be available soon. CogNation listeners can get 40% off all Manning products by using the code "podcogn20" when ordering from Manning Publications (manning.com). Special Guest: Chris Mattmann.

Pensieri in codice
La Filosofia Dell'ipertesto

Pensieri in codice

Play Episode Listen Later Apr 3, 2019 9:54


Every day we move around on the enormous hypertext that forms the Web. But how was it born, and from what ideas? Sources: As we may think - https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/ A File Structure for The Complex, The Changing and the Indeterminate - http://cs.brown.edu/courses/cs196-9/p84-nelson.pdf La storia dell'ipertesto - https://it.m.wikibooks.org/wiki/Filosofia_dell%27informatica/Storia_dell%27ipertesto Equipment: Blue Yeti microphone* - https://amzn.to/3kSE35f Pop filter* - https://amzn.to/3baPMsh Pop filter* - https://amzn.to/2MH0Wf1 Sound-absorbing shield* - https://amzn.to/3sOZE0P Your purchase price will be no higher, but Amazon will pass a small share of the proceeds on to me. Telegram channel - http://bit.ly/joinPicTelegram Spreaker - http://bit.ly/picSpreaker Youtube - http://bit.ly/picYT Spotify - http://bit.ly/picSpotify Itunes - http://bit.ly/picItunes Support the project: via Satispay, via Revolut, via PayPal, or using Pensieri in codice's affiliate links: Amazon, Todoist, ProtonMail, ProtonVPN, Satispay. Partners: GrUSP (discount code for all events: community_PIC), Schrödinger Hat. Credits: Editing - Daniele Galano - https://www.instagram.com/daniele_galano/ Intro voice - Costanza Martina Vitale Music - Kubbi - Up In My Jam Music - Light-foot - Moldy Lotion Cover art and transcription - Francesco Zubani

Pensieri in codice
Ep.7 - La filosofia dell'ipertesto

Pensieri in codice

Play Episode Listen Later Apr 3, 2019 9:54


Every day we move around on the enormous hypertext that forms the Web. But how was it born, and from what ideas? Sources: As we may think - https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/ A File Structure for The Complex, The Changing and the Indeterminate - http://cs.brown.edu/courses/cs196-9/p84-nelson.pdf La storia dell'ipertesto - https://it.m.wikibooks.org/wiki/Filosofia_dell%27informatica/Storia_dell%27ipertesto Telegram channel - http://bit.ly/joinPicTelegram Spreaker - http://bit.ly/picSpreaker Youtube - http://bit.ly/picYT Spotify - http://bit.ly/picSpotify Itunes - http://bit.ly/picItunes Credits: Editing - Daniele Galano - http://bit.ly/2UkKzXk Intro voice - Costanza Martina Vitale Music - Kubbi - Up In My Jam Music - Light-foot - Moldy Lotion

Folksalert
EP 50: Artificial Intelligence - Fighting Human Trafficking

Folksalert

Play Episode Listen Later Mar 6, 2019 40:25


Mayank Kejriwal is a computer scientist at the USC Information Sciences Institute, where he conducts research on the IARPA HFC and DARPA LORELEI, CauseEx, D3M, and MEMEX projects, work that has been covered by 60 Minutes, Forbes, Scientific American, the Wall Street Journal, the BBC, and Wired. He holds a PhD from the University of Texas at Austin. His dissertation, "Populating a Linked Data Entity Name System," received the Best Dissertation Award from the Semantic Web Science Association in 2017. Mayank has worked on an AI architecture called DIG that helps resource-strapped law enforcement crack down on trafficking activity.

Bionic Bug Podcast
Memex (Ch. 30) – Bionic Bug Podcast Episode 030

Bionic Bug Podcast

Play Episode Listen Later Nov 18, 2018 24:38


Hey everyone, welcome back to Bionic Bug podcast! You’re listening to episode 30. This is your host Natasha Bajema, fiction author, futurist, and national security expert. I’m recording this episode on November 18, 2018. First off, a personal update. I’m excited to announce that I’m in the midst of producing an audiobook for Bionic Bug. I’m currently auditioning professional narrators and expect the project to be finished in early 2019. I apologize for the break in releasing podcast episodes. I’ve been recovering from a nasty sinus infection the past few weeks, and my voice continues to be scratchy. You’ll probably notice if you listen to the next chapter of Bionic Bug. I hope you enjoyed the bonus episode where I interviewed Samuel Bennett, an expert on robotics, AI and Russia. Check out the episode and the show notes, which include links to his recent publications. Let’s talk tech. Just two headlines for this week. “Jon and Daenerys United in First Pic from Game of Thrones’ Eighth and Final Season,” published on Nov 1 on SyFy.com: “Production on the final season stretched to 10 months and includes another year of post to tackle the challenging visual effects.” “Season 8 is anticipated to showcase what's being touted as the biggest battle in TV history, which took an unprecedented 55 nights to shoot.” “And the whole process has been so secretive that HBO has taken to deploying "drone killers" to take out any drones that might have been flying above the set to spy on the action.” “HBO Literally Shooting Down Drones To Prevent Game of Thrones Season 8 Spoilers”: “Shaped like a gun, a "drone killer" is aimed at any flying nuisance and shoots out a beam, disabling the drone and driving it back down to the ground. IXI Technology in Yorba Linda, California is responsible for the technology, which costs about $30,000 a pop. The company's also supplied new gadgets to the U.S. military for over three decades.” “Are Killer Robots the Future of War? 
Parsing the Facts on Autonomous Weapons,” published on Nov 15 by Kelsey Atherton in the NY Times. This article addresses a fundamental question: should machines be allowed to make lethal decisions in battle? Until now, these decisions have been made by humans. Even though autonomous systems exist today, a human remains in the loop to make the ultimate decisions on destroying targets. But we’re moving into an era where autonomous systems are becoming more intelligent and thus more capable of making such decisions. One of the challenges is that the decision-making speed of machines exceeds that of humans. When one country decides to go fully autonomous on the battlefield, others may be compelled to follow, because, as Paul Scharre has said: speed kills. Check out his book Army of None on Amazon. This is a fantastic article to introduce you to the key issues, and I encourage you to read it. Let’s turn to Bionic Bug. Last week, Lara found important clues at Sully’s townhouse, and Rob and Lara got locked in the safe room. Let’s find out what happens next. The views expressed on this podcast are my own and do not reflect the official policy or position of the National Defense University, the Department of Defense or the U.S. Government.

Designing Interactive Systems I '18
5.3 Transactions Systems, Time Sharing, Memex, and Radar Systems

Designing Interactive Systems I '18

Play Episode Listen Later Nov 5, 2018 8:45


Speaker for the Living 'Human Trafficking' Podcast
Challenges of Using Software to Fight Human Trafficking

Speaker for the Living 'Human Trafficking' Podcast

Play Episode Listen Later May 6, 2018 37:26


Seth and JJ talk about the challenges of using software to fight human trafficking, most notably on the Internet. Software tools like MEMEX and those produced by Thorn are essential in making connections across many forms of data, but algorithms can only do so much, and false positives and false negatives can occur. Sex and labor trafficking require different software approaches. Software targeting sex traffickers can analyze ads and forum postings across the web. Assessments and audits of supply chains are one way of using software to address labor trafficking. Humans will remain essential in collecting, analyzing, and interpreting data. Seth explains software concepts. And JJ questions the effectiveness of demand reduction approaches on the Internet for sex trafficking (while not talking about demand reduction for labor trafficking). Sources: Ashton Kutcher Testifies to Senate Foreign Relations Committee, Speaker for the Living Memex Helps Find Human Trafficking Cases Online, Human Trafficking Center How the Global Economy Fosters Human Trafficking, Stefano Zamagni The Economics of Human Trafficking, Institute for Faith, Work & Economics Spotlight, Thorn Software that detects human trafficking, The Economist Assessments, Verité Maurice Middleberg, Free the Slaves Our Research, Laboratory to Combat Human Trafficking Photo: DARPA
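The false positives and false negatives the hosts mention are usually quantified with precision and recall. As a general illustration only (the counts and the ad-flagging scenario are invented, not from the episode), the trade-off looks like this:

```python
# Precision/recall sketch for a hypothetical ad-flagging classifier.
# All counts are illustrative, not from any real trafficking dataset.

def precision_recall(true_pos, false_pos, false_neg):
    """Return (precision, recall) as fractions."""
    precision = true_pos / (true_pos + false_pos)  # flagged ads that were real cases
    recall = true_pos / (true_pos + false_neg)     # real cases that got flagged
    return precision, recall

# Suppose the tool flags 120 ads, 80 correctly, and misses 20 real cases.
p, r = precision_recall(true_pos=80, false_pos=40, false_neg=20)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.80
```

Low precision means investigators waste time on false positives; low recall means victims are missed, which is why humans remain essential in interpreting the output.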

Komando On Demand
The fascinating and dangerous Dark Web

Komando On Demand

Play Episode Listen Later Mar 23, 2018 17:02


Part 2 of our series on The Dark Web. It’s a fascinating and dangerous place, but as public intrigue grows, so does the number of horrific stories that emerge from this misunderstood corner of the internet. In this Komando on Demand podcast, I’ll explain how this modern technology is enabling modern slavery and I’ll talk to the freedom fighters who bring cyber criminals to justice.

Komando On Demand
The fascinating and dangerous Dark Web

Komando On Demand

Play Episode Listen Later Nov 24, 2017 18:22


Part 2 of our series on The Dark Web. It’s a fascinating and dangerous place, but as public intrigue grows, so does the number of horrific stories that emerge from this misunderstood corner of the internet. In this Komando on Demand podcast, I’ll explain how this modern technology is enabling modern slavery and I’ll talk to the freedom fighters who bring cyber criminals to justice.

Komando On Demand
Modern slavery: Model escapes captors who allegedly try to sell her on the Dark Web

Komando On Demand

Play Episode Listen Later Sep 29, 2017 20:11


The Dark Web. It’s a fascinating and dangerous place, but as public intrigue grows, so does the number of horrific stories that emerge from this misunderstood corner of the internet. One such story is the traumatic nightmare that British model Chloe Ayling says she went through when she was kidnapped and sold online. In this Komando on Demand podcast, I’ll explain how this modern technology is enabling modern slavery and speak with the freedom fighters who bring cyber criminals to justice.

Advanced Manufacturing Now
The Philosophy and Application of Data-Driven Manufacturing

Advanced Manufacturing Now

Play Episode Listen Later Jul 10, 2017 17:59


A manufacturing concern can get the greatest return on its investment by optimizing its overall equipment effectiveness, or OEE. In this sponsored podcast, Memex CEO David McPhail details the relationship between OEE and the Industrial Internet of Things. Data, he explains, is what drives this relationship. Data-driven manufacturing consists of keeping track of everything that happens on the shop floor, and software, including Memex’s Merlin Tempus solution, helps monitor and manage that data.  
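The episode discusses OEE only at a high level. As general background (a conventional definition, not something specific to Memex's Merlin Tempus product, and with invented example figures), OEE is the product of three factors, availability × performance × quality:

```python
# Conventional OEE calculation: availability x performance x quality.
# The shift figures below are illustrative, not from the episode.

def oee(run_time, planned_time, ideal_cycle_time, total_count, good_count):
    """Return (availability, performance, quality, oee) as fractions."""
    availability = run_time / planned_time                      # uptime vs. plan
    performance = (ideal_cycle_time * total_count) / run_time   # actual vs. ideal speed
    quality = good_count / total_count                          # good parts vs. all parts
    return availability, performance, quality, availability * performance * quality

# Example shift: 420 planned minutes, 373 running, ideal cycle 1.0 min/part,
# 350 parts made, 340 of them good.
a, p, q, score = oee(run_time=373, planned_time=420,
                     ideal_cycle_time=1.0, total_count=350, good_count=340)
print(f"OEE = {score:.1%}")  # prints OEE = 81.0%
```

Data-driven monitoring of the kind described in the episode amounts to feeding these three factors with live shop-floor data instead of end-of-shift estimates.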

Voices from DARPA
Episode 9: The Datamancer

Voices from DARPA

Play Episode Listen Later May 1, 2017 28:32


Mr. Wade Shen of the Agency’s Information Innovation Office has made it his mission to improve how human beings and their computers put their respective heads and cognitive frameworks together to yield deep insight into how the world works and how information affects the way people think and act. Listen in on how Shen is enacting that mission with the DARPA programs he oversees, among them the Data Driven Discovery of Models (D3M) program, the Quantitative Crisis Response (QCR) program, and the Memex program, which is devoted to advancing search capabilities far beyond the current state of the art. Shen also muses about what it would take to build a universal translator that would enable all 7.4 billion people on the planet to overcome language barriers and to talk with one another.

Decipher SciFi : the show about how and why
Forbidden Planet: robust design in AI, exobiology, and philology w/ Chris Noessel

Decipher SciFi : the show about how and why

Play Episode Listen Later May 3, 2016 72:58


Robby the Robot Prop design. AI language understanding. Super strength and a matter replicator. Bad reactions to conflicting orders. Like a mechanical calculator dividing by zero! Morbius He’s totally Zod! Philology. Engineering. Dangerous tech designs. Gender Stuff Cutting edge sexiness in the 50s. Innocent virgins. Blaming the victim. To the captain go the spoils. The deleted scenes that explain the animal behaviour: Decoding the Krell Best attempts at judging Krell biology from their technology and architecture. Their “plastic educator.” Deciphering their language. Using alien Wikipedia on microfilm. Vannevar Bush’s Memex. The alien arithmetic problem. Jef Raskin's Alien Arithmetic Problem Jef Raskin Self Destruct A comparison of scuttling procedures in Alien and Forbidden Planet. Speculation about Krell hearing and sight frequencies. The Design of Everyday Things by Donald Norman: iTunes / Amazon Support the show!

RCI Canadá en las Américas Café

Memex, the new Deep Web search engine

Take Back the Day

Sam and Simon materialise in closely proximate time and space for an actual, real person discussion about goals and systems, democracy in the workplace, memory devices and other curious things. It's all happening. Stuff mentioned in this episode: Brain Pickings. How to Fail at Almost Everything and Still Win Big: Kind of the Story of My Life by Scott Adams. As We May Think by Vannevar Bush. The Blood episode of Radiolab. 23andMe. Sex, Death and The Meaning of Life by Richard Dawkins. Google and Calico.

Gresham College Lectures
From World Brain to the World Wide Web

Gresham College Lectures

Play Episode Listen Later Nov 9, 2006 45:45


The World Wide Web has evolved into a universe of information at our fingertips. But this was not an idea born with the Internet. This lecture recounts earlier attempts to disseminate information that influenced the Web - such as the French Encyclopédistes in the 18th century, H. G. Wells' World Brain in the 1930s, and Vannevar Bush's Memex in the 1940s. This lecture was jointly held with the British Society for the History of Mathematics.

Metamuse

Discuss this episode in the Muse community Follow @MuseAppHQ on Twitter Show notes 00:00:00 - Speaker 1: I think the process is just inherently much messier than that, and you need to let go a little bit and say the tool is going to help you make this stew, and then you’ll sleep on it for a few days and then somewhere else, something new will pop out. Hello and welcome to Metamuse. Muse is software for your iPad that helps you with ideation and problem solving, but this podcast isn’t about Muse the product, it’s about Muse, the company and the small team behind it. I’m Mark McGranaghan, and I’m here today with my colleague Adam Wiggins. How’s it going, Adam? 00:00:30 - Speaker 2: I’m pretty good, Mark. I just got back from a short trip up to the Baltic Sea, which is a pretty easy train ride from where I live in Berlin. This is the first real trip I’ve taken since, I guess, the pandemic started, so about six or seven months. And it was really refreshing, even though it was just a couple of days, and I was reminded of something you said back in our very second episode of the podcast about having good ideas, which is how fresh surroundings refresh your brain creatively. That really came to mind because I was reminded of how much I missed that in this time when travel is not a part of our lives the way it used to be. 00:01:12 - Speaker 1: Yeah, I’m always surprised by how powerful that effect is. So today our topic is tools for thought. Now Adam, what does that mean for you? 00:01:21 - Speaker 2: Well, tools for thought means a lot of things to me, but I think the first place my head goes to is Howard Rheingold’s classic book from, I think it was the 80s, where he details Xerox PARC and many of these visionary folks who were thinking about computing in its early days and what it could do for humans and our creative and productive lives. 
But I actually step back even a little bit from there, because the original tools for thought, I feel, are anything that lets you externalize your thoughts. And so pen and paper, writing, language, is the starting place there, the printing press maybe. But more in modern times, things like sketchbooks, or, I don’t know, in a startup office you’ve got whiteboards, in a school you’ve got chalkboards. Post-it notes are a great tool for thought, in fact, because you can write down these little snippets of information and move them around, maybe in a physical space with colleagues. There’s even something like, I remember at a team summit we had a few years back, might have even been there in the park in Seattle, you wanted to illustrate a point and ended up grabbing a stick and basically drawing a very simple diagram in the dirt, right? So anything that lets you really either make visual or somehow externalize what’s in your mind, I think, is a type of tool for thought. And that also includes the consumption side, which is what I usually call active reading. So a book and a highlighter together, I think, is a type of tool for thought. The act of highlighting passages that you find impactful or relevant to what you’re trying to learn about makes this reading process into an active process and a learning one, and that becomes a tool for thought as well. 00:03:06 - Speaker 1: Yeah, and I’m sure we’ll dive into a lot of different kinds of specific instantiations of tools for thought. But another way to think about this is, what is the problem you’re trying to solve here? Two possibilities: one would be, you’re trying to obtain knowledge that has already been generated by someone else. You’re trying to learn some facts, memorize some figures, maybe retain some ideas, and different tools for thought can help you with that. 
Another angle would be, you’re trying to generate new ideas, novel thoughts, and a tool might help you accomplish that as well. And I think which one you’re trying to do is actually quite important for which tool you choose. 00:03:40 - Speaker 2: Yeah, another reference I was looking up in prep for this episode was Andy Matuschak’s work. He’s got a piece called How can we develop transformative tools for thought?, and his current research track is more on that learning, retaining side of things, these mnemonic devices and so on. This is a nice article, I’ll link it in the show notes, because in the later part of the article he describes a lot of this history, particularly around the computing tools for thought, Steve Jobs and the bicycle for the mind. He quotes Alan Kay, who’s one of the sort of big visionaries in this world, as saying that actually “medium for thought” in some cases might be a better term, but for whatever reason, tools for thought seems to be the label that stuck. So Andy’s work, I think, is a good example of the how do you get more out of what you’re trying to learn about, and then there’s the having ideas or generating new thoughts or generating original ideas, which is obviously the space Muse is trying to play in, or at least we’re trying to create a tool that can help the end user have better ideas and develop their ideas. So yeah, coming back to the digital space, the Tools for Thought book spends time on, for example, the Xerox PARC lab that invented a lot of the modern GUI operating system and other things that we sort of take for granted in the modern computing world. There’s also folks like Doug Engelbart and his vision to augment human intellect. 
There’s people like Alan Kay, who invented Smalltalk and object-oriented programming and had this vision for a thing called a Dynabook, which I guess you could say physically looked a lot like an iPad looks today, but was more focused on the creative and productive uses of computing. And there’s even folks like Vannevar Bush, who wrote an essay from the 1940s that people still quote today about this vision he had for a thing called a Memex. And I think one thing you get when people talk about these, the Engelbarts and Kays and Bushes, is that they’re often sort of lamenting a future that maybe we were dreaming of in those times. You look at today’s computing, and for all the really impressive technology that we have and all the things that computers and software and the internet can do for us, in some ways we didn’t really fulfill some of the beautiful vision that these folks had. In fact, I think some of those folks are even in some ways a bit bitter towards the end of their careers when they see all these startups and whatever putting all this money into these shiny products that in fact are more kind of entertainment boxes rather than something designed to really elevate the human race. 00:06:23 - Speaker 1: Yeah, and Andy makes a point in his article that there are good economic reasons why that’s the case, or why we would expect that to be the case. Essentially, because new ideas and tools for thought are sort of a public good, it’s hard to capture economic value when you make innovations in that space. But we still think it’s possible both to have new ideas here and to build a business around it. 00:06:43 - Speaker 2: Yeah, well, I guess if we fast forward a little bit from the halcyon days of these computing visionaries in the 60s, 70s, and 80s, a little bit more to when personal computing became commonplace, maybe the 1990s, I think what you see, or at least when I think of productivity software really broadly speaking, I tend to think of what I usually call authoring applications. This is something like: you use Illustrator if you’re a designer, or you use Microsoft Word if you’re a writer, or use Excel if you’re a financial analyst. These are really designed for an end artifact. You’re producing something to be consumed by someone else. When you type into your word processor, it’s because eventually you want to publish that book or publish an article online. I think folks often do use these authoring tools for the thinking phase. If you’ve ever opened your word processor, or as a programmer maybe a text editor or something like that, to sketch down some ideas, not with the intention that it’s ever going to be given to someone else, but to get your own head together, it’s just because that’s the tool you know how to use and it’s right there. But it’s not really designed for that. In fact, in a way, it’s a poor fit; you just happen to know about it. And I’ve seen some really creative uses, certainly for people that like laying things out visually and spatially, kind of like we strive for with Muse. We saw someone, for example, that did all their master’s thesis research in Illustrator, because they wanted to lay out all these papers they were reading, and the excerpts they were taking from them, and how they all connect together, on this big spatial canvas. And it turns out that Illustrator was the best choice for that at that time. Maybe nowadays people do that with Figma somewhat, which I think is great, that people are doing these innovative uses. 
But that was part of what led to the impetus for us wanting to build a tool for thought that was more purpose-built for enhancing the individual’s or even a group’s thinking. Now in practice, because we’ve seen so few commercial tools for thought, I wonder if that means that people don’t value that ideation step enough to want to invest in it. So that’s monetarily, do they want to pay for software, but it’s also just taking the time to learn a piece of software, or to put your data and your thoughts into a piece of software when that’s not the end place it’s going to be. So I think that’s certainly a risk, or an open question, for Muse and really anyone else that’s working on a tool for thought. 00:09:13 - Speaker 1: Yeah, I do think there’s a commercial piece there, where the obviously biggest market is when you’re close to the end product that you’re producing for a business: you’re producing a presentation, you’re producing a book, there’s obvious economic value that you want to attach to that, and there’s a bunch of people who obviously need to do that. 00:09:28 - Speaker 2: Yeah, and I think that’s most notable when you try to sell software to professionals. One of the best pitches you can offer is: this will make you look really good to your client. You will close more deals, or you will impress your boss, or you will get that big deal that you’re trying to do. And so for presentation software, or really good financial modeling, or the word processor, the value there is really clear to people. If you say this will make your ideas better or make your decisions better, for some reason that’s a less poignant sales pitch, I think. 00:10:00 - Speaker 1: Yeah, and I keep coming back to this idea that there’s an incomplete understanding of the creative process. We’ve long advocated for this 3-step process where you’re: 1, gathering raw materials; 2, actively reading, processing, ruminating, brainstorming on those materials; and then 3, authoring an end product. I think a lot of people think of the creative process as 1 and 3, because there’s obvious physical content that you’re dealing with in each of those cases: you have to pull in some raw materials, like sources to cite in your paper, and you have to produce a paper at the end to send to the publisher. But you can kind of get away without doing the middle stuff, without, you know, thinking. Or you can just do it all in your head. But the premise with Muse is that there’s a very powerful and important second step there that, with the right tooling support, can give you even more power as a thinker. 00:10:47 - Speaker 2: Yeah, one place that tools for thought have come back into the current conversation is the product Roam Research, which has been getting a lot of traction among people who I think like to think deeply, particularly built around a daily journaling practice, which I think is a really good way to get your thoughts out in a freeform way. One of the things I remember them complaining about, if that’s the right way to put it, is being trapped in this category of note taking. Note taking is an interesting category because it seems to span a lot of things. You’ve got a classic like Evernote, which in theory should be kind of a tool for thought. It’s supposed to be sort of a second brain: you put stuff into it, you can find it later. But the reality is it doesn’t necessarily help you find connections. I think it sort of failed to deliver on that promise. It’s maybe more of a knowledge base or knowledge store. I use Dropbox for that, for example, and I think that’s true for a lot of notes apps, things we’ve talked about here before, something like Bear, for example. 
It’s a really nice way to quickly capture a thought on your smartphone, and then you have access to it later, but it’s not really a place to do a lot of deep ideation. I don’t know, maybe you sketch down a few thoughts you have in bullet-point form, but it’s not a good place for really freeform ideation, and maybe that’s a place where Roam is helping change things a little bit. I also see this tool-for-thought label sometimes applied to some other hot new products, which include Notion, which is more of a team wiki, team brain kind of thing, but I think it can fit with that as well. Figma, as previously mentioned: sometimes people use it as this kind of visual canvas. Even something like Airtable, which is a spreadsheet, but often again people use it in these team contexts to capture knowledge and to basically find shared understanding on the team. There’s no end artifact for the client; it’s more internal to the team’s own understanding of the problem space. 
And the way you feed that process is you ruminate over a lot of interesting intellectual material. So the reason I think these apps are useful tools for thought is twofold. One is people like to use them. They just like to spend time writing notes in Roam, and kind of regardless of where those notes end up, or if you ever read them again, just the process of writing, and thinking as you’re writing, generates a lot of fodder for this process in your sleeping mind. And number 2, increasingly these tools support multimedia, and I’ve long said creative thinking never takes just one medium; it’s never just text or just images. In tools like Figma, it’s very easy to make a canvas where you have images and text and vector graphics and so on, all in one place. I think that’s important because that’s, again, naturally how the mind thinks creatively. 00:14:08 - Speaker 2: For sure. I’m a big believer in, as you said, feeding the sleeping mind, working on problems in the background stew, and yeah, externalizing your thoughts in some form is a way that helps you turn it over. And that can take lots of different forms. It can be sketching, it can be writing; voice memoing is another interesting trick; even just talking to another person, right? This is where an open-ended chat, you know, the classic water-cooler talk, or just taking a walk and talking with a colleague, working through something, helps seed that background process in the brain, and I agree. Whatever it is should be enjoyable and comfortable. And so that means, for something like one of these analog tools, I think the reason why sketchbooks and Moleskines and whatever have continued to have such a place in the heart of creative people like me and many others is that they’re just enjoyable. You grip a nice pen, and there’s that tactile feeling of your hand moving across the page. I think whiteboards can have a similar feeling as well. And with digital tools you need the same thing. If it’s fun and enjoyable to open a new Notion page and assign it an emoji and drag in some media and type out your text and then share it with a colleague for discussion, then you’re going to want to do it. And then that in turn is a nice virtuous cycle. 00:15:27 - Speaker 1: Yeah, and this is a podcast about tools for thought, and I think it’s appropriate to keep it scoped, but I would say the human creative process is so much bigger than tools: things like the social element, who you’re talking to and who you’re motivated by; the physicality element, the position of your body, how it’s moving or not; the location element, like we were talking about in the intro. These are all super important, and I think it’s easy for us as technologists to over-rotate towards what’s on the rectangular screen when there’s so much more to the creative process. And again, it’s something we’ve tried to tap into with Muse, so that, for example, you can use it while you’re reclining on your comfy couch, or you can use both your hands at the same time and use all the degrees of freedom you have in your arms, things like that. 00:16:04 - Speaker 2: Absolutely. Related to that, one thing I wanted to ask you about is whether you’ve read this book Thinking, Fast and Slow by Daniel Kahneman. Yep, classic. Yeah, I found myself thinking about that in this tools for thought context. So just to briefly summarize for those that are not familiar: the author basically categorizes our ways of thinking in daily life into these two creatively named System 1 and System 2 brains, where System 1 is more the fast thinking, the quick judgment, the immediate reaction, and the System 2 brain is slower, more analytical. 
I especially like the framing that the System 1 brain’s main job is this “assessing normality,” as they call it. That is to say, we have these built-in habits and assumptions about the world, just this way that we expect things to be, everything from how my furniture is arranged in my home to what the political landscape is like in my nation. Our System 1 brains are constantly taking stock of whether what they’re seeing fits into that established pattern, and essentially raise a flag, raise something into our attention, when something breaks that pattern. So that System 1 fast, instinctive, emotional brain, I think, is pretty natural to reach for, certainly in social settings, but especially in information-age-style online gathering places, the social media and so on. Whereas System 2 is obviously what we’re most interested in on our team and with the tool for thought that we’re building: the slower, more deliberate, more logical, analytical mode, slower both in a literal time sense, but also in the sense of more consideration, and purposely breaking the habits that are already in your mind and trying to form new connections. 00:17:56 - Speaker 1: Yeah, I think that’s true, but I also think there’s value in domesticating, if you will, the System 1 mind. It’s so powerful, but it’s also by default very wild and instinctual. If you give it the right care and feeding, with the right intellectual material that you’re ruminating on, to continue the animal metaphor here, it can be very powerful. And again, I think this is especially true in your sleep, basically, where if you take in the right materials, do the right active reading, and give that a few days, you’ll often form interesting new connections and ideas. 00:18:26 - Speaker 2: For me, a go-to technique is to literally sleep on it. 
And in fact, I've brought this up often enough on the team that people sometimes poke fun at me that my solution to any tough problem is to go to sleep. But I really find that so many breakthrough solutions or new ways of looking at something have occurred to me after stepping away, and particularly the restorative power of sleep and what happens to your mind at that time. That obviously just requires time: if you're trying to turn around a decision the same day, you can't sleep on it, either literally or figuratively. So try to arrange your creative life, or set things up in your work or other places where you want to make good decisions and have good ideas, to allow yourself this time. I know on the Muse team we often like to do things in parallel. We may have a few different projects going on at a time that we switch back and forth between a little bit. Sometimes that can be lack of focus, which is a bad sign, but in some cases I find this is a really effective way to work: I work on something for a while, maybe get a little stuck or am not sure what the best path forward is, step away, switch my context for a few days, and when I come back to it I'm often unstuck, because in a way this background process has been working on the problem the whole time. 00:19:47 - Speaker 1: Yeah, and then bringing it back to tools for thought, I think it's important that the tools not try to draw too straight a line between inputs and ideas. Often I feel like tools are trying to say: you get the inputs in, you form the right connections, and then somehow the tool will lead you to the right answer.
I think the process is just inherently much messier than that, and you need to let go a little bit and say: the tool is going to help you make this stew, and then you'll sleep on it for a few days, and then somewhere else something new will pop out. You might not even be able to see that straight line, and it might not be reified in the tool, but you have to trust that that process is going to happen in your sleeping mind. 00:20:19 - Speaker 2: Another area under tools for thought I was curious to get your take on is the role of attention and focus, and I touched on that with the System 1 brain and how it surfaces things to System 2. In the process of doing deep work and going deep on a problem, we know it's important to be able to focus on something deeply, but how do you see that as interacting with a tool for thought like Muse or these others we've talked about? 00:20:44 - Speaker 1: Well, now that you mention it, my half-joking answer is that perhaps the most powerful tool for thought that I have is industrial-strength noise-canceling headphones, like the type you wear when you're using a chainsaw. It's actually very helpful in blocking out the noise that I have here in the city. 00:20:58 - Speaker 2: On controlling noise in your environment: I think it was in one of our very first email updates from Muse that we linked to what I think of as a very useful tool in the creative person's toolkit, which is a white noise generator. In that case it was one for the iPhone, but I've used a web-based white noise generator that does rainfall and fireplace crackling and whatever, that I can put into a pair of headphones, especially noise-canceling headphones.
It can be really nice, particularly if you're in a noisy environment, like trying to work on a plane or a train or something like that, because it absolutely takes effort to keep your attention on something, and the more your environment demands your attention, the less effort you will have left to spend on the thing you're trying to focus on. That's why I like a quiet office, a physical space that's conducive to doing work. 00:21:46 - Speaker 1: I think if there's even the possibility that you'll be distracted or pulled back from your creative thought process, it's hard to get into it. I remember this from when I was a full-time programmer: even having a meeting on my calendar at, like, 3 p.m. made it hard to do certain programming problems at the beginning of the day, because you knew at some point you were going to be interrupted and you had to break your train of thought. I think the same dynamic is happening if you know that that little red dot could come up, or a notification could pop up on your screen. So one idea we've had with Muse is to really be respectful and give the creator control over whether anything is ever going to interrupt their work or appear on their screen. 00:22:20 - Speaker 2: The reason I brought it up is that I see this as, coming back to this glorious vision for what computers could do for people some decades back and where we are today, a direct conflict between attention and what you want out of, call it, consumption technology. When it comes to your phone and your messaging apps and your social media, precisely what you want is to feel connected. You literally want to be interrupted; that is the feature. You want to feel connected to what's going on, to know about the breaking news right away, and when there's some important message from a thing that's happened with your team.
Or a thing that's happened in your family, you want to find out, be connected to that, be able to turn around and respond immediately. And that's well and good, but it is just in direct opposition to what you want if you want to sit down and get a really big chunk of productive work done, or particularly bring your energy and attention to bear on a problem that is maybe just past what you're currently capable of, whether that's a new thing you're trying to learn, whether you're an academic trying to develop a fresh idea that's pushing the boundaries of science, or whether you're a product creator or a startup person trying to figure out the strategy for your company or something like that. You've got to really push yourself, and you need every single cycle of your brain's computing power you can get, and anything that draws your attention away or demands your attention interferes with that and makes you slightly less able to go after solving that problem. 00:23:53 - Speaker 1: Yeah, totally. A related idea on this theme of headspace and how you're feeling is the aesthetics of the tool environment. I think it's really important that creators have control over the aesthetics of their environment and can change it to their taste. If we told an artist that you have to go into this studio, it has to be exactly this size, you can only paint the walls this one gray color, you can only use this one paintbrush, you have these four colors you can use, you can only paint in this style, it'd be like, what are you talking about? But we do that all the time with software: your environment has to look like this, and that's that. It can feel trivial to give users this basic agency over what they're doing in their environment, but I think it's really important.
One small example from Muse: we have these settings-panel-type things, and in most apps, when you open a settings panel, it goes in the upper left or it goes in the upper right, and that's that; hopefully you're OK with it, and if it's covering some of your content, well, too bad. But we had this idea that even for something as simple as a settings panel, you should be able to put it where you want, so that if you have something on the right-hand side of your screen that you're working on, you can put the settings panel on the left, or vice versa. Giving users basic agency over their environment like that, I think, is really important. 00:25:01 - Speaker 2: Yeah, I think one of our Ink & Switch research pieces touched on the desire for creative types to nest, where basically when you walk into the professor's office or the designer's studio, you tend to see an arrangement that reflects their personality and certainly the needs of their work, but also a kind of home, a creative home. And I think that connects not only to the utility of it, OK, I tend to use this one physical tool, so it's sitting in a place that's easy for me to reach on my desk, but also to this feeling of comfort, safety, familiarity. I think you're able to do your best work and be creative and productive and focused when you feel those things, and it's much harder to do in an unfamiliar environment, a sterile environment, one that maybe isn't adapted to your needs in the same way. Going back to Andy's great article about tools for thought, he has a section there where he talks a bit about the machine learning and AI stuff. Now I guess GPT-3 is the new buzzy item, and this is a question I've run into quite frequently when I talk about what I've worked on, what I'm working on here at Muse, and what I've done.
As well as in the research lab. To oversimplify, the response is often, if I say I'm working on tools for thought and describe what that is: well, pretty soon AI is going to be here and do all our thinking for us, so what's the point of that? And I don't have a great answer to that. I don't believe it in my heart, but maybe that's because I'm incentivized not to believe it, because I enjoy building tools for people to think and create. So maybe I have a little bit of a blind spot. But have you run into that question? If so, how do you think about the role of, let's say, AI, however you want to define that, in tools for thinking and creativity? 00:26:49 - Speaker 1: Well, let's say first that there are a lot of interesting areas where AI is vastly superior but people are still really interested in learning. My favorite examples here are chess and Go and other games like that. The computers now are insanely powerful, yet people still love learning those games because of the intellectual challenge and the reward, and I actually think a really interesting frontier for tools for thought is how you leverage this amazing AI power to help people learn these games faster in a programmatic way. So I can imagine something in the style of Andy's mnemonic medium, which in his case uses spaced repetition to help you stay at the frontier of your knowledge: when you're on the brink of forgetting, or when it's most important to reinforce a concept, is when it challenges you with a question. I can imagine a similar thing applied to a domain like a game, where instead of having some linear and predetermined set of lessons or problems, it plays you and says, OK, these are your weaknesses, you need to do some exercises in these three areas.
I'm going to keep giving them to you until you master them, and then we'll move on to the next area. And that can all be done programmatically, because these computers have a much better understanding of the game than we ever will, even experts. 00:27:51 - Speaker 2: Chess actually makes me think of this book I read a little while back by Garry Kasparov. I just looked it up; it's called Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. Famously, this guy is a world-famous chess grandmaster, the highest-ranked chess player in the world for a period of time. But he was also the first, the first time that the best human chess player in the world was beaten by a computer, and many heralded that as a huge PR win for the people building these AI algorithms, but for a lot of people it heralded the beginning of, call it, robots taking our jobs, or the AI is going to be here, or what have you. And it's so interesting, because on one hand he walks through the experience of grappling with this alien intelligence, this thing that plays the game in a way that is so different from how any human would. Then he goes on to talk about how the game has changed in the years since: now it's just taken for granted that chess computers are better than human players, period. But it didn't necessarily lead to a generalized artificial intelligence. For now, computers can be extremely good at playing chess, and that doesn't really seem to lead to something beyond that. You can obviously go from there to, OK, now they can play Go and they can play StarCraft; maybe that does eventually lead to something general-purpose.
But the point you mentioned that made me think of the book: he talked about how the game has changed, and really what it comes down to is humans and computers collaborating to play their best game. They analyze, for example, the games of the players they're going to go up against. So even if you're not using a computer at the time of playing the game, your game has changed substantially, because you have this computer assistant helping you in the training, the analysis, the pre-game, the post-game. And so in fact we're seeing that it's not really that chess AI replaces human chess playing; it's more that it's just morphed... 00:29:57 - Speaker 1: Morphed the whole sport, right. And I think that points to the general future here: it's not AIs taking over all our jobs and our work, it's more of a symbiosis and collaboration. Perhaps the most obvious version of this is that the AI is very good at generating a bunch of plausible possibilities, especially one like GPT-3. It just spits out a bunch of texts, and maybe 90% of them are no better than plausible; you read them closely and they don't really make sense. But 1 out of 10, the human can say, ah, that's actually quite interesting, I'm going to pluck that one for my business email or what have you. So I think we'll see a whole wave of tools like that. But otherwise, I'll believe the takeover of AI when I see it in the productivity statistics, which of course we haven't for some decades. 00:30:33 - Speaker 2: Yeah, on the theme of creative tools in tandem or in symbiosis with a human: generative design, I think, is one area that's got some buzz, and that's the basic idea that you can feed a computer algorithm or an AI of some kind
A set of constraints for a problem you have: you're designing a building and you want it to hold this many people and have these kinds of structural qualities and these kinds of aesthetic qualities. And it essentially generates a bunch of options, and then you can choose between them and winnow them down. These kinds of assistive tools often have to do with, call it, brute force: the ability to generate lots of options, and lots of weird options, potentially. Actually, one place that I've used that kind of thing, not AI but just an algorithm, is in naming several different companies, including Heroku and Ink & Switch. I basically wrote little programs that took some of our raw input that we brainstormed and combined it together in every feasible way. In the case of Ink & Switch, we knew we wanted two words separated by an ampersand. We came up with every word for each slot, slot A and slot B, that we wanted, and I just wrote a program that spit out every single possible combination, and we could go through them and look for what we liked best. That's pretty far from generative design, I suppose, but it fits into this general assistive-tools theme. And certainly one thing I hear from folks a lot when they talk about this is, OK, we've come to accept autocorrect in our writing. 00:31:58 - Speaker 1: What's the autocorrect for thought? Though I feel like autocorrect is getting worse. It's just like it's going rogue; it's underlining random words now. 00:32:04 - Speaker 2: I actually did an experiment: I got irritated enough with autocorrect, in the sense that it's great when it works, but when it doesn't, it's way more effort to go back and correct, way more effort to get what you want. So I did an experiment for a little while of just turning off autocorrect on my phone. And you know what, I think I was about as fast.
I was slower on individual words that autocorrect would have gotten, but if you take away the correcting-of-mistakes thing that I so often had to do, I think it came out as kind of a net wash. And there was definitely an emotional win to not being frustrated with the thing autocorrecting a person's name, or whatever, for the 10th time. 00:32:46 - Speaker 1: Another potential angle on AI and tools for thought is via social networks. As much as I like tools and software, it's probably the case that the most powerful technologies, if you will, that we have for thought are the social networks and the institutions that we participate in. The thoughts that we have are so influenced by our friends, our colleagues, who we're talking to, what we're seeing. And of course we're seeing a lot of that happening via social networks these days, and there are a lot of ways you can say that's bad or troublesome, and there's certainly some work to do, but just something like YouTube or Twitter being able to help you find people in your area of interest to talk to and learn from is very powerful. And I think there's actually a lot more we could do in that space, using AI to build robust social networks that in turn help you have better thoughts. 00:33:30 - Speaker 2: Yeah, that also connects back to the creative fodder idea. As we've said many times before, ideas don't come from nowhere; they're a bricolage of other ideas. Where does that come from? Well, exposing yourself to as many different ideas as you can, through as many sources as you can. Something like Twitter, for example, is just a really amazing place to do that; YouTube as well can be. Now, I think it's hard or even impossible to have your own ideas, or have original ideas, if you're constantly plugged in.
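The brute-force naming program Adam described a moment ago, crossing every slot-A word with every slot-B word around an ampersand, could be sketched in a few lines of Python like this. The word lists here are hypothetical stand-ins for illustration, not the actual brainstorm input:

```python
from itertools import product

def name_candidates(slot_a, slot_b, joiner=" & "):
    # Cross every slot-A word with every slot-B word, preserving order.
    return [f"{a}{joiner}{b}" for a, b in product(slot_a, slot_b)]

# Hypothetical brainstormed words, not the real Ink & Switch input.
slot_a = ["Ink", "Page", "Mark"]
slot_b = ["Switch", "Loop", "Field"]

for name in name_candidates(slot_a, slot_b):
    print(name)  # 9 combinations to skim by hand
```

The point of a sketch like this is exactly what the episode describes: the computer does the exhaustive generation, and the humans do the judging.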
The same thing is true at a work or team level: your company's Slack, whatever other forums you have for connecting with your colleagues. It's really powerful to be connected to that group mind, and to be bombarded by and influenced by all the ideas and opinions. But in the end, if you want to have an original thought, I think you need to disconnect from that a little bit. If you completely disconnect, you just won't have that fodder; but if you're plugged in all the time, you'll never have an original thought, because you're just being pushed to and fro by everyone else's ideas. And so there's some pendulum swing from connection to isolation, where you can connect for a while, get all that fodder, disconnect a little bit, go a little deeper on your own ideas, then come back and reconnect. So, thinking about the future: we've already seen some exciting movement in tools for thought making it into production or commercial environments with things like Notion, Roam, and Figma, as well as great research work like Andy's work on the mnemonic medium, or something like Anki, the spaced repetition system that's kind of related to that. What do you think the future holds, particularly given the public-goods problem you mentioned earlier of how this stuff gets funded? Are we going to enter a renaissance where we can maybe finally reach the beautiful vision that these folks from the 60s, 70s, and 80s outlined? Do you think there's a new direction where things will go? Is it going to continue to be hard to get tools for thought built in today's world? 00:35:27 - Speaker 1: The economics problem is going to remain hard but not insurmountable. These things are inherently somewhat of a public good; it's hard to fund them, slash, capture the value when you make great tools, and I think that's going to be the case for the foreseeable future given the social technology that we have.
But that said, I feel like it's still very doable to make a lot of progress in these areas. It just takes a bit of will and vision, and perhaps the willingness to forego maximum economic return for yourself personally. I feel like even small teams with today's technology can make a lot of progress, and I think we're seeing that. And then in the substance of the tools, I think first of all we're going to continue to see certain trends keep playing out. One is this trend of mixed media and multimedia in the same tool. I think that's very important; with tools like Notion and Figma and Roam, people are becoming more and more accustomed to that, it's going to be baked in, and we're going to be less tolerant of tools that are strictly for one medium. Another trend that we're continuing to see is the improved aesthetics, slash, the consumerization of industrial-strength thinking tools, which again I think is great. 00:36:26 - Speaker 2: It needs to be fun, fast, a little playful. You could argue that Moleskine, which is, you know, just a sketchbook company, but I definitely count them as a tool for thought. They're more expensive than, but no better in a practical sense than, a cheap paper notebook, but people like how they feel and how they look, and that aesthetic element makes a difference for, I think, your ability to do good creative work. 00:36:51 - Speaker 1: Yeah, and one other existing trend that I see continuing and accelerating is leaning on video, slash, video games. These are mediums that were hard to use or hard to produce content for even 5 or 10 years ago, and now the technology is such that basically anyone can make really high-quality content in these areas, so we're seeing more and more of that, YouTube being the predominant example. But I think video and the video game model will be integrated more into tools for thought.
And then looking forward: OK, I think there's a fairly obvious bet about AI that we talked about, and that one's been played out a fair amount on Twitter and so on, so I won't go into it too much here. But if I had to pick one less obvious trend to bet on, it would be leveraging software to enhance the traditionally non-tool-y aspects of the creative process: the social side, the physicality side, things like that. Those have historically been two pretty different worlds, but I see more tools bridging that gap and leveraging the importance of those spaces for your creative process. 00:37:51 - Speaker 2: Yeah, very interesting. What are some examples of products or companies or tools that you've seen that tap into this community-and-people side of things? 00:38:00 - Speaker 1: It's often the case that the gaming industry is the leader here. There are now these incredibly sophisticated communities around individual video games, where people follow creators they're really interested in. It started as just watching someone play the video game; then these became social environments where there's a kind of community around it; and then it becomes a way to learn how to play the game. There are a bunch of tutorials and lessons, and you learn from other people in the community and you watch each other play and so on. And that's all mediated by technology, because it's otherwise very hard for these people to find their community. There might be 1,000 people in the world who are really into some niche video game and playing at a high level, but with the right tools and platforms, between Twitch, YouTube, Discord, and the game itself, you can form a community. 00:38:44 - Speaker 2: And I'll note that that includes not just playing games.
But also speedrunning and the like, and it also includes creating the games. Many indie game developers stream themselves programming and designing the game on Twitch. People jump in and watch that and learn from them. And there are also huge YouTube communities and channels around just generally learning to program and learning all kinds of technical skills. Certainly I've learned things about video editing and the like through YouTube. So this pattern of watching a creator or producer use some sophisticated piece of software to do their creative process, maybe thinking out loud as they do, is a really powerful way to share tacit knowledge about how people do what they do. 00:39:29 - Speaker 1: Yeah, and then, like you're saying, it's trickling down from games into more professional or tradecraft environments, things like photo editing, video editing, or things like woodworking; there are now sophisticated communities around those and online tools where we can learn. But then, bringing it back to tools for thought, we're starting to see these communities and tools form around more intellectual topics and ideas. There's a bit of a progress studies community developing, for example; now we have podcasts and classes and Twitter, cohorts and some Slacks and some Discords. Those feel pretty early, but it feels like we're bringing some of those patterns and sensibilities from the gaming world into these more intellectual domains. 00:40:12 - Speaker 2: Well, that comes back to, I think, when we say tools for thought, sometimes we're talking about, for example, methodologies, ways of working, things like Getting Things Done or Inbox Zero or Building a Second Brain. So you've got communities, you've got software that you run, you've got analog tools, you've got techniques and methodologies.
So really this is, I guess, a lot broader than, as you said earlier, just what goes in the rectangle. 00:40:40 - Speaker 1: And also I think technology is going to infuse all these other areas, and we're going to have a sort of technologies for thought, if you will: software per se, but also communities, networks, methodologies, habits, institutions, Twitter threads, and so on, all working together to help people develop better ideas. 00:40:59 - Speaker 2: Well, that makes me pretty excited for the future of being a thinker and a creative person. 00:41:04 - Speaker 1: Well, with that, I think we can wrap it, and if any of our listeners out there have feedback, feel free to reach out to us at @museapphq on Twitter or hello@museapp.com by email. We love to hear your comments and especially ideas for future episodes. 00:41:18 - Speaker 2: See you later, Mark. 00:41:19 - Speaker 1: See you, Adam.

Metamuse

00:00:00 - Speaker 1: Being able to do important and deep work in a world where information is not scarce but abundant; not only abundant, but so abundant that it essentially becomes a problem. Hello and welcome to Metamuse. Muse is software for your iPad that helps you with ideation and problem solving. This podcast isn't about the Muse product, it's about Muse the company and the small team behind it. My name is Adam Wiggins. I'm here today with my colleague Mark McGranaghan. How's it going, Mark? Alright, Adam, how are you doing? Yeah, I'm doing well. Reading an interesting book about the life of Claude Shannon, the guy that invented information theory. This was at Bell Labs circa, I guess, the middle of the last century. For example, that seminal paper coined the term bit, which I almost take for granted sometimes. These fundamental inventions, you think, well, we've always known what a bit is, but in fact boiling information down to a stream of ones and zeros, and being able to reason about that mathematically, is an extremely significant breakthrough, to put it mildly, and surprisingly recent from my perspective. Yeah, interesting. So our topic today is the information age, and I usually put Information Age in caps, in comparison to, say, the Iron Age or the Industrial Revolution. And the basic idea is that humanity, or society, has entered an era that's defined by the massive availability and free flow of information. This dates back to, I think the Wikipedia page talked about, the invention of the transistor, which made possible things like global telephone networks and radio and TV, but obviously the computer came out of that as well.
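To make concrete what "reasoning about information mathematically" can mean (a gloss added here, not something discussed in the episode): Shannon's entropy formula measures the average information content of a source in bits, and a quick sketch shows why a fair coin flip is worth exactly one bit:

```python
from math import log2

def entropy_bits(probabilities):
    # Shannon entropy: H = -sum(p * log2(p)), measured in bits.
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))  # fair coin: exactly 1.0 bit per flip
print(entropy_bits([0.9, 0.1]))  # biased coin: less than 1 bit per flip
```

A biased coin carries less information per flip because its outcome is more predictable, which is the kind of quantitative statement the bit made possible.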
I think it's become particularly acute, how different the information age is from what came before, in the last 10 years or so, with smartphones and the internet and social media. One statistic I read recently that I found a little mind-blowing: there's essentially total penetration of internet and smartphones. The stat I read was that there are 5 or 5.5 billion people on Earth who are over the age of 15, so adults, and of those, 5 billion have some kind of mobile phone and about 4 billion have smartphones. So for our purposes, again, everyone's connected, and now this new age is kind of defined by that. 00:02:35 - Speaker 2: Well, that's a broad and weighty topic. What's on your mind about the information age, then? 00:02:39 - Speaker 1: Yeah, well, obviously it connects to Muse here, because we see it as a tool that helps with this, particularly for creative professionals: being able to do important and deep work in a world where information is not scarce but abundant; not only abundant, but so abundant that it essentially becomes a problem. I read a nice article the other day called The Information Pathology, and it makes this comparison to how in the 20th century the abundance of food sort of flipped our widespread health problems: from not enough nutrition, not enough calories, which is essentially a problem most humans had faced most of their lives, or for most of the existence of civilization, it flipped over to now we're worried about having access to too much food, so the problems are obesity and diabetes and heart disease and so forth. And the author makes a comparison to say, well, maybe in the 21st century we have a similar thing with information, where we're also hardwired in many ways to seek information, because new information is a way to be
prepared for what the future might hold, to assess our safety, and to do things to improve our lives; when you know things about what's going on in the world around you, that can be extremely helpful, to say the least. But then you add in this era of hyperconnectivity and the 24-hour news cycle and social media and newsletters, and everything's being pushed to you all the time and everything seems important. And that can quickly turn into more of a gambler-at-a-slot-machine dynamic, getting the dopamine hit from that next piece of information, rather than spending your life on things that are more meaningful, to the point that we have people thinking about things like digital detox and deleting social media apps from their phones. This is quite a big topic now: how you actually manage this problem of information abundance. 00:04:38 - Speaker 2: Yeah, you had shared that article with me, and I found it very interesting and indeed alarming. The topic of food and nutrition is one that I'd studied for a while, and that's an area where there's something that's incredibly important, but over the past 100 years or so we've really lost the plot, and it's caused an enormous amount of damage to us as individuals and as a society, and we haven't fully confronted or even understood that, 50 or 100 years in. So if you analogize that to the information age, it could very well be the case that we are victims of our own abundance here, in ways that we don't, and perhaps won't, understand for another 10 years or even decades. And that's a pretty alarming thought. 00:05:15 - Speaker 1: It's hard to know whether it will be on that same scale, but I certainly feel that the change in our daily lives as humans, and the changes to our society, from the information age broadly is huge and dramatic. I do think it's on par with the industrial revolution.
We don't know yet because we're not far enough into it, but that's my gut feel. Going from there: such big changes in the world will bring both positive and negative. There are obviously many positives to having access to essentially unlimited information all the time, but there are also many negatives, and I don't think we're going to figure that out in the next few years. I think it's going to be an ongoing process of society adapting and figuring out how to manage this and try to get the good parts and leave behind the bad bits.

00:06:03 - Speaker 2: So how do you start to grapple with that? What's good about the information age, and what's a struggle?

00:06:08 - Speaker 1: Yeah, well, what's good in terms of access to all the world's information at your fingertips is almost so obvious that it hardly needs stating. Wikipedia is amazing, Google is amazing, Twitter is amazing. You can get access to information that could be relevant to your career. Certainly if you're a person that does creative work professionally or knowledge work of some kind, which probably describes anyone listening to this podcast, having access to so much is extremely powerful for your career. And also in your life, right? Making decisions about important life things like parenthood, or adopting a pet, or taking care of an aging parent, or buying a home, or any health things. With the abundance of information, you can get personal experiences, you can get academic information. You can download books, you can watch videos on YouTube, you can, let's say not become an expert, but completely absorb yourself in almost everything humanity knows about any subject, at any time, from the comfort of your own home, even just on your phone, if you choose to do that.
One personal anecdote I'll give from my life about how information, and particularly broad global news, has an impact on your life: with the pandemic, which of course we're still in the midst of here in 2020, when that came along, I was alerted to it essentially by a lot of people being alarmed on Twitter. And that caused me to stop and think: let me look into this briefly and do my own research, which for me meant making a Muse board and pulling out a few relevant bits onto it so I could poke them around and try to make sense of it. And I'm really glad I did, because a few weeks later someone I live with had a close encounter where her entire school was shut down due to someone there testing positive, and then suddenly there were all these rules: you've got to quarantine, you've got to do this, you've got to do that. And I think that would have been really surprising and disorienting and upsetting if I hadn't already been studying exactly what was going on with this. Instead, I said: oh, OK, I actually have my head around this, I know what to do. And that information, not just information I went out to seek but information pushed to me through social media and news channels, turned out to be very helpful in making good decisions. Essentially, it was information that had an impact on my life. So how do you think about the information age? First of all, do you agree with me that it has such an outsized impact, potentially? And where do you see the benefits, to you or humanity at large, and likewise the downsides?

00:09:05 - Speaker 2: Certainly I think there is a big impact from the information age. I think that's hard to deny.
An interesting insight I got from a book called The Rise and Fall of American Growth, though, is that as important as the information age seems, it still hasn't fully impacted the whole real economy and our entire physical world. This book makes the point that if you look at the economy of developed nations, things like housing, healthcare, education, and caring for children and elderly people comprise much of the economy, and those have started to be affected by the information age around the edges, for sure (you have Zillow for real estate, for example), but the way that we build homes is basically the same as it was 40 years ago, except you have a Nest on the wall that connects to Google. So in that sense, I think there's potentially a lot more that could happen as computing and information pervade more of our real physical lives. Now, that said, certainly it's already been very impactful, and in my line of work I enjoy a lot of those benefits, but there's potentially a lot more to go. And the other thing I would say is, I don't think we understand the full impact and implications of all these new information flows. Again, I think the analogy to food and nutrition is very useful: it took us decades to begin to unravel all the weird stuff we were doing to our bodies and our societies with these new food pathways. I'm afraid we're going to go through the same experience with all these information flows.

00:10:27 - Speaker 1: Well, to bring it back down to the personal level, one question that I see a lot of people grapple with is how to have a healthy relationship with, as they usually say, technology or social media, but I think of it as the information fire hose: being connected to the whole of humanity and everything that it is thinking about and doing. Because it's a powerful feeling, this feeling of being informed or in the loop or connected.
And whatever that may mean for you: it might be being connected to your field, or to a smaller community that has a private group, but it could also be being connected to global news. What I often hear people talking about, and face myself a little bit, is: how do you, for example, spend less time on social media and more time reading books? For example, the YouTuber and podcaster CGP Grey did a pretty substantial, not quite a digital detox, but he basically got off social media and all this sort of thing for some period of time, with the justification of: I want to take more walks in the woods and read more books. And I hear a variation of that a lot. Maybe in Silicon Valley, people go on their 10-day meditation retreat where they don't speak and they don't bring technology with them. And you even see things like software specifically made for this. Even as far back as when I was in Y Combinator, which is now 13 years ago, one of the folks in our batch was RescueTime, which is still operating today. It's basically a plug-in for your computer that monitors how you're spending your time and helps direct you away from spending time on Reddit or whatever and towards things that you define as productive, however you define those things, which of course is especially confusing for a knowledge worker. I think because you have stuff like Slack and email being connected to your company's sphere, it can have some of the same quality as being connected to the news cycle or the global news, where there's always some new thing. I open up Notion, I open up Slack, I open up Figma, and I've got a little notification thing: someone left a comment, someone's done a new thing, someone's pushed a new thing to GitHub.
There's always some new thing to follow, and that becomes even more true as the company gets bigger and more mature. And that has some of the same quality, where you can easily lose a lot of time in your day to these more reactive things rather than the deep work or the bigger projects or prioritizing your own time. And I think when Apple came along with Screen Time, that was also an acknowledgement of that, and I see people doing tricks there. But I see all of that stuff as really mitigation strategies. It's our short-term hack for: OK, we've recognized that losing your whole day to being on Twitter, or spending too much time answering email or Slack versus focused projects, doesn't feel good; you don't feel like you've spent your time in a good way. But the techniques we have for managing that feel less like we've found a way to live in harmony with the nature of the world and our information flows, and more like we've just put little blocks in place here and there to try to manage it. So I'd love to figure out, and I'm still exploring this for myself, how to live in harmony with the information fire hose and get as much from it as I can for my work and my life, while at the same time avoiding the worst qualities of it: the addictive qualities, or the qualities that in retrospect leave me feeling like I didn't spend my time well.

00:13:59 - Speaker 2: Yeah, for sure. First of all, I think you're seeing this emerging intuition that information flows have different qualities. Also, we're seeing that there's an opportunity cost to spending your time with these different flows. Any time that you spend checking Reddit, for example, is time that you can't spend with your family or exercising or what have you.
But then in the last 5 or 10 years, this has all been amplified by the social networks and the feeds, and there I think the situation is getting more adversarial and intense, because you have these companies that are motivated, one way or another, to engage you with these feeds. At the same time, the individuals like ourselves who are on the other end of this don't have full-time people working to represent our side, that is, harmonious engagement with information flows. So it's not surprising to me that it feels like we're on our back foot, like we're playing defense, like we're trying to mitigate, like we're trying to put our finger in the dam. I think that's a function of the structural situation that we're in.

00:14:55 - Speaker 1: Again, that fits with the food metaphor, where it's easy to just put it all on the individual, and I think each of us can make healthy choices, but there's a pretty serious, let's call it infrastructural, approach to making you want this thing: whether it's fast food that's designed to push all your primal buttons for sweet, savory, and salty, or, on the information diet side of things, some very smart people working for the Facebooks and Twitters and Instagrams of the world to get you to come back, re-engage, and be involved in the feed. So as an individual, trying to use willpower to manage that is a challenge, for sure. One thing that opened my eyes on this quite a bit was reading the book Hooked, back when I was working for a company. At the time the book was circulating among the product people there, saying: hey, there are some interesting ideas here, for example using push notifications to help people re-engage with your app. For apps that are focused on active users and that sort of thing, that's a desirable thing. And I remember reading this book and just having a sinking feeling in my stomach.
This was 6 or 7 years ago. Having this sinking feeling in my stomach of: wait, we're engineering things to create these loops to bring you back, not according to what's most valuable to you or how you can get the most utility from whatever this product is, but just according to your natural desires of wanting to be connected, or the orientation response, or something like that. Reading that book, which was not intended to be a cautionary tale at all as far as I know, had a big impact on me. And in the next thing I did, starting the Ink & Switch research lab, one of our core ideas that we wanted to explore was: OK, as technology and social media and the internet take on this new quality that's going to be harder and harder to resist or hold at bay, how can makers and people who need to focus and get in the zone and do work manage that? How can we take back some of the way that computing is made and the way that software works to better serve the user's life goals or work goals, rather than companies' engagement goals, let's call them?

00:17:20 - Speaker 2: Yeah, this is an insight from the Ink & Switch lab that's really grown on me over time; I've come to appreciate how important it was. There's this world of, call it, consumer engagement-based computing, which is really flourishing: there's a huge amount of investment and lots of great services, some of which I spend a bunch of time on, like Twitter. And then there's the enterprise computing world, like B2B SaaS, which again is great; I spent a lot of my career working in that. There's natural economic funding for those two worlds, but we really needed to make a deliberate effort to support this world of computing for creators, for having better ideas. So I'm glad we ended up working on that together at the lab.
00:17:53 - Speaker 1: One of the small areas there that I became aware of through the research that we did was the prevalence of notifications. I mentioned earlier that even something like Notion or Figma tends to have some kind of notification thing. Even VS Code, which is a programming editor, has some little indicators that say: click here, there's something happening, something you need to know about. The red dots, man. The red dot. Sometimes they'll make it a blue dot if they're trying to be a little more chill, but yeah, the red dot badge. We were talking with Max recently about the no-spinners thing, and for me it's the same with no notifications: basically, respect the user's attention and focus, don't get in their way, and certainly don't try to distract them or lure them away with that inbox feeling of there's something I need to check. And it's tricky, because of course there are times you do need to proactively let the user know something, or maybe they want to know, but certainly I hope Muse will never have anything resembling a notification center.

00:18:52 - Speaker 2: Yeah, and speaking of notifications, this reminds me of another book in this genre, which is now a whole huge thing (there are a bunch of books written along this vein), but this one is Digital Minimalism. Part of his thesis is that consumers are being constantly bombarded now by notifications and re-engagement loops, and he argues that you can and should be more deliberate about how you engage with those platforms: do less of a notification-based model and be more selective about how you engage with these information streams.

00:19:18 - Speaker 1: Certainly, I put a lot of work into turning off almost all notifications on my devices. I have a couple of key things that go to my phone; I never want them from my desktop computer.
I have everything turned off on my iPad. For my phone, I do have a couple of things, messages and emails, that I do want to be notified about. I think of the phone as my communicator device; that's its purpose, so it makes sense that I would be notified there. But certainly I never want push notifications for something like breaking news or a Twitter mention or anything like that. I want to be more deliberate, and even email is something where I like the model of checking it in the morning and again in the afternoon, rather than something that's more interrupt-driven. The nice thing about having the phone be the notification box is that I can turn off the ringer and put it face down someplace whenever I'm going to explicitly go into a work session, and not be worried about, for example, being right in the middle of something and then suddenly my phone, tablet, and computer all chiming to get my attention for a single thing.

00:20:16 - Speaker 2: That perhaps seems like a small change, but I think managing my notifications has been really important, which mostly means turning them off, moving to a model where I choose if and when to engage with these different information streams. A similar one would be handling social media feeds. Again, the structural pattern there is that these sites really want you to go there and refresh all the time, which in some cases is hard to avoid because there are no APIs, but wherever possible I've moved to a model where these updates get batched, sent to me, and I can review them asynchronously. For The Washington Post and Hacker News, for example, I get emails once a day; I check them at some point, but I'm not constantly refreshing.
00:20:50 - Speaker 1: Now, so far, most of the techniques we've mentioned here are, let's say, general purpose: social media and news and messages and email are things essentially every human on the planet needs to manage. But then bringing it to the realm of the knowledge worker, the creative professional, or someone doing something that requires deep focus who wants to create, either as an individual or in groups, there I feel like it becomes less clear. There are obviously the same techniques individuals can use, like managing your notifications or measuring screen time, but I've also been struck by the number of techniques I've seen emerge for, let's say, more maker-oriented activities. For example, pen and paper sketchbooks remain not only as popular as they ever were, but I almost feel like more so, because it's a place where you can go and write down your ideas and have information technology (which pen and paper certainly is) at your disposal, with no risk of a notification popping up or being tempted to switch away and pop open Twitter or whatever. Then at the same time, in groups: certainly it's considered a good habit to turn off your ringer in a meeting, but I've also seen things like, OK, there's a basket in the middle of the table, everyone put your phone there, and we do this just to enforce the discipline that we'll all be present in the moment, scribbling on the whiteboard and having the group discussion, and not tempted to switch away.
And maybe I get a version of that as well with using a Kindle hardware device to do my book reading. There I like that I get a lot of the benefits of digital: obviously I can have a lot of books in this one small device, and I can highlight things, and highlights go into a database and so forth. But it cannot do anything other than read books, so I stay really focused. Those are some techniques that struck me as ways you can do more knowledge-work-type things in an information age while holding the fire hose at bay. Do you know of any techniques that you've seen, or that you use for yourself, of that nature?

00:22:52 - Speaker 2: The techniques that I use tend to have that same flavor of pre-commitment: you do something upfront such that you commit yourself and your knowledge work to doing the thing that you want to be doing, and you're not constantly having to make the decision of: should I be doing the work that I want to do, or should I be checking Twitter? A big one for me has actually been reading books on paper. For a long time, I read books on my phone or my iPad with the Kindle app, which is nice: you get a lot of flexibility, and obviously you can carry a zillion books. But I've always had the temptation of checking the other apps on the phone, or even just thinking about it and having to decide not to. It ends up getting wired very deeply into you, I think, if you use these devices a lot: that you can press the home button and see all these shiny icons and click on them and get stuff. So I've moved to a model where I read books on paper, and I even try to go sit physically away from my devices, put them somewhere else, so it becomes a session that's about the reading and the thinking.
00:23:40 - Speaker 1: I've even seen different social reactions from others when you're reading a paper book or using a pen and paper sketchbook, even with my Kindle hardware. I also read books on my phone with the Kindle app for a few years, and people just assumed that you were on Facebook. Which is funny. And they respond differently: they treat you almost with more deference, like, this person is thinking deeply. So in the lab, we had a track of research around attention and focus and how that connects to getting into a state of flow and doing work, particularly difficult maker work. And one of the insights we had there was that the benefit of the information banquet, of course, is being able to go out and search Google Scholar and find every paper that's ever been written on a subject, or go on YouTube, or go down a Wikipedia rabbit trail and end up with 50 open tabs. That's a really powerful way to collect information, but there's sort of no bottom to it. At some point, we found, people need to draw a line, or a fence, around a set of information and say: OK, I'm not going to go further than this. Now I want to take this set of sources, whether they're papers, websites, tweets, excerpts, or photos captured on a phone, whatever that material is, and treat it as a fixed set. Now I'm gonna look through that, read it, ponder on it, look for connections, look for patterns. And often that second phase, which I think we called the rumination phase in one of the papers, is best done a little disconnected, a little removed. In fact, the ideal thing would even be going offline: going on a long train ride or something where there's just no Wi-Fi, and being able to do that there.
But the thing is, you're either all on or all off. There's no middle ground. You're either in digital detox mode, your phone's in airplane mode and you're effectively not using your devices, or you're fully connected, on the internet. Basically, almost all software nowadays requires an internet connection to work properly, and so the idea of, for example, being able to look through a set of web pages without an internet connection isn't really very viable. So one of the things that research and those insights fed into Muse was this idea that you'd be able to ingest things into this private, safe, sanctuary-like space, know that anything you put in there is not dependent on an internet connection, and then be able to take that set of things and go someplace, whether or not you're connected (in fact, maybe it's even better if you're offline), and be able to go through all of it and think about it and draw your conclusions, and potentially use those conclusions in whatever work you're going to do.

00:26:25 - Speaker 2: Yeah, I think the character of the space where you're doing this deep thinking and rumination is really important, especially now, because the public wild internet is this incredibly frenetic, almost combative information space: likes, retweets, refreshes, ads, notifications, and the prospect that any of those could change or you could open a new app. It's almost like you're in fight-or-flight mode when you're out there on the wild internet, and I think it's just psychologically really hard to relax so that your mind can do the deep background processing it needs to accomplish this rumination. So I think crafting the space, whether that's physical, digital, or perhaps both, that supports that work is really important.

00:27:03 - Speaker 1: You mentioned the technique of reading paper books as one way to manage this.
We've also seen in the ethnographic studies that printing stuff out is a technique people use for this. It's the same kind of idea as the book: OK, I've got a couple of papers, I've got this one website, and I've got a couple of screenshots, and I'm gonna print all those things out and then go work in a paper workspace, just on a desk or something like that, with this fixed set of things. And of course it seems really funny to be printing out web pages and printing out screenshots, but in fact it is a good technique precisely because it is this fixed set: you're not tempted to go down the shiny-objects path and tumble off that edge, and you can really stay focused on what's in front of you.

00:27:49 - Speaker 2: Yeah, I think printing also relates to your physical space and posture. I know that some people print out stuff so they can read it at their desk, the same place they have their computer, but when I do this rumination-type work, I prefer to do it on a chair or a couch in a sort of semi-reclined posture. When I'm quickly gathering information on the go, that's the phone. When I'm ruminating, doing deep thinking, developing ideas in something like Muse, I like to be sitting down in a soft chair. And when I'm editing complex documents, I like to be sitting upright in an office chair at my computer. I've found that those different physical postures are actually really important to encouraging the right type of creative thinking.

00:28:24 - Speaker 1: The other thing you get with printouts is the ability to put them in, say, 3D space, physical space. Often when you go to an agency office, or certainly an academic's office, you see this: designers, for example, often pin up storyboards and annotated screenshots of an application that they're working on.
Obviously movie filmmaking folks tend to do storyboards on the walls. But there's something about not only being able to have that tactile experience and different posture, like you said, but also the ability to put the material around us in space and to have agency over it.

00:29:02 - Speaker 2: I think this is another really important psychological state: feeling like you have agency over your information and your work. It's really hard to invest your deepest creative energies when you feel like it could shift out from under you at any time, or someone could take it away, or it could get refreshed or something. The physical printout, or desktop-style apps that are very stable, give you that sense of agency over your work.

00:29:22 - Speaker 1: Yeah, I think that agency element is a big part of why I still like files. Mobile platforms have largely abstracted away files, and I think that's basically for the best for the common case of the consumer who has more limited needs in their computing life. But for the maker, for the creator who wants to build up their own personal archives over time, maybe I shouldn't speak for others, but I'll speak for myself: files have this very simple quality. They seem very tangible, even though of course they're digital. They have this timelessness. Maybe it's that they work across platforms, maybe it's that they've been around a long time, but I feel like there's more to it than that. They have a feel where I feel like: OK, if I've got the file, I've got it. I've really got it. Nothing's gonna take it from me. It's not gonna change out from underneath me. There's the ownership quality, I guess, but it really does feel like agency. If I want to have two copies of the file, I can.
If I want to delete the file and know that it's fully gone, I can also do that. Files, for all the challenges they maybe create in the computing world (people needing to manage their hard drives when they're not prepared for that), do have a lot of qualities that I think are really promising.

00:30:32 - Speaker 2: I think that's really important. I think this is an example of a case where we're going to find out, on the order of 10, 20, 30 years, that the approach we're taking to information management today has some big downsides. Personal data, creative data that's tied up with applications, especially applications that are networked on the internet and can only be loaded remotely, is very brittle. It works great now, you get this web app that you can load anywhere, but it's really unlikely that the data is going to be readable and accessible in 30 years, for example. Whereas if you had a text file from 30 years ago, there's a very good chance that you could have preserved it, and that would be in your agency to do so. So I think this is a trade-off that's only going to become apparent as we get a few decades of experience with these tools, and my bet is that files, and file-like data that's independent from applications, are going to be the right side of that bet.

00:31:17 - Speaker 1: This reminds me of something we talked about previously, which was the command line and that sort of thing: that particular paradigm and way of interacting with my computer fell out of being so central and important for me as the phone became a bigger part of my computing life. Plain text is a great example of something I've relied on for a very long time, and I do love plain text, but increasingly I find it's hard to embed a link, it's hard to put an emoji in there; I kind of want an image, and I kind of want bullet points.
Those are a hassle, and increasingly the capabilities it has are not quite enough, not quite keeping pace with the modern world. So it's always a trade-off: at some point I go, well, plain text just doesn't quite cut it for me anymore, but then in choosing to jump into some app that has all the modern sleek features, I also lose some of these qualities of timelessness, agency, and data ownership.

00:32:20 - Speaker 2: And perhaps we can do a whole podcast at some point about our thinking there and how we're trying to bridge those two worlds with Muse, but as it relates to the information age, just this idea of retaining your data is really important, and I think an unresolved question for our current set of network-based apps.

00:32:34 - Speaker 1: And maybe another piece of that is the concept of what your data even is, which is maybe a little bit like drawing the fence around things that I mentioned earlier. If I write a paper, that's obviously my work. But if I reference a paper someone else has written, if I download that paper, go through it in detail, and mark it up with a bunch of highlights, well, I tend to think of those highlights as being mine. And certainly my Kindle highlights I think of that way, even though they're annotations on someone else's work. I think there is this threshold you cross. Or a better way to put it is that I think an approach to living in this information age that could be helpful, particularly for knowledge workers, is to have an idea of what's mine and what's the rest of the world's. So when I'm just scrolling through a Twitter feed, that's the flow of the information world. It's not mine in particular. I don't even really want to keep it around.
My information systems would quickly get clogged if you tried to track every single thing that you read, which, by the way, is an idea that came up frequently when people were talking about this kind of Memex-derived line of research: why don't I just save everything I've ever seen? And it turns out that people have written systems to do that, and it quickly becomes unmanageable, not just in the sense that large data sets are unmanageable, but in the sense that it's not useful to me when I do a search and find what seems like a bunch of pretty irrelevant stuff, because 98% of what you see, you don't care about, it's not relevant, you just keep on scrolling. Having this moment where you decide to actively, well, people use the word curate, but that's almost a little too high-minded, it's really just to say: I'm gonna take this paper, read it, and make a few highlights, and that in a way makes it mine. Not the paper, but the reading of it, or the highlights of it, or my personal understanding of it. Now it's mine, and it should be in my information set, in my personal knowledge base, whatever that is. Having a better concept of that matters. And yeah, I think the nature of cloud and web applications, and most mobile applications work this way too, is that you don't really have a concept of that. I guess you have your account, but the reality is that what's in your account can shift very significantly depending on what the people running the service decide, right? 00:34:58 - Speaker 2: And so even things that you've seen with your own eyes can be taken away from you. A related situation here is how enterprise software is often managed. So again, I think a fundamental psychological thing for creative work is a sense of safety and privacy. It's a very vulnerable act to create something new, especially when it's risky or uncertain. I hypothesize that it's harder to do that.
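Circling back to the curation idea for a moment: the threshold Speaker 1 describes, where actively highlighting is what moves an item from the world's feed into your personal set, could be sketched roughly in code. This is a minimal sketch of the idea only; the class and method names are hypothetical and not taken from any real app:

```python
from dataclasses import dataclass, field

# Sketch of the "curation threshold": items from the firehose only
# enter your personal knowledge base once you actively annotate them.
# All names here are hypothetical illustrations.

@dataclass
class Item:
    title: str
    url: str
    highlights: list[str] = field(default_factory=list)

class PersonalKnowledgeBase:
    def __init__(self) -> None:
        self._mine: list[Item] = []

    def annotate(self, item: Item, highlight: str) -> None:
        """Highlighting is the act that makes an item 'mine'."""
        item.highlights.append(highlight)
        if item not in self._mine:
            self._mine.append(item)

    def search(self, term: str) -> list[Item]:
        # Search only over curated items, not everything ever seen,
        # so results stay small and relevant.
        term = term.lower()
        return [i for i in self._mine
                if term in i.title.lower()
                or any(term in h.lower() for h in i.highlights)]

feed = [Item("Paper on CRDTs", "https://example.com/crdt"),
        Item("Random hot take", "https://example.com/take")]

kb = PersonalKnowledgeBase()
kb.annotate(feed[0], "Merge without coordination")  # actively read and marked up
# feed[1] scrolls past untouched and is never stored

print([i.title for i in kb.search("crdt")])  # only the curated paper appears
```

The design choice mirrors the point in the discussion: nothing is stored passively, so a search never has to wade through the 98% you scrolled past.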
When you feel like someone may immediately own, or have control over, or be able to see that work, I think you need a private personal space, and I've always found that a little hard to do in classic enterprise software. So for example, with Google Docs, if you have a Google Docs org for your company and you go to make a new document that's only, quote unquote, visible by you, well, sort of, right? Anyone at Google can see the document, and really anyone at your company can see it. You know, the administrators who own the company account really own the document. It's like it's your name on it, but it's not really yours. And for some people, that's fine, they're able to do their creative work like that. But I think other people, either explicitly or implicitly, have a really hard time putting their full heart into their work when they know that it's not really theirs, and who can see it isn't really under their control. I like the classic academic model, I come back to this analogy a lot, where you're a professor and you're doing creative work, and you have a personal private office, and the stuff that you write there on pen and paper is yours by default, um, but it's still a very social thing. So you can elect to go out into the hallway and scribble some stuff on the whiteboard with colleagues, or invite someone to come into your office and look at a work in progress, or you might have a big department meeting. But all of those actions are explicitly bringing your work into the group or taking the group into your work. It's not that by default anything that you do is on a big, um, you know, whiteboard visible by everyone. 00:36:31 - Speaker 1: I think collaboration models are a huge area for potential innovation. We dipped just even half a toe into that in the research lab.
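The private-by-default, explicit-sharing model of the professor's office analogy above can be sketched as a small permission system. Everything here is a hypothetical assumption for illustration, not how Google Docs or any real product implements access control:

```python
from enum import Enum

# Sketch: documents are private to their author by default, and access
# widens only through explicit share actions, so ad hoc work groups,
# including outside contractors, are expressed per document rather
# than per org. All names are hypothetical.

class Access(Enum):
    READ = 1
    WRITE = 2

class Document:
    def __init__(self, title: str, author: str):
        self.title = title
        self.author = author
        # Private by default: only the author holds a grant.
        self._grants: dict[str, Access] = {author: Access.WRITE}

    def share(self, by: str, user: str, level: Access) -> None:
        """Sharing is an explicit act by someone with write access."""
        if self._grants.get(by) != Access.WRITE:
            raise PermissionError(f"{by} cannot share {self.title!r}")
        self._grants[user] = level

    def can_read(self, user: str) -> bool:
        return user in self._grants

doc = Document("Q3 design notes", author="alice@corp.example")
doc.share("alice@corp.example", "bob@corp.example", Access.WRITE)
doc.share("alice@corp.example", "pat@contractor.example", Access.READ)

print(doc.can_read("pat@contractor.example"))  # True: outside collaborator
print(doc.can_read("eve@corp.example"))        # False: org membership alone grants nothing
```

Because each grant is per document and per person, shifting work groups with outside collaborators fall out naturally, instead of the coarse in-the-org-or-not jumps described next.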
I know you worked on some projects that explored decentralized collaboration models, not just the technology, but also, you know, what it would actually look like to improve on the Google Docs model or the Figma model, which really hasn't changed much since Google Docs first introduced it 15 years ago or whatever. There are these very discrete jumps, which is, yeah, you're either in the org, you're in the company's Google Docs account, or you're in maybe your personal account. And then once you're there, there are maybe very specific work groups. I think we've seen that real-world collaboration is much fuzzier than that. GitHub does OK with this, I think, to some degree, with outside collaborators on repos. But it's rare that you have a document or a set of project files or a repo or whatever it is, in something like Figma, Google Docs, or Notion, that everyone in your company should have full read and write access to. And at the same time, you're often collaborating with people outside the company, right? So there's a project you're working on with these two people in the company, but then you've got this outside contractor who's doing a few things, and you get these sort of shifting work groups. I guess it's the enterprise model of control, but I think it's also just a very simplified version of what work groups and collaboration look like in practice. And that's an area I'd love to see much more innovation come out of the technology world. 00:38:05 - Speaker 2: Actually, speaking of other people and collaboration, this leads me to another idea on the information age. We said at the beginning that there's this incredible abundance of information out there, almost like everything is online, and I feel like in some ways that's true.
So you can see all of the, I don't know, weather data from the US online, presumably. But I think it's important to realize that in a lot of cases, the stuff that's online isn't, like, proportional to the stuff that's true or correct, for a lot of reasons. You know, in some cases, people are just confused, but in other cases, there are even perverse incentives for the wrong stuff to persist. Um, and I think something that becomes important in this inconsistent information age is deliberately and actively reading and processing the information, and making decisions about who you're going to follow on Twitter to get the better or right information, and things like that. 00:38:48 - Speaker 1: Certainly, that makes me think about a lot of the recent discussions around social media platforms as arbiters of truth. Perhaps once upon a time, or in the not-too-distant past, newspapers and journalists and other kinds of news outlets were in a way arbiters of truth. You have journalistic ethics, which are all about trying to represent things fairly and focus on finding truth and that sort of thing. And now, yeah, of course, the internet is this wild west where anyone can share an idea, and that's great in some ways, but it does mean that just because an idea is loud, or because it's repeated often, doesn't necessarily make it true or give it weight. That's another thing our society is trying to grapple with: how to reckon with what is truth, and what is a shared understanding of our reality, so that we can all make collective decisions together. One group I've enjoyed following on that front is the Center for Humane Technology, and they have some interesting manifesto-type stuff on their website, which I'll link to in the podcast, where they talk a lot about the interaction of technology.
These kinds of individual choices we make about our information diet and that sort of thing, and how we get the truth as individuals and as a society, and how we can hopefully change the technology, but also our own individual habits, to get the best results at both an individual and a societal level. 00:40:11 - Speaker 2: You mentioned this idea of shared truth. I'm afraid it might be even trickier than we realize. I'm reminded of the so-called Gell-Mann amnesia effect. This is when you're reading a newspaper article on a subject of your personal expertise, and you realize that the author doesn't really know what they're talking about, they're making a lot of mistakes and so on. But at the same time, you turn the newspaper page, you read the article on some other topic that you're not an expert on, and you say, oh, you know, it's the newspaper, they must know what they're talking about, right? So let's assume it's true. 00:40:37 - Speaker 1: Yeah, I've had that experience multiple times. It is uncanny how you can immediately switch back to feeling like the news source or the journalistic source is an authority once it's writing about something you're not knowledgeable about, right? 00:40:50 - Speaker 2: And so I would go back and say, before the modern information age, when we just had broadcast media like print papers and cable, we didn't really have a shared source of truth per se. We had a shared source of, like, statements that we just didn't have a better shared source to, you know, come to some agreement around, a sort of Schelling point of quasi-truth, and that's the best we got. But with the modern information age, and especially social media, all of the individual citizens have the ability to analyze the different media that's coming out, perhaps in their area of expertise, see the source details, and then go back on social media and say what they're seeing, which might be, you know, for example: I'm an expert on this topic, and this newspaper doesn't know what they're talking about. And this brings me back to a book called The Revolt of the Public, another one published by Stripe Press. The whole thesis there is that this is causing a big societal upheaval, because there's no shared source of truth, especially around, you know, the classic political topics. And the, uh, call them information elites, people like the newspaper editors, are being revealed in this world to have less accuracy and authority than they might have been perceived to have previously, and that's causing all sorts of downstream issues and complications. And the way that I would tie that back to Muse and this podcast, perhaps, is that this leaves individual citizens with a lot of responsibility for processing the information streams themselves and making their own decisions and conclusions. That's a big thing that we try to support in the app: you bring all this disparate information into your sanctuary, your information sanctuary, and then you have to make sense of it yourself. 00:42:11 - Speaker 1: Well, that strikes a chord with me, because one thing that I strive to do in my life is be a good citizen, be a good member of society, be a good member of my neighborhood and the communities that I'm part of. And a lot of that is, I guess, knowing stuff, and it's not just being informed in the sense of, I don't know, reading the newspaper or reading your community bulletin. It's knowing the stuff that matters and is relevant to the society you're living in. And so that means both subscribing to those feeds, whatever form they might come in, but then being able to pick out the parts that matter and then think through the parts that matter.
Yeah, so one thing that I try to do with my information tools is to have a space where I can pull in things that are relevant, so that I can be informed, so that I can think them through, so that I can understand the issues at hand for me and for my neighborhood and my society, and hopefully be able to be a good citizen. And there's too much for any one person to pay attention to or know, uh, in this information age, information banquet, fire hose, overload thing that we all face, but I think our information tools, if we choose them well and use them in the right way, can be a big help there. 00:43:23 - Speaker 2: Well, that seems like a good place to wrap it. If any of our listeners out there have feedback, feel free to reach out to us at @museapphq on Twitter or hello@museapp.com via email. We always love to hear your comments and ideas for future episodes, and in this case, we'd love to hear if you have a way of managing your personal information stream. 00:43:43 - Speaker 1: Yeah, I'd love to hear folks' techniques, the tools they use, approaches, tricks, hacks, and general principles for having a healthy information diet, particularly how that connects to your work as a creative professional, because we're really only at the start of this information age, and I think we can all help and support each other as we try to make our way into this brave new world.