Hello Interactors,

It's winter. So, as the Earth tilts away from the sun (up north), my writing tilts toward the brain. It's when I put on my behavioral geography glasses and try to see the world as a set of loops between bodies and places, perception and movement, constraint and choice. It's hard to do that right now without running into AI. And one thing that keeps nagging at me is how AI is usually described as this super-brain perched in the cloud, or in a machine nearby, thinking on our behalf.

That framing inherits an old habit of mind. Since Descartes, we've been tempted by the idea that the “real” mind sits apart from the messy body, steering it from some inner control room. Computer metaphors reinforced the same split by calling the CPU the “brain” of the machine. And now we're extending the metaphor again, with AI as the brain of the internet, hovering overhead, crunching data, issuing guidance. An intelligence box directing action at a distance is a tidy picture, but it risks making us miss what's actually doing the work. Let's dig into how the brain leverages the loops of people, places, and interfaces we all move through to extend its richness and reach.

GRADIENTS GUIDE WHILE BODIES BALANCE

Have you ever hiked or skied in snow or fog and seen the middle distance just in front of you disappear? It takes the world you thought you knew, like ridge lines, tree lines, and the comforting, predictable geometry of “just ahead,” and reduces it to panic-stricken near-field fragments. I've sensed once-familiar ski runs become suddenly unfamiliar not because they changed, but because they were no longer accessible to my brain.

In these moments, we're all forced to reckon, recalibrate, and (usually) slow down as our senses sharpen. We take note of the slope under our feet and the way the ground shifts.
We listen for clues our eyes can't see and notice which direction the wind is blowing, how the light is changing, and how our own heartbeat and breath change with each calculated risk. We know where we are, but the picture is fuzzy. Our memory only gets us so far. Everything around us becomes a multi-faceted relationship between our body making sense of it all and our brain updating its status moment by moment. The last thing a brain wants is to have its co-dependent limbs fail and risk falling.

That experience demonstrates how the world is coupled with us. In world-involving coupling, a living system survives through ongoing coordination with the affordances and constraints of its surroundings. In behavioral geography this frames spatial behavior as dynamic, reciprocal coordination between individuals and their environments, rather than just isolated internal cognition.

Places actively shape decisions through the physics of the world and all its constraints. Actions, in turn, reshape those surroundings in ongoing loops. This approach to cognition shifts focus from isolated mental maps to lived, constitutive engagements. It treats the world as a partner in our own competence.

Before brains, gradients existed. Living systems navigated heat, cold, salt, sugar, thirst, dark, and light to persist. The first cognitive problems were biophysical. Surviving in a world that constantly disrupted viability relied on basic mechanisms like membrane flows, chemical reactions, and feedback. These primordial loops coupled an organism to its environment directly, with no neural intermediaries yet. These were protozoa drifting toward nutrients or recoiling from toxins. It is in this raw attunement that world-involving coupling emerges.

In his 1932 book The Wisdom of the Body, physiologist Walter Cannon popularized the term “homeostasis” to describe the body's active pursuit of stability amidst environmental pressures.
Living systems, whether single-celled or more complex, maintain survival variables within narrow bands. Cells detect changes in these variables, which affect molecular states. Temperature, acidity, pressure, osmosis, and metabolic concentrations all influence reaction rates. Feedback loops alter cell-environment interactions through heat transfer, ion flux, water movement, and gas exchange, ultimately restoring the system to a viable band. Organisms are not passive vessels but actively engage with these detection loops, triggering adjustments like a wilting plant drawing water. Sensing and action are fused operations for persistence.

About 600 million years ago, cells in an ancient sea sensed electrical fields or chemical plumes on microbial mats. These pioneering cells formed diffuse nerve nets, evolving into jellyfish and anemones. Simple meshes firing to contract thin membranes in bell-shaped forms, they lacked a brain but coordinated propulsive pulses to keep the organism in bounds or sting prey. Within tens of millions of years, bilateral animals evolved. Flatworms like planaria emerged with nerve cords laddered along their undersides, thickening toward their tips. These proto-brains sped signal spread across their elongated forms.

As vertebrates appear, control becomes more layered. Circuits in the brainstem evolve to coordinate breathing, heart rate, posture, and basic orienting reflexes. The cerebellum emerges to sharpen timing and coordination. Competing actions, drives, and habits become sorted with the help of the basal ganglia. With mammals — and especially primates — the cortex expands. Perception and action become more flexible across situational contexts, and with it comes longer-horizon learning, social inference, and planning.

But at every milestone, bodies are still constrained and governed by gradients and fields related to gravity, friction, heat, oxygen, hydration, predators, prey, and terrain.
The cortex sits on top of these older loops, stretching them in time and recombining them in new ways. Even the most “abstract” human cognition still rides on the same foundation of reflexes and sensorimotor sampling. This is what keeps an organism in operable biochemical ranges while it propels itself through an environment that perpetually pushes and pulls.

BOXED BRAINS BEGET BIG BELIEFS

The field of physiology deepened this biochemical inquiry through the early 20th century. Physiologist and neurologist Ivan Pavlov revealed how sensory cues could chain to responses through neural rerouting, creating conditioned ‘Pavlovian' reflexes. Neurophysiologist Charles Sherrington coined the term “synapse” and described synapses as switches in these loops coupled to the world. Through this inquiry, the autonomic nervous system emerged as a kind of homeostatic controller. Sympathetic surges in the system were found to create fight-or-flight reactions, while the parasympathetic system kicks in to dial us back. This can be seen as a more complex version of the same push-pull of Cannon's original homeostasis.

By the mid-20th century, mathematician and philosopher Norbert Wiener, working closely with physiologists and engineers, compared the nervous system to a servomechanism — a self-correcting governor found in engines. He coined the term cybernetics in his 1948 book Cybernetics: Or Control and Communication in the Animal and the Machine, where he treated animals and machines as systems that regulate themselves through feedback. He and his collaborators argued this was a form of “purposeful behavior,” or goal-directed action — a kind of negative feedback loop that reduces the difference between a current state and a target state. These ideas hardened in engineering fields during wartime as they were used in weapon systems to predict and control trajectories by compensating for delay and uncertainty.
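That negative feedback loop, repeatedly shrinking the gap between a current state and a target state, can be sketched in a few lines of code. This is a hypothetical thermostat-style illustration; the function name, gain, and numbers are my own, not from the text:

```python
# Hypothetical illustration of a negative feedback loop: the controller
# holds no picture of the room; it just senses the error and acts to
# shrink it, step after step.

def negative_feedback(state, target, gain=0.5, steps=20):
    """Drive `state` toward `target` by correcting a fraction of the error."""
    history = [state]
    for _ in range(steps):
        error = target - state   # sense: how far off are we?
        state += gain * error    # act: nudge the state toward the target
        history.append(state)
    return history

trace = negative_feedback(state=15.0, target=20.0)
# The error shrinks geometrically; the state settles near the target.
```

With a gain of 0.5, each step halves the remaining error. Add delay or noise to the sensing step and you recreate the prediction-and-compensation problems the wartime engineers wrestled with.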
Cybernetics helped make the physiological regulation of Cannon's biological homeostasis structurally analogous to engineering control.

This mechanical metaphor sparked a long-standing debate dating back to Descartes' 17th-century mind-body split. Dualism posited an immaterial mind as a rule-following pilot controlling mechanical flesh. Alan Turing's 1936 paper had already formalized this possibility, presenting a universal “machine” capable of computing any algorithm. Two decades later, the Dartmouth summer workshop coined “artificial intelligence” and encouraged the idea of engineering minds as programs. Around the same time, Herbert Simon and Allen Newell built the early Logic Theorist program, which proved theorems and made intelligence seem like a boxed process of symbols and reasoning.

That lineage hasn't disappeared. It remains largely the default engineering posture of AI. Even when the machinery shifts from hand-coded rules to learned statistical patterns, we still talk as if intelligence lives inside a system. AI models are said to “form representations,” “build a world model,” “store knowledge,” “plan,” and “reason.” Contemporary training methods reward this language because they really do produce rich internal states that can be probed, steered, and reused across tasks.

Less discussed is the metaphysical shift from “the system has internal structure supporting performance” to “the system contains an inner arena where meaning emerges and is inspected before action.” Daniel Dennett, a philosopher who dismantled this intuition in theories of mind and consciousness, called this picture the “Cartesian theater.” He noticed that scientific explanations often subtly reintroduce a central place where “it all comes together” for an internal witness. Dennett argued this inner stage is a comforting fiction derived from Descartes' split between observer and world. Brain imaging reveals coordinated network activity, but not a literal inner ‘screen' presenting a unified world-model.
Many neuroscientists describe cognition as emerging from distributed, parallel, and recurrent processes, sometimes with large-scale integration. Dennett's point is not that internal processing is unreal, but that our language tempts us toward a surreal Cartesian picture in a central place we can't empirically reveal.

RESAMPLE, RESTABILIZE, AND RESHAPE

Neuroscience reveals that perception differs from a camera feeding a private theater. Our eyes rapidly sample information based on our actions, and the brain stabilizes perception during movement. Much visual processing is organized in the service of action, with partially distinct but interacting pathways supporting perceptual report and real-time visuomotor control. This suggests that the brain resembles a system for maintaining a relationship with the world through continuous sampling, correction, and skilled engagement, rather than a world-reconstruction engine.

James J. Gibson, the founder of ecological psychology, arrived at a similar conclusion earlier from behavioral and perceptual evidence. He argues that the world provides lawful patterns, regularities constrained by physics and geometry, that guide behavior because they remain stable across changing viewpoints. These patterns are not complete. Organisms make them available by moving: shifting gaze, turning the head, walking, or touching. Perception is an active process of sampling the world.

If perception is about staying attuned to lawful structures in the environment, the evolutionary consequence is that organisms don't just read the world; they also write it. As organisms became more complex and mobile, they gained the power to reshape the very patterns they depend on.
They start cutting paths (pathways worn into grass, game trails beaten into forests), building shelters (bird nests, termite mounds, human dwellings), altering flows of water and heat (beaver dams, termite mounds), and laying chemical trails (ants depositing pheromones).

Evolutionary biologists call this niche construction. Organisms modify their environments, which then feed back into selection pressures and development, creating a dynamic cycle where the environment becomes both a product of life and a force that shapes it further. As the world guides behavior, behavior reshapes the world, and the remade world trains bodies and brains into new skills and expectations. Over time, these modifications become external organs of coordination, storing information, reducing uncertainty, and channeling action.

A worn trail is navigational memory made durable, a nest or mound is a climate-control device that stabilizes temperature and airflow, and a pheromone path is a distributed signal that recruits other ants into collective action and direction. Complexity scientist David Krakauer calls this broader idea of “mind outsourced into engineered matter” exbodiment — where artifacts actively constrain and channel cognition. In this view, cognitive work is no longer confined to nervous tissue but accomplished through bodies working with worlds they've built.

Humans take this to an extreme. Clothing and shelter externalize thermoregulation, fire externalizes digestion and protection, tools externalize force and precision, drugs alter chemistry, writing and calendars externalize memory and timing, and institutions externalize norms and coordination. Much of what we call “human intelligence” is not only in our brains but also distributed across artifacts and practices that have accumulated over generations.

Cognitive anthropologist Edwin Hutchins made the point vivid by studying navigation. On a ship, “knowing where you are” is not privately derived nor sealed in a captain's skull.
It is a collective achievement, produced through a system of charts, maps, instruments, procedures, language, and coordinated roles — an entire ecology of cognition composed of tools and social organization. Here geography and cognition merge. Orientation is not just mental but enacted in relation to representations that are anchored and socially maintained in our material reality.

When I was at Microsoft, I followed the work of sociologist Lucy Suchman, who studied human-machine interaction. She arrived at a similar conclusion, criticizing the fantasy that action is simply the “execution of an internal plan.” Real action, she argues, is situated. It's responsive to unfolding circumstances — often improvisational — and is shaped by context in ways that cannot be fully specified in advance. In other words, if we look for intelligence as a prewritten script inside the head, we will miss how intelligence is often produced when enacted in a world that refuses to hold still.

Large language models, at first glance, seem to embody the “internal plan” fantasy: sealed systems containing competence in weights and parameters, ready for queries. However, they're closer to Suchman's warning. Trained on vast archives of human writing, LLMs learn statistical regularities in how text continues. When used, they produce a new continuation conditioned on prompts and context. Prompts aren't mere inputs. They're situated actions in human-computer interaction. They set frames, narrow affordances, cue roles, establish constraints, and often iterate in a back-and-forth that resembles Suchman's improvisation with a powerful partner who is also techy and textual.

Philosophers Andy Clark and David Chalmers, in their extended mind thesis, claim that, under certain conditions, external tools can become constitutive parts of cognition when they are reliably integrated into the organism's routines.
As we've learned, the boundary of cognition is not always the boundary of skin or skull; it's the boundary of a stable loop.

When the fog rolls in and visibility drops, the boundary of this loop becomes quickly apparent. “The mind's eye” is not that helpful… practically or metaphorically. If anything, the brain wants nothing more than for the body to widen contact with the world. It slows us down, sharpens listening, and increases tactile attention. It weighs different gradient thresholds to measure risk… it might even glance at an external sensing device that prompts some intervention or improvisation! We are not watching a movie in our head to get through the fog. We are trying to stay oriented in a world that refuses to be fully represented.

This is the reframing of intelligence — artificial and otherwise — I wish for. I'd like to see more talk of intelligence as less a coveted, individualistic thing hidden inside us and more an achievement of coordinated biophysical, social, and infrastructural loops across time. When we mistake a metaphor (“there's a theater in there”) for an ontology (“that's where cognition lives”), we get misled about minds and we get misled about AI. The alternative is not anti-technology. It's conceptual hygiene. Let's start asking where cognition actually happens, what it is made of, and how places — natural and built — participate in making it possible. You know, Interplace — the interaction of people and place.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit interplace.io
As educators, much of our understanding of what learning is has been dominated by behaviourist (check out the previous episode with Carol Sanford) and cognitivist ideas. But what if our decisions about how we design learning environments, and how we think about pedagogy and curriculum, had taken in the ecological insights of Eleanor Gibson and James Gibson and the branch of psychology known as ecological psychology? So few educators know that such a sub-discipline even exists!

Rather than individual students' brains neatly arranged in rows in intentionally informationally impoverished learning environments, computing information and constructing meaning about a meaningless world “out there,” we might have young people as organism-environment systems moving around and exploring informationally rich environments, fine-tuning their action-perception through multi-sensory relating to the ecologies they participate in! Sounds like a pretty different world!

This episode welcomes Miguel Segundo-Ortín and Vicente Raja, post-doctoral researchers at the MINT Lab and research fellows at the University of Murcia, Spain. Together, they are the authors of the book Ecological Psychology (Cambridge Elements, 2024): https://www.cambridge.org/core/books/ecological-psychology/9E79001702D4D8029E19D11CD330149F

Miguel Segundo-Ortín is a postdoctoral fellow in the Department of Philosophy and a member of the Minimal Intelligence Laboratory at the University of Murcia (Spain). His research is in the philosophy of the cognitive sciences, particularly embodied cognition, comparative cognition, and human agency. https://miguelsegundoortinphd.com/

Vicente Raja is a post-doctoral researcher at the MINT Lab, a research fellow at the University of Murcia (Spain), and external affiliate faculty of the Rotman Institute of Philosophy at Western University (Canada).
His research lies at the intersection of philosophy, cognitive science, neuroscience, and the history of the sciences of the mind, and has appeared in venues including Synthese, Minds and Machines, Physics of Life Reviews, Behavioral and Brain Sciences, Scientific Reports, Frontiers in Neuroscience, Philosophical Psychology, Adaptive Behavior, Cognitive Systems Research, and Theory and Psychology, among others. He has also edited/is editing a book for Routledge and special issues for the Journal of Consciousness Studies and Topics in Cognitive Science. https://www.um.es/mintlab/index.php/about/people/vicente-raja/

A talk given by Vicente, “In Memoriam: Eleanor Gibson”: https://youtu.be/QmV4Iz1jJs8?si=HAScaBYB2RcNKjTa

James J. Gibson: https://en.wikipedia.org/wiki/James_J._Gibson
Some things you simply have to… press… squeeze… touch… throw away… They have what psychologists call an "Aufforderungscharakter," in English an "affordance." James J. Gibson coined the term from the insight that some objects, somehow, suggest a particular action, conveyed through their design. Good design can create products that have such an invitational character. You could also say: products where it is intuitively clear how to handle them. But where exactly does this character reside: in the head, in the object, or even in the context? Niklas, Claus, and Marius discuss.
Join York University graduate Tobey Senderovich and me for a sprawling conversation about thought, cognition, psychology, behaviorism, wisdom, Heidegger, James J. Gibson's ecological psychology, John Vervaeke's machinery of knowing, Friedrich Kittler, social lockdown, media effects, and anything else we can throw in! Tobey's degrees in Neuroscience and Psychology make him the perfect friend to paint […]
My guest today is Andrew Hinton. Andrew has worked in the digital design field for two decades. He's one of the founders of the Information Architecture Institute and author of the book Understanding Context. In this conversation, you'll learn about the foundations of information architecture and why Andrew thinks of himself as a “radical information architect.”

Listen to the full conversation: https://theinformeddotlife.files.wordpress.com/2020/01/the-informed-life-episode-26-andrew-hinton.mp3

Show notes:
Andrew Hinton
Helix (database)
Understanding Context: Environment, Language, and Information Architecture by Andrew Hinton
The Information Architecture Institute
The Information Architecture Conference
The Informed Life Episode 21: Vanessa Foss on Event Planning
Shared Information Environment: let's unpack that, shall we? by Andrew Hinton
MUD
Interactive fiction (e.g. text adventure games)
World of Warcraft
O'Reilly Media
Peter Morville
Ecological psychology
James J. Gibson & Eleanor J. Gibson
Phlogiston
The Copernican Revolution
Cartesianism
Play-Doh
Contextual inquiry
Service design
Ecosystem Map
Bodystorming
Attention deficit hyperactivity disorder (ADHD)
The Informed Life Episode 15: Jeff Sussna on Cybernetics
Norbert Wiener
Claude Shannon
Due app
Apple's Reminders app
Steve Jobs: “Computers are like a bicycle for our minds”
The Mother of All Demos
Doug Engelbart

Read the full transcript

Jorge: So, Andrew, welcome to the show.

Andrew: Great. Hey, Jorge, thanks. Very glad to be here.

Jorge: So, you and I have been friends for a long time, but for folks who might not be familiar with you, would you please tell us about yourself?

Andrew: Yeah, sure. I'm Andrew Hinton. I have been in the design community, doing digital-oriented design things, for probably 25 years now, if we count things I was doing before I was being paid full time for it. But definitely 20 years solid now of this actually being my “job” job.
And information architecture is kind of my, I don't know, I consider that sort of my home turf. My origin story in all of this really is an information architecture story. The first community I really bonded with and got connected with was the early IA community, back in the late nineties. Since I started doing this, I've worked roughly half and half as an internal in large organizations and as an external consultant, or agency-style person. But even then, typically it's very large… Early on, it was manufacturing in the Southeast. That was most of our clients in the company I was with then. So, I've worked with a lot of different big companies and IT organizations and things like that. Nonprofits, for-profits.

But before I got to doing all this, I was more of a humanities person, and I still am, I think, at heart. I was a philosophy major, went to seminary briefly as a way to get a theology and philosophy graduate education, but then left because the seminary started getting weird. And then I went into literature and got a masters in that, and then ended up with a Master of Fine Arts in poetry. Mostly all of this was just a way to avoid the real world until I was about 30. But then I had to get a real job, and it turned out that this fixation I had on the internet was something that people would pay for more than poems. So, I got into that at that point.

But before then, I had really done odd jobs and things. A really early formative thing for me was in the early nineties, working in a doctor's office while I was in grad school, and all they had was a typewriter and a phone. And I had seen a demo at a Mac user group of something called Double Helix, I don't know if you remember that. It was later called Helix. But it was just a sort of drag-and-drop way to make a relational database.
And I was like, “Ah, we need a database for all of these clients, you know, all these patients, and their accounts and things.” So, they let me do that. And I had to teach it to other people who worked in the office and kind of figure out how the interface would work. And really it was sort of this crucible for figuring out how to make things on screens that people could use. And I sort of went from there. Yeah, that's it in a nutshell. I ended up writing a book, which just turned five a couple of days ago, called Understanding Context. And I've been involved in the IA community for a good while, was one of the co-starters of the erstwhile Information Architecture Institute. And I'm looking forward to hopefully being in New Orleans with my, the IA community, which I really think of like a family reunion for me, honestly.

Jorge: I recently had Vanessa Foss on the show; she is one of the people who runs the IA Conference. And that notion of that event as a family reunion came up. It definitely feels like that to me as well.

Andrew: Well, and it feels like the family is growing too, which is great. I used to worry that it was just a bunch of, you know, old hands getting together. But every year I see these new faces and voices who are stepping in and doing things, you know, and loving the community too. So, in spite of some of the ups and downs with organizations and whatnot, I'm very optimistic about the community's health.

Jorge: And the community is a community in part because of your work. Thank you for the efforts that you've put into the information architecture community over the years. You said that you had studied fine arts and poetry as a way to avoid the real world. And I will say this: you entered the real world with a bang. I remember myself entering the information architecture community and being influenced by your writings. I remember one piece in particular about the centrality of hyperlinks and how that was different about this work.
And then the book that you brought up, Understanding Context, which I consider an important book in the information architecture field. And I was hoping you would tell us a little bit more about that.

Andrew: Sure. A little bit of a qualifier: it's always hard to know where to start. But really, I think where it came from was very early on in my involvement in the IA field, in the web IA community I guess I should say, as it was starting to get going. I had already been online, doing things on the internet, since right out of college. And I was fascinated with how something like — if our listeners are not familiar, there were these things called, and there still are, these things called — MUDs, multi-user domains, or multi-user dungeons, because some of the earlier ones were really more like online D&D games, like text-based adventure games, but made in a way where multiple players can be in the same place at the same time. A precursor to things like World of Warcraft and stuff like that. But there were bunches of these, with different code bases. And it was just one example of where it felt like you were in a real place with people. Like, there were emotions involved, there were social interactions and meaning being created. I mean, it really mattered. It wasn't virtual in the sense of somehow non-corporeal. It was real. People had bodies and were interacting with one another in this environment. It was just mediated through language, but it felt different than just a conversation. Right? It felt like you were in a place because there were structures, and those structures felt like they affected those interactions, and they mattered.

So anyway, that and some other things just had me thinking for a long, long time about what it is that makes this feel this way and work this way. I didn't have this way of saying it then, but now: how is it that language can be environment in that way?
So that's always been in the back of my mind. One reason why information architecture was so fascinating to me is because, to me, it's never really been just a metaphor. It's really been a different way of making structures that people live together in. So, from that, I also was curious: okay, we were doing this thing called information architecture, but what is it that we're making? What do we mean by that? The architecture part is sort of clear, but the information part is not so clear. I just really wanted to go deep on understanding: what is my material if I'm an information architect? And if we're going to have this discipline, then we need some kind of grounding. Don't we need to really understand what it is we're doing, at a very fundamental level?

And I had this hunch that something about digital technology was changing the way human experience worked in terms of how context worked. Something as simple as accidentally hitting “reply all,” a button that looks exactly like the “reply” button except for some minor differences, has a wildly outsized effect compared to the actual action you're taking. As opposed to physical life, right? If you want to talk to 10,000 people instead of just one person, there's a massive physical difference in what you need to do. All you have to work with is just physical stuff, nothing technological. All the way up to the way Facebook was, clearly, even early on, basically almost phishing people to get at their information and to trick them into connecting to more people and inviting more people in ways that were manipulative. These were all really preoccupying to me. But also, I really cared about the IA community and what we were doing. And I thought, we need to understand what it is we're saying when we say this information architecture thing.
Because I was willing to let go of the label entirely if it turned out it really didn't mean anything different that was important. But I was just so convinced, and still am, that there is a thing that we need, and we need it to be good, that other phrases like interaction design or user experience and these other labels don't quite get at. So, all of those things came together. I thought, “Hey, I'm going to write this little book about context. I've got some thoughts. I'm going to put them down.” Somehow I talked O'Reilly into doing this with me, and I'm thankful to Peter Morville for helping me make that connection. And it just morphed.

And I'll end with this bit, which you've heard me say before: I think I wrote 100, 150 pages of just all of these ideas and thoughts I'd had from talks and writing, some things I'd already done. And then I got to the part where I was like, “Okay, well, I need to address what information is.” And I just didn't know. Having some [inaudible] academic background, I was like, “I need to make sure I'm really researching these things and being clear.” So, I asked around. I asked some of the people we know who teach in universities about information. And I could not get a straight answer. And I thought, well, that's interesting. Anyway, I ended up finding out about this whole other way of thinking about information that comes from ecological psychology, the work of James J. Gibson and his wife, and how that was influencing embodied cognition as a theoretical approach. And it just went from there, and it blew everything up, and I had to kind of start over. And then I ended up writing a much bigger book than I'd imagined. But that was the story behind why I even got into it. And what it's done is it's really rewired the way I think about the way people interact with their environment.
Even just me saying it that way is an artifact of that rewiring, right? I tend to talk about environments rather than just individual devices or things or websites and whatnot. Anyway, it just really changed the way I think about what I do, in ways I'm still coming to understand. Jorge: You said that part of your pursuit in writing the book was coming to an understanding of what the material is that we're working with when we're working on an information architecture. Can you speak more to what that material is, or where you've landed on that? Andrew: The material, it turns out, is material. And what I mean by that is... So, I use this analogy sometimes. You know how early scientists and alchemists would use this term — “phlogiston” — to talk about some substance or thing that they knew must be there because they could see its effects? They treated it as if it was a thing, even though it isn't really a thing. It was multiple things and processes and whatnot that we now have names for. To me, that's kind of how I was using the word “material” early on. It felt like we were using information in a material way, but I really couldn't explain what that meant. Now, after going through all this, I've come to realize, well, actually it is material, like it's stuff. It's our bodies — and our brains are part of our bodies, so I just say our bodies — interacting with the environment around us. And the environment around us has stuff. You know, it's objects and surfaces and all of that. And that's where information comes from. Everything else is really this linguistic construct that we've created in the human sphere of language and meaning. But all of that is ultimately grounded in our bodies and the way our bodies interact with the world, the physical world around us.
So, it's really more of a continuum for me now, between something like knocking on the table I'm sitting at right now — that's physical — to, if you go all the way to the other end of the spectrum, saying the word “table” and all the meanings that can have. But ultimately, the only reason those meanings can be there is because in some way, whether it's three or four or 10 degrees of separation, it's connected back to that physical experience. So, to me it's about the relationship between the creature, the human, interacting with that material world. And then when you add language to that, you get this really interesting material that can be very slippery and hard to pin down, because language is like that. But it's in that interplay between our bodies and our environments and the way we talk about our experience and communicate with one another. That's the material. Jorge: One of the challenges that many of us face — many of us who think of ourselves as information architects, primarily — is that the stuff that you're speaking about is stuff that we take for granted in our day-to-day lives. I think it was in your work that I read about this analogy with fish, the old trope about fish not being aware of the water they're swimming in, and somehow we are swimming in language. And because we are architecting structures of language that change how people perceive the environment they're operating in — that's a fairly abstract notion. And I'm wondering, for you who have worked, like you said, part of your career internally in large organizations and also as a consultant, how does one make this palatable or actionable to the folks who need this perspective as part of their work? Andrew: So, one of the real challenges of trying to write about this and teach this is that very thing. And part of the challenge is that there's a sort of Copernican shift you almost need to be able to make, to see it differently.
Meaning, you know, the Copernican revolution was basically a complete reframing, right? Where it's like, no, everything doesn't revolve around the earth; all these planets revolve around the sun. And it changed... it simplified astronomy, astrophysics. But it was a really hard shift to make because of people's ingrained idea of their experience, which was not that. And this is really coming from this undoing of Cartesian thinking around body-mind separation, which has been an increasing part of the conversation in the sciences over the last twenty-some years, I guess. It's so ingrained, especially in the West, to think about things in a certain way — you know, this idea that you could take your brain and put it into a vat and it'd still be you. But, well, no... Your brain only knows what it knows because of your body, and vice versa. To really get a lot of this, you have to get to that. But I'm realizing, too, that I can't sit people down and get them there every time. So, the way I've been teaching the workshop, for example, has been to start off by grounding people in a substance or an object and building up from there. Just getting them grounded in, “I have a body.” So I use Play-Doh in the workshop. Everybody gets their own Play-Doh, and you have to hold the Play-Doh and write down things about how your body's interacting with it. You put it back in the container, you cover it. You have to think about, right now, okay, what is your body experiencing with the Play-Doh? Well, you can't see it. You can't touch it. But you can see and touch the container. And these all sound like very simplistic, primitive questions. But that's the whole point: to ground people back in a simplistic, primitive way of thinking about how bodies and environments interact with one another.
Because ultimately what we're trying to get to is this: with all of the abstraction we've created around ourselves, all this information-sphere, all these other things, our bodies want those things to be as straightforward as squishing some Play-Doh in my hand or picking up a hammer and hitting a nail. And so that's kind of how I've been framing it: getting rid of some of the theory at first, and just grounding people in, “Okay, you've got a body, you're experiencing things,” and then gradually getting to the point where we're talking about, now, how does language function on top of that? In what ways does language complicate that simplicity? And then when we add digital, there's a whole other realm of complication or complexity. But building up to the abstract, I think, helps. What I'm ultimately trying to do is get at the root. That's why lately I've been calling myself a “radical information architect.” I felt silly that I didn't know this until just recently: the word “radical” comes from the word “root,” or they share the same root. Basically, radical's meaning really comes from this idea that you're changing something at the foundations, right? You're rewiring what's underneath. And I feel like that's what I'm trying to do with this. So if I can get people out of abstract-head and out of information-head, the way that we typically think of information, and start with how we understand our physical environment and interact with it, the same way lizards and spiders interact with their environments (the principles are basically the same), and then build from there, that's how I can teach this. Now, if I'm working with colleagues on the fly in the middle of a project, or I'm talking to my colleagues here at work, I don't go into all that. I mean, I've been here six months and I have yet to go into all that. But what I do is try to slip in this grounding and kind of draw on the whiteboard.
Here's a person. Here's some things they're interacting with. Here's how that might change over time. I'm always trying to locate it in, like, you've got a human in an environment doing stuff. Because ultimately that's what user experience brings to the table. There's a human being, and we have to make all this other stuff we're making compatible with that human being. We're creating new parts of their environment that we want them to use and understand, right? So, in my day-to-day, that's just how I started, and it's been helpful that we have methodologies like contextual inquiry and service design, where you have some tools, with things like ecosystem mapping and whatnot, that, if you really put some pressure on them to make sure you're staying very grounded with a human, with a body, doing a thing, really help to get people there with you. Things like bodystorming can help too, but it's hard to get engineers and others to do bodystorming. So that's not as common for me. Jorge: You said that this line of thinking has changed how you work, and I feel like we're getting a little bit into that with this conversation, in your interactions with your team. I'm wondering how, if at all, it has also influenced the way that you manage your own information and get things done? Andrew: Yeah. I kind of inadvertently learned a lot about myself and the way that I interact with my own environment. You know, another thing about me is, it wasn't until I was in grad school that I was diagnosed with ADHD. And that's something that plagued... I was going to say plagued. That's maybe not the best way to put it.
But until I knew what was going on — and you'll hear this from a lot of people who were diagnosed as adults — I really had a lot of challenges that got to the core of who I am as a person, because I really couldn't trust myself to behave in ways that I wanted to behave in the world, to get things done, to understand things, to keep track of things, and all of that. And in fact, just writing a book was one of the scariest things I could even consider. That's one of the reasons I felt like I had to do it, because it's just very, very hard to marshal... People talk about a train of thought. And for years I've made this joke that I've really got sort of a Beijing-full of rickshaws of thought. Like, I don't have a train, just these things bouncing around. Understanding this more has helped me see so much better that I have to design the environment around me so that it can supplement and help me. Right? And you mentioned earlier, before we started recording, how in one of your podcasts you talked to Jeff Sussna about cybernetics. And honestly, that's a topic I wish I had gone deeper into when I was writing the book, although then I would've had to make it even longer. So, I don't know. But Norbert Wiener and the people who were working in cybernetics were really getting at something that the more abstracted Shannon information-theory world wasn't quite getting at, which was this very grounded idea of how our bodies and our environments are symbiotic. But it's taken a long time for mainstream thinking to catch up with that. But now I have no shame in creating crutches for myself. So, for example, I use an app called Due on my phone.
And good Lord, if this developer ever stops making or updating this, I'm going to be in terrible shape, because it works just the way I need it to, which is: any little thing where I go, “That feels like something I'm not going to remember,” I put it in there, and then it bugs me until I do something about it. Right? It allows me to snooze it in small increments of time or big increments of time. For me, it's much more successful than Apple's Reminders, for example, which are too calm for me. And in fact, I think there's a thing where, if a reminder comes up more than a certain number of times, it goes away. I've yet to figure out what the rules are around Reminders; I find them untrustworthy. Whereas Due, I have this love-hate relationship with, because it just nags the hell out of me. But it does it because I told it to. So that's for things in the moment, or things I need to remember at a particular time. One thing that I really love about Reminders on the iPhone is the location-based thing. So, I take the train to work, which in Atlanta is sort of like winning the lottery. And there are things that I know I need to do as soon as I get to the station near my house, but I know I'm going to forget them, because — and it turns out there's research about this, and I write about it in the book — changing physical environment affects what you're able to remember. The thoughts that you're having in one room can just disappear when you go to the next room, and things like that. And it's not some magical problem. The problem is that your body, your whole cognitive system, is using your environment as a partner in the way it makes thoughts and thinks through things and remembers things. So, anyway, I can set it so that it's going to remind me of something as soon as I get to the train station. And sure enough, every damn time, it turns out I have forgotten the thing.
And I'm thankful that I had told my phone to remind me when I got to the train station. But that's helpful because it's variable: I never know exactly when I'm going to get there when I set the reminder. So, there are things like that I have to do, and I'm in it, and it still feels like I'm treading water most of the time, but at least I'm not drowning. And I have other things I do too, but that's just an example. Other things are routines: where I put my keys, where I put my wallet, where I put my badge for work. I have to do it exactly the same way every day. And if I don't, or if I do this thing where — and again, this is an embodied cognition thing that I understand better now because of that way of thinking — if for some reason I have some other object in my hand on the way out the door — and this is probably true for a lot of people — like if I've got a letter I'm trying to mail or something, especially if it's in any way the shape of another object that I always carry, I'll often forget the thing that I'm always carrying, because my body is just sort of halfway paying attention and just assumes, “Oh, I've got everything.” Right? So, there are leaks that can happen, but I'm always trying to plug them. Jorge: One of the benefits that we've gained from having these digital things in our lives is that they can augment that relationship between the person and the environment in ways that give us perhaps a little more control, and that make it possible for us to suit it better to our needs. Would that be fair? Andrew: Yeah, absolutely. And it's that augmentation that the thinking around cybernetics, the original work, was very much about, right? Which was: let's not create this whole separate alien thing. This is all environment, it's all human. So, let's use it to supplement.
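The reminder setup described here, an alert that nags until it's acknowledged plus one that fires on arriving at a place, can be sketched as a small data model. This is a hypothetical illustration, not how Due or Apple Reminders actually work internally: the `Reminder` class, the coordinates, and the 150-meter radius are all made up, and a real app would lean on the platform's geofencing APIs rather than computing distances itself.

```python
import math
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Reminder:
    """A toy model of a nagging reminder: it stays active until marked
    done, and may carry an optional location trigger (a geofence)."""
    text: str
    done: bool = False
    # Optional geofence: (latitude, longitude, radius in meters)
    geofence: Optional[Tuple[float, float, float]] = None

    def fires_at(self, lat: float, lon: float) -> bool:
        """True if this reminder should fire with the user at (lat, lon)."""
        if self.done or self.geofence is None:
            return False
        glat, glon, radius = self.geofence
        # Equirectangular approximation of ground distance; accurate
        # enough at geofence scale (tens to hundreds of meters).
        x = math.radians(lon - glon) * math.cos(math.radians((lat + glat) / 2))
        y = math.radians(lat - glat)
        return math.hypot(x, y) * 6_371_000 <= radius

# Usage: a reminder pinned to a made-up train-station location.
station = Reminder("Mail the letter", geofence=(33.7490, -84.3880, 150))
print(station.fires_at(33.7490, -84.3880))  # inside the fence: True
print(station.fires_at(33.8000, -84.3880))  # several km away: False
```

The point of the sketch is the design choice Andrew describes: the trigger is tied to the environment (a place) rather than to a clock, so it works no matter when the variable train ride actually ends.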
And even in AI circles, that's one of the big — I don't want to say tension points, but one of the big dichotomies — I guess it's the school of thought of, let's replace certain kinds of human labor or activities or behaviors using AI, versus, let's use it to supplement humans, and humans supplement it, in this more symbiotic kind of relationship. So, I think that theme, that augmentation theme... I mean, even Steve Jobs, right? The bicycle for the mind. And I think he borrowed a lot of this thinking from... sorry, his name is escaping me, but the mother of all demos, you know? Jorge: Doug Engelbart? Andrew: Yeah. So, this idea of augmenting human needs with technology has a long tradition. But the devil's in the details, right, as to how we arrange those things. How do we make them really good for us, rather than things that somehow turn against us, or that other people can turn against us? Jorge: Well, thank you. I want to thank you for your work and for helping us be more aware of those relationships. And thank you for being on the show. Where can folks follow up with you? Andrew: I'm online; andrewhinton.com is just sort of my home site, and it's got the ways to ping me. There's a contact form, all that stuff, and links to my book, which people are still apparently buying, because I still get a little check every now and then. So, I'm super happy to know that. I'm starting to feel self-conscious about some of the content because it's getting a little old, but I feel that hopefully the principles are still stable. So, contextbook.com is the home site for that. You can find me either one of those ways. Jorge: Fantastic, and I will include both of those in the show notes. Thank you so much for being on the show, Andrew. Andrew: Thanks, Jorge. This was great. It was great to catch up, and an honor to be on your show.