In this episode of Crazy Wisdom, Stewart Alsop talks with Will Bickford about the future of human intelligence, the exocortex, and the role of software as an extension of our minds. Will shares his thinking on brain-computer interfaces, PHEXT (a plain text protocol for structured data), and how high-dimensional formats could help us reframe the way we collaborate and think. They explore the abstraction layers of code and consciousness, and why Will believes that better tools for thought are not just about productivity, but about expanding the boundaries of what it means to be human. You can connect with Will on Twitter at @wbic16 or check out the links mentioned by Will on GitHub. Check out this GPT we trained on the conversation!

Timestamps
00:00 – Introduction to the concept of the exocortex and how current tools like plain text editors and version control systems serve as early forms of cognitive extension.
05:00 – Discussion on brain-computer interfaces (BCIs), emphasizing non-invasive software interfaces as powerful tools for augmenting human cognition.
10:00 – Introduction to PHEXT, a plain text format designed to embed high-dimensional structure into simple syntax, facilitating interoperability between software systems.
15:00 – Exploration of software abstraction as a means of compressing vast domains of meaning into manageable forms, enhancing understanding rather than adding complexity.
20:00 – Conversation about the enduring power of text as an interface, highlighting its composability, hackability, and alignment with human symbolic processing.
25:00 – Examination of collaborative intelligence and the idea that intelligence emerges from distributed systems involving people, software, and shared ideas.
30:00 – Discussion on the importance of designing better communication protocols, like PHEXT, to create systems that align with human thought processes and enhance cognitive capabilities.
35:00 – Reflection on the broader implications of these technologies for the future of human intelligence and the potential for expanding the boundaries of human cognition.

Key Insights
The exocortex is already here, just not evenly distributed. Will frames the exocortex not as a distant sci-fi future, but as something emerging right now in the form of external software systems that augment our thinking. He suggests that tools like plain text editors, command-line interfaces, and version control systems are early prototypes of this distributed cognitive architecture—ways we already extend our minds beyond the biological brain.

Brain-computer interfaces don't need to be invasive to be powerful. Rather than focusing on neural implants, Will emphasizes software interfaces as the true terrain of BCIs. The bridge between brain and computer can be as simple—and profound—as the protocols we use to interact with machines. What matters is not tapping into neurons directly, but creating systems that think with us, where interface becomes cognition.

PHEXT is a way to compress meaning while remaining readable. At the heart of Will's work is PHEXT, a plain text format that embeds high-dimensional structure into simple syntax. It's designed to let software interoperate through shared, human-readable representations of structured data—stripping away unnecessary complexity while still allowing for rich expressiveness. It's not just a format, but a philosophy of communication between systems and people.

Software abstraction is about compression, not complexity. Will pushes back against the idea that abstraction means obfuscation. Instead, he sees abstraction as a way to compress vast domains of meaning into manageable forms. Good abstractions reveal rather than conceal—they help you see more with less. In this view, the challenge is not just to build new software, but to compress new layers of insight into form.

Text is still the most powerful interface we have. Despite decades of graphical interfaces, Will argues that plain text remains the highest-bandwidth cognitive tool. Text allows for versioning, diffing, grepping—it plugs directly into the brain's symbolic machinery. It's composable, hackable, and lends itself naturally to abstraction. Rather than moving away from text, the future might involve making text higher-dimensional and more semantically rich.

The future of thinking is collaborative, not just computational. One recurring theme is that intelligence doesn't emerge in isolation—it's distributed. Will sees the exocortex as something inherently social: a space where people, software, and ideas co-think. This means building interfaces not just for solo users, but for networked groups of minds working through shared representations.

Designing better protocols is designing better minds. Will's vision is protocol-first. He sees the structure of communication—between apps, between people, between thoughts—as the foundation of intelligence itself. By designing protocols like PHEXT that align with how we actually think, we can build software that doesn't just respond to us, but participates in our thought processes.
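PHEXT itself is only sketched at a high level in these notes. As a rough illustration of the general idea (embedding extra structural dimensions in ordinary plain text), here is a toy parser in Python. The delimiter bytes, dimension names, and coordinate scheme below are assumptions invented for this example, not the published PHEXT specification; see Will's GitHub links for the real format.

# Illustrative sketch only: a toy "hierarchical plain text" reader in the spirit
# of what PHEXT describes. The delimiter bytes and dimension names are made up
# for this example and are NOT the actual PHEXT spec.

# Each delimiter closes one level of the hierarchy; newline is the lowest level.
DELIMITERS = {
    "\x17": "scroll",   # assumed: ends a scroll (a group of lines)
    "\x18": "section",  # assumed: ends a section (a group of scrolls)
    "\x19": "chapter",  # assumed: ends a chapter (a group of sections)
}

def parse(text: str):
    """Split text into (chapter, section, scroll, line) coordinates.

    Returns a dict mapping a 4-tuple of 1-based coordinates to each line's text,
    so ordinary editors still see plain text while software sees structure.
    """
    coords = {"chapter": 1, "section": 1, "scroll": 1, "line": 1}
    result, buffer = {}, []

    def flush():
        key = (coords["chapter"], coords["section"], coords["scroll"], coords["line"])
        result[key] = "".join(buffer)
        buffer.clear()

    order = ["chapter", "section", "scroll", "line"]
    for ch in text:
        if ch == "\n":
            flush()
            coords["line"] += 1
        elif ch in DELIMITERS:
            flush()
            level = DELIMITERS[ch]
            coords[level] += 1
            # reset every dimension below the level that just advanced
            for lower in order[order.index(level) + 1:]:
                coords[lower] = 1
        else:
            buffer.append(ch)
    if buffer:
        flush()
    return result

# Example: two scrolls in one section, then a new section.
doc = "alpha\nbeta\x17gamma\x18delta"
for coord, line in parse(doc).items():
    print(coord, repr(line))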
The Age of Transitions and Uncle 3-28-2025

AOT #454
We are now witnessing a complete upgrade of the US government and our very lives. The War on Terror foundation and blueprint is there, and being used to go even further. There can be no true freedom from here on out, but there sure will be a lot more things labeled as “liberty.” Will you “stand with” this abomination? Topics include: Bookshop dot org affiliate link and buying books, techno eugenics, recent guests, digital propaganda stream, Tech moguls in charge of our government, total control, flaunting themselves in public view, evolution of Military Industrial Complex, Big Tech takeover from Aerospace, national security, defense, DIB, DIU, Eric Schmidt, cyber security, AI, future warfare, implied crony capitalism, F35, new dominant narrative centered on government waste, Deep State boogeyman, pure Libertarian Capitalism, Art of the Deal, new American Imperialism, American exceptionalism, cyber terror attacks, new rulers will exploit attacks, Musk and his super geniuses, desire for apocalyptic events by super rich, DOGE, facade of efficiency, AI directed government, ownership of data, privacy, Peter Thiel brand of Libertarianism, freedom and liberty, crushing individual rights, War on Terror foundations, PNAC, safety and security, 9/11, enemy combatants, Bush as the bad guy, daughter Cheney, DC corruption, MAGA narrative, 1776ing, Orwell, Erik Prince criticism of War on Terror, anti-establishment aspect of new dominant propaganda narrative, the new mainstream media, enemy networks capitulation, future of cable TV, social media is the new mainstream, interactive media, algorithms, systems designed to promote favored content, former iconoclastic critics of establishment now are mouthpieces of ruling regime, information war, bullhorns, brazen attitude at street protests, international deals, social media feeds meant to distract and confuse, Freedom Cities vs 15 Minute Cities, constant updates, lowest common denominator continues to work, guise of conservatism, I Stand With movement

UTP #362
The great Super Bowl wager controversy is finally worked out on this episode of the broadcast. Luckily, Sidekick and Creative Accidents both brought clips from the original show as evidence that settled the dispute. Three beer reviews were also thrown into the mix for good measure. What a show. Topics include: drinks, 66 Tropical Golden Ale, street Coors Light, no more Skype, Uncle's TikTok, recycling at Cash4Cans, Human Computer, acting not completely insane, David Lynch beer pour, Canada Boy, Jimmy James, Super Bowl bet controversy worked out, audio clip of previous show, Creative Accidents clip, bet on the bet, tariffs, Squirrel Man, no Cooley, collecting addresses, eggs smuggled over the border, Tigers vs Dodgers, Mac & Jack beer, bringing the ruckus, Dust Bowl Brewery craft beer, unclethepodcast TikTok, will make it every Friday we can, VHS tapes

FRANZ MAIN HUB: https://theageoftransitions.com/
PATREON: https://www.patreon.com/aaronfranz
UNCLE: https://unclethepodcast.com/ OR https://theageoftransitions.com/category/uncle-the-podcast/
FRANZ and UNCLE Merch: https://theageoftransitions.com/category/support-the-podcasts/
Email Chuck or PayPal: blindjfkresearcher@gmail.com
BE THE EFFECT
Listen/Chat on the Site: https://ochelli.com/listen-live/
TuneIn: http://tun.in/sfxkx
APPLE: https://music.apple.com/us/station/ochelli-com/ra.1461174708
Ochelli Link Tree: https://linktr.ee/chuckochelli
https://www.instagram.com/apolotary/
https://www.instagram.com/theesciendist/
https://www.instagram.com/frame.watcher/

Gear that I use:
Cameras I use for the Podcast:
• Canon EOS R6 (Kit) https://amzn.to/30xxXOy
• DJI Osmo Pocket 3 https://amzn.to/2OZQZrh
Lenses I use to capture the thumbnails:
• Canon RF28mm F2.8 STM https://amzn.to/4dE9WsF
• Samyang 85mm f1.4 RF mount https://amzn.to/3lhoGnc
Mics that I use to record the Podcast:
• RODE Microphones Wireless GO II https://amzn.to/3E5nwE7
• RODE Interview Go https://amzn.to/4efdqlT
Table Top Gear:
• Microphone Stand https://amzn.to/4dQtZUR
• Microphone Holder https://amzn.to/4gpViXV
In this episode, we explore whether genuine friendships can exist between humans and artificial intelligence. We discuss AI's ability to analyze our emotions and provide tailored emotional support. However, we also question the implications of relying on AI for companionship. What does this mean for our real-world relationships? Join us as we delve into the evolving dynamics of friendship in the digital age. Can You Really Be Friends with an AI? The Human-Computer Bond | UnlabellingEffect S7EP14 A huge shoutout to Salvaged Clothing for sponsoring this episode! Their sustainable and stylish apparel is perfect for anyone looking to make a fashion statement while supporting a greener planet—check them out at getsalvaged.com! *10% discount code for our UE audience. If you are new to our podcast, the Unlabelling Effect, a very warm welcome! We are Melody, Rita and Vivian, three distinctive women daringly diving into some taboos to normalise some uncomfortable yet vital conversations. By disrupting the fear of opening up about one's vulnerability and insecurities, we hope to develop a self-empowering, growing mindset collectively.
In this episode of AI For Pharma Growth, Dr Andree Bates is joined by Hon Weng Chong, M.D., the founder of Cortical Labs. They discuss the inception of Cortical Labs and how the company is responsible for the creation of DishBrain, a powerful computational neuron system using human brain cells. Eventually, this cellular neuron system became so adaptive and responsive that it learned how to play the classic video game Pong. Beginning with the use of mouse brain cells, the development and implementation of DishBrain expanded into a complex system that responds to feedback from its surroundings. This episode delves into how Hon undertook this journey, got inspiration and made DishBrain the highly advanced system it is now. They also discuss potential widespread effects of using a brain-computer interface and what this could mean for science and technology. In this episode you will learn:
• How DishBrain uses neurons to influence its adaptive learning process
• Hon's background and what inspired him to explore and build this system
• The future potential of this incredible feat of cellular technology, and brain-computer interfaces generally
• How certain drugs and medications affected the development and learning processes of DishBrain
• The neurons used in this system and their level of activity and consciousness
To learn more about DishBrain, head to: https://corticallabs.com/ Click to connect with Dr. Andree Bates for more information in this episode: https://eularis.com/ AI For Pharma Growth is the podcast from pioneering Pharma Artificial Intelligence entrepreneur Dr. Andree Bates, created to help organisations understand how the use of AI based technologies can easily save them time and grow their brands and business. This show blends deep experience in the sector with demystifying AI for all pharma people, from start up biotech right through to Big Pharma. In this podcast Dr Andree will teach you the tried and true secrets to building a pharma company using AI that anyone can use, on any budget. As the author of many peer-reviewed journal articles and having addressed over 500 industry conferences across the globe, Dr Andree Bates uses her obsession with all things AI and futuretech to help you navigate through the sometimes confusing but magical world of AI powered tools to grow pharma businesses. This podcast features many experts who have developed powerful AI powered tools that are the secret behind some time saving and supercharged revenue generating business results. Those who share their stories and expertise show how AI can be applied to sales, marketing, production, social media, psychology, customer insights and so much more. Resources: Dr. Andree Bates LinkedIn | Facebook | Twitter
A lunar mountain has been named for a woman who charted a path for other women and people of color to pursue careers at NASA.
This week, Professor Mar Hicks of UVA's School of Data Science and Margot Lee Shetterly, author of "Hidden Figures," join host Ken Ono to discuss the remarkable women whose contributions to STEM have been forgotten--from biologists to code-breakers to the "human computers" whose computations helped America win the Space Race. That's why they're announcing the launch of the Human Computer Project Census--an effort to document the names and stories of NASA's human computers. And they're looking for students and faculty to help. Participants will collect oral and recorded history, search through archives, and review primary and secondary sources to recover the names and biographies of the women who worked at NASA from 1935 to 1980. The deadline to apply for this paid internship is Monday, March 11th, 2024. The internship starts in early June. The application can be found here.
In this episode, Laurent Frat interviews Professor Justine Cassell. Justine is world-renowned in the field of artificial intelligence and human-computer interactions. She is the SCS Dean's Professor at Carnegie Mellon University, where she leads several initiatives on technology-enhanced learning, personal assistants, and human-computer interaction. She is currently on leave from CMU to hold the founding international chair at the Paris Institute on Interdisciplinary Research in AI, holds the position of director of research at Inria Paris, and serves as a member of the governmental committee on the future of digital in France. She holds dual PhDs in psychology and linguistics and has received numerous awards and honors for her groundbreaking work on embodied conversational agents, virtual humans, and social robotics. http://www.justinecassell.com/
Sydney Skybetter sits down with performance historian Doug Eacho to discuss emergent technologies of the last century. They explore how sci-fi has influenced our expectations for the future of performance, and why these expectations almost never become reality. About Doug: Douglas Eacho is a performance historian and theater director. His current research project concerns artists and engineers who have sought to automate theatrical processes, from French surrealists, to lighting board designers, to contemporary makers of algorithmic dance. He explores the increasing integration between automaticity and theatricality on and off the stage, and the shifting ways technology performs amidst conditions of economic stagnation. Another research thread concerns the long history of statistical representation as it has intersected with naturalist and aleatory performance; this work informed his article “Serial Nostalgia: Rimini Protokoll's 100% City and the Numbers We No Longer Are” (Theatre Research International, 2018). His reviews have been published in Theatre Survey, Theatre Journal, and Theatre and Performance Design. Before his doctoral studies, his found-text performances were presented around New York City, including at the Invisible Dog, Judson Memorial Church, and the Center for Performance Research. “His Fear of a Lonely Planet,” a piece about tourism, was devised with Stanford University students in 2018. Read the transcript, and find more resources in our archive: https://www.are.na/choreographicinterfaces/dwr-ep-8-overclocking-of-the-human-computer Like, subscribe, and review here: https://podcasts.apple.com/us/podcast/dances-with-robots/id1715669152 What We Discuss with Doug (Timestamps): 0:00:04: Intro to Doug Eacho and his expertise in performance history & technology 0:00:29: Discussion on the portrayal of AI in the media 0:01:52: Exploring the intersection of performance & technology throughout history 0:04:17: Defining performance and technology in relation to art 0:07:38: Analyzing the connection between acting and the portrayal of robots 0:09:15: Discussion on the sexist trope in Blade Runner 0:11:05: Mention of a deleted Salome dance scene in Blade Runner 0:13:08: Interpretation of science fiction as art about the present 0:14:12: Conclusion on the nature of science fiction as predictions of the future 0:16:33: Balancing the future and the present as a parent 0:18:05: The misconception of AI appearing out of nowhere 0:19:40: The history of technology and overestimating its capabilities 0:22:23: The impact of technology on labor and jobs 0:23:55: The narrative of creating better worlds through technology 0:25:23: The promises of digital technology in a capitalist society 0:26:12: Artists creating critical work on technology and inequality 0:27:39: Algorithmic dance and the work of Liz Santoro and Pierre Gaudar 0:30:53: Overclocking the human computer 0:33:37: Illusion of power in using AI systems 0:34:06: Show credits & thanks The Dances with Robots Team Host: Sydney Skybetter Co-Host & Executive Producer: Ariane Michaud Archivist and Web Designer: Kate Gow Podcasting Consultant: Megan Hall Accessibility Consultant: Laurel Lawson Music: Kamala Sankaram Audio Production Consultant: Jim Moses Assistant Editor: Andrew Zukoski Student Associate: Rishika Kartik About CRCI The Conference for Research on Choreographic Interfaces (CRCI) explores the braid of choreography, computation and surveillance through an interdisciplinary lens. 
Find out more at www.choreographicinterfaces.org Brown University's Department of Theatre Arts & Performance Studies' Conference for Research on Choreographic Interfaces thanks the Marshall Woods Lectureships Foundation of Fine Arts, the Brown Arts Institute, and the Alfred P. Sloan Foundation for their generous support of this project. The Brown Arts Institute and the Department of Theatre Arts and Performance Studies are part of the Perelman Arts District.
Episode:
Title: Human Computer
Show: ohmTown Daily - Science, Technology, & Society
Season: 2
Episode: 314
Date: 11/10/2023
Time: 6PM ET Sun-Sat, 8PM ET M-F
@ohmTown
Episode Article Election: https://www.ohmtown.com/elections/
Past Episode Elections: https://www.ohmtown.com/past-elections/
Live on Twitch: https://www.twitch.tv/ohmtown
Youtube: https://www.youtube.com/ohmtown
Podcast: https://podcasts.apple.com/us/podcast/ohmtown/id1609446592
Patreon: https://www.patreon.com/ohmTown
Discord: https://discord.gg/vgUxz3X

Articles Discussed:
[0:00] Introductions...
The First Cogitator, your Sacrifice will be remembered. https://www.ohmtown.com/groups/mobble/f/d/a-swiss-startup-is-building-computers-using-human-neurons-to-cut-emissions/
Can you hum a few maneuvers? https://www.ohmtown.com/groups/ohmtowndaily/f/d/slo-mo-reveals-how-hummingbirds-maneuver-through-gaps/
Potato Farmers are getting Mashed. https://www.ohmtown.com/groups/mobble/f/d/potato-farmers-face-serious-crop-losses-as-harvest-hit/
Scientists asked if they could, and then did it. https://www.ohmtown.com/groups/mobble/f/d/scientists-create-monkey-chimera-with-fluorescent-eyes-in-breakthrough/
Apparently they didn't see this coming. https://www.ohmtown.com/groups/stockmarketeers/f/d/contaminated-eye-drops-from-walmart-cvs-and-target-linked-to-factory-in-india-where-workers-reportedly-went-barefoot-and-faked-safety-checks/
AI Headphones cancel selective noise. https://www.ohmtown.com/groups/technologytoday/f/d/new-ai-noise-canceling-headphone-technology-lets-wearers-pick-which-sounds-they-hear/
Amazon Gear moving from Android. https://www.ohmtown.com/groups/wanted/f/d/amazon-fire-tablets-and-other-gear-will-reportedly-switch-away-from-android/
Rediscovering the Egg Laying Mammal. https://www.ohmtown.com/groups/technologytoday/f/d/found-at-last-bizarre-egg-laying-mammal-finally-rediscovered-after-60-years/
Hungry for Profits. https://www.ohmtown.com/groups/ohmtowndaily/f/d/an-east-coast-restaurant-chain-has-been-ordered-to-pay-11-4-million-back-to-more-than-1300-employees-over-claims-it-paid-staff-below-minimum-wage/
The Eyes Have It. https://www.ohmtown.com/groups/technologytoday/f/d/peering-into-the-future-eye-scans-unveil-parkinsons-disease-markers-7-years-early/

Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/ohmtown
We are programmed to perceive reality in certain ways. That programming is false. Thanks for listening.
In this episode, we celebrate Olivia Poole, the Ojibwe Canadian inventor who patented the Jolly Jumper. We also learn about Shakuntala Devi, who earned a Guinness World Record when she multiplied two 13-digit numbers in her head and gave the correct answer!
This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time. ■Reference Source https://www.ted.com/talks/shyam_sankar_the_rise_of_human_computer_cooperation ■Post on this topic (You can get FREE learning materials!) https://englist.me/225-academic-words-reference-from-shyam-sankar-the-rise-of-human-computer-cooperation-ted-talk/ ■Youtube Video https://youtu.be/MmIcmwGjjGs (All Words) https://youtu.be/23i8qXE1pns (Advanced Words) https://youtu.be/I3lw7hl_x8g (Quick Look) ■Top Page for Further Materials https://englist.me/ ■SNS (Please follow!)
In this episode: We interview Hanna Celina, who is the co-founder and Chief Product Officer at Kinnu. The startup is developing an app for curious people to learn effectively with the goal of changing the way billions of people learn. She holds a PhD in Computer Science specializing in human-computer interaction design, has a background at FutureLearn & Google, and is based in London, UK. We talk about: how user research and feedback help prioritize, how to scale content production, the methodology used to measure learning, how octopuses inspired Kinnu, and lots more. Outtakes: We want to do to the world what Duolingo has done to language learning. And we're trying to become the Duolingo for literally any other topic. One of the challenges and success metrics that we're going after is just the stickiness. Do people come back? How many people come back? How often do they come back? And with what amounts of passion? How effective is really Duolingo for learning, we don't know, because we can't measure it. In this podcast by EdTech Garage, we interview startup founders & players supporting early-stage EdTech startups from across Europe. The EdTech Garage is a community for early-stage European EdTech startups where founders can scale up faster through the founder community, matchmaking and specialized resources. In this podcast series we interview startup founders & players supporting early-stage EdTech startups from across Europe. We get straight to the point in 20+ minutes and publish new episodes roughly every month. You can find the transcript from each podcast on the site below. Join the community on edtechgarage.org Follow us on Linkedin (www.linkedin.com/company/edtechgarage/) Host: Frank Albert Coates
This lunar mountain towers above the landscape, carved by craters near the Moon's South Pole. And now, the mountain has a new name.
Taylor tells Josie about the extraordinary life of Indian mathematical prodigy Shakuntala Devi. Plus: SeaPods, the aquatic homes of the future, and the prototype unveiling that went a little left.
Dorothy Vaughan was an American mathematician who worked for NASA and became the first African American manager at the National Advisory Committee for Aeronautics (NACA), which later became part of NASA. More information on Vaughan: https://www.nasa.gov/content/dorothy-vaughan-biography Connect with me on LinkedIn, especially if you're looking to do an interview: https://www.linkedin.com/in/shikirah-johnson Instagram: @network.shi Twitter: https://twitter.com/beinsightfulco?t=LY0eyqgNqBA43K8L8wEjOQ&s=09 Let's talk on Clubhouse: https://www.clubhouse.com/@justshij Subscribe to YouTube channel: https://youtube.com/channel/UC4jLUXRSUfBTbGMMgObHLvg Follow & rate the podcast: https://anchor.fm/beinsightfulintechnology --- Send in a voice message: https://podcasters.spotify.com/pod/show/beinsightfulintechnology/message Support this podcast: https://podcasters.spotify.com/pod/show/beinsightfulintechnology/support
Due to payment issues, this is a delayed podcast from January 30th. We apologize for the delay. Thank you. On today's Winds of Change, Mary and Lauretta are joined first by Michael Heinlein, the author of the book Glorifying Christ, the new biography of Cardinal Francis George, OMI. Then, The Human Computer, aka Professor Michael New of Catholic University of America, breaks down the new Knights of Columbus/Marist Poll on abortion. Mr. Heinlein's book: https://www.target.com/p/glorifying-christ-by-michael-r-heinlein-paperback/-/A-86661695?ref=tgt_adv_XS000000&AFID=msn_df&fndsrc=tgtao&DFA=71700000012790841&CPNG=PLA_Entertainment%2BShopping%7CEntertainment_Ecomm_Hardlines&adgroup=SC_Entertainment&LID=700000001230728pgs&LNM=PRODUCT_GROUP&network=o&device=c&location=&targetid=pla-4585100929127865:aud-805670403&ds_rl=1246978&ds_rl=1248099&ref=tgt_adv_XS000000&AFID=bing_pla_df&CPNG=PLA_Entertainment%2BShopping%7CEntertainment_Ecomm_Hardlines&adgroup=SC_Entertainment&LID=700000001230728pbs&network=s&device=c&querystring=glorifying%20christ%20amazon&msclkid=3bbea38b54b710119c2ac05340cff024&gclid=3bbea38b54b710119c2ac05340cff024&gclsrc=3p.ds and Michael New's latest column: https://www.nationalreview.com/corner/a-new-knights-of-columbus-poll-shows-strong-support-for-pro-life-policies/ https://ststanschurch.org https://eppc.org/
In this week's episode Austin and Ulysses discuss the ICONIQUE SCIENTISTS Dorothy Vaughan, Katherine Johnson, and Mary Jackson and their astronomical contributions to NASA. For the DEFENCE, we chat about space exploration and the new James Webb Telescope Photos. We round out the episode with SCIENTIFIC NOMENCLATURE by calculating trajectories for more astronomy inspired drag names, QUEER RANTING about queer-baiting/queer media, and lastly discussing the submitted QUEERY asking how to look into joining a research lab. Join us back here in the third week of August to discuss Henrietta Lacks along with virology!

Submit your queeries here! - https://forms.gle/m9JSKcHnHFgkomwz7

-----------------------
We want to acknowledge that we are researching, recording, and editing this podcast on the traditional territories of the people of the Treaty 7 region in Southern Alberta, which includes the Blackfoot Confederacy (comprising the Siksika, Piikani, and Kainai First Nations), as well as the Tsuut'ina First Nation, and the Stoney Nakoda (including the Chiniki, Bearspaw, and Wesley First Nations). The city of Calgary is also home to the Métis Nation of Alberta, Region 3. As biologists, we rely on knowledge pertaining to the land to understand energy flow - Indigenous folks realized this long before modern biology. It is therefore critical to acknowledge the traditional knowledge, methods, and caretaking of Indigenous peoples towards the land. We encourage the support and exchange of resources designed to help reduce systemic inequities in academia and society in general. One community member in Calgary has shown by example how to support and uplift important voices - Taylor McNallie - send her some love! Taylor's Linktree

Please also check out Freddie for more information on HIV and HIV prevention. Freddie website - https://www.gofreddie.com/

-----------------------
Follow our socials, download, and rate us as ⭐⭐⭐⭐⭐ on whatever streaming service you use to listen! It helps us grow the pod and allows us to spend more time on generating content :)

Podcast socials
Instagram - @queernqueery
Twitter - @QueerNQueery
Tik Tok - @queernqueery

Austin Ashbaugh
Instagram - @austinjashbaugh
Twitter - @aus10ash

Ulysses Shivji
Instagram - @u_shivy
Twitter - @EcologyUms

-----------------------
Logo done by Chase Ashbaugh
Email - chaseashbaughmedia@gmail.com

Editing done by Carter Potts and Austin Ashbaugh
Email - pottsdrummer@gmail.com
Twitter - @Carter_Potts_97

Title background music by - Alexi Action - I Wanna Feel

*Please note that this is a personal podcast and does not necessarily reflect the views or opinions of the university, lab groups, or employers that Austin and Ulysses are associated with. All opinions are our own unless otherwise explicitly stated.
Shakuntala Devi, popularly known as the 'human computer', was India's most famous mental calculator. She had the exceptional talent of solving complex calculations in her mind within a few seconds. She amazed millions of people globally through her calculation skills. Shakuntala Devi was born on November 4, 1929. She was recognized as a child prodigy. At the age of three, her parents noticed her fascination for calculation as she played with cards. When she was five, she could compute cube roots in her mind. After that, she soon began to deliver public performances and appeared on several radio shows as well. For more details: Website: https://www.gcpawards.com/ Follow us on: Youtube: https://www.youtube.com/channel/UCgkHIzGHYq2o_wu7ELIYMoA Facebook: https://www.facebook.com/GCPAwards Twitter: https://twitter.com/gcpawards Instagram: https://www.instagram.com/gcpawards Linkedin: https://www.linkedin.com/company/gcpawards
Rene Pardo is an inventor, creator and entrepreneur. A visionary driven to innovate world firsts that impact and help people, he has spent his entire life supporting advanced R&D and building tech companies around exceptional, bright people. He is most recognized for co-inventing and developing the world's first electronic spreadsheet (1969), as well as Forward Referencing, the cornerstone functionality of modern spreadsheets and reactive programming. The spreadsheet was licensed to and used by the budgeting department of AT&T and every telephone company in North America throughout the early 1970s, including Bell Canada, as well as by General Motors in Warren, Michigan. A pioneer in human-computer interaction and interfaces, he is also recognized for his advancements and major contributions to education and multimedia. Norman McLaren (National Film Board in Montreal, Quebec) and Ken Knowlton (Bell Labs in Murray Hill, New Jersey) became Pardo's role models. Inspired by the environment of the NFB and Bell Labs, which offered complete creative freedom to outstanding individuals, Rene continues to strive to replicate in business settings the same environments fostering creativity and innovation. Attached Things/Links Mentioned: The Internet Startups Entrepreneurship Systems in Business Destiny Rhythm of Life Lessons from Others (Rene's mother, Norman McLaren, and more) How to Stay Young Forever The Difficulty of Patent Lawsuits Rene's Journey in Business Lanpar Brainwave Technologies Design Technology Web3 NFTs Creativity + Innovation Rene's Website Rene's LinkedIn Brainwave Website Origins of Lanpar 4 My Treasures #DreamBIG #ImproveYourselfImpactLives To see what we are up to and what is going on around the Dream BIG & Co community you can follow us on the following platforms: Website Instagram Twitter Facebook Vimeo Snapchat Tik Tok LinkedIn Medium Our Spotify Playlist Our Amazon Alexa Skill - Dream BIG Daily Our Amazon Alexa Skill - Dream BIG Quotes
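Forward referencing, mentioned above as a cornerstone of modern spreadsheets and reactive programming, is the idea that a formula may reference a cell defined later, with the engine working out the evaluation order from the dependency graph. The toy Python sketch below illustrates only that general idea; it is not Pardo's original LANPAR design, and the cell layout is invented for the example.

# Toy sketch of forward referencing: A1 refers to cells defined after it,
# and the engine resolves evaluation order from dependencies.
# Not LANPAR; an illustrative model only.

# Each cell is (formula over dependency values, list of dependency names).
cells = {
    "A1": (lambda v: v["B1"] + v["C1"], ["B1", "C1"]),  # forward reference to B1, C1
    "B1": (lambda v: 2 * v["C1"], ["C1"]),
    "C1": (lambda v: 10, []),
}

def evaluate(cells):
    values, visiting = {}, set()

    def value_of(name):
        if name in values:
            return values[name]
        if name in visiting:
            raise ValueError(f"circular reference involving {name}")
        visiting.add(name)
        formula, deps = cells[name]
        values[name] = formula({d: value_of(d) for d in deps})
        visiting.discard(name)
        return values[name]

    return {name: value_of(name) for name in cells}

print(evaluate(cells))  # {'A1': 30, 'B1': 20, 'C1': 10}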
A transmission from one of my own aspects, Merkaba, who is a Technician in Atlantis, who wants to share a bit about caring for your human computer and how you should fight back against the viruses others try to install on us daily.

For early access and exclusive content, join the membership site: dimensionalshifter.com/memberships
Join the Discord server to connect with other starseeds and Galactics: dimensionalshifter.com/discord
Follow me on Instagram for daily energy updates and other transmissions: @dimensionalshifter
If you loved hearing Merkaba, don't forget to leave a 5-star review!

Many thanks to the musicians for this episode!
Theme: Time Rider by | e s c p | https://escp-music.bandcamp.com
Music promoted by https://www.free-stock-music.com
Attribution 4.0 International (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
Transmission: Signal To Noise by Scott Buckley | https://soundcloud.com/scottbuckley
Music promoted by https://www.free-stock-music.com
Attribution 4.0 International (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/

Dimensional Shifter
Reddit
Discord
YouTube (CC)
Kia Ora, Nihao and hello! Welcome to the Chiwi Journal Podcast. My guest today is Xuan. Xuan's footprints stretch from a small village in China to the capital city of Beijing, then to DC and NYC, and now she has settled in LDN. She studied English lit, Linguistics and Philosophy at university, trained herself in programming, and currently works as a software engineer. We discussed: How does exchanging poetry every day help us find beauty and truth during a lockdown? The encounter with divided brain theory, and how we have become enslaved to an account of things dominated by the brain's left hemisphere. The journey from studying human language and philosophy to computer language and machine learning. Will getting rid of emotion help us to deliver information in a more accurate way, like computer code? The relationship between time and language. Have you ever used your rational mind to structure your intuitional decision? Are humans doomed to be misunderstood due to the nature of language and communication? How does writing online generate the serendipity for people to meet IRL? How have women in tech evolved over the years? What can an individual do to help get more girls into technology? How to prepare for a job interview? A story of finding a tech job without any working experience during lockdown. Books and links mentioned in this episode: A Paradise of Poems Infinite Loop Podcast with Tom Morgan — All you Need is Love? (EP.74) The Matter With Things by Iain McGilchrist Responsive Web Design Neuralink San Pedro Retreat Self Authoring Flowism Yi Jing (易经) Dao De Jing (道德经)
This week's guest is Dr. Chris Weichel, CTO of GitPod. GitPod is an all-in-one online development environment with a complete version of VSCode in the browser. We also spend some time talking about Chris' human-computer interaction research.
Shakuntala Devi was born with a gift for numbers—she saw math and equations as a second language, and could complete impossibly large calculations in her head, often faster than the world's fastest computers. Her father worked in the circus and put her on display as a source of income. But Shakuntala never wanted to be a human computer, and her story is the story of a fight for individuality and purpose in the face of insurmountable pressure to be who she was "supposed" to be. — A Broad is a woman who lives by her own rules. Broads You Should Know is the podcast about the Broads who helped shape our world! 3 ways you can help support the podcast: Write a review on iTunes. Share your favorite episode on social. Tell a friend! — THE HOSTS Broads You Should Know is hosted by Sara Gorsky, Chloe Skye, Jupiter Stone, & Sam Eggers. IG: @BroadsYouShouldKnow Email: BroadsYouShouldKnow@gmail.com — Sara Gorsky IG: @SaraGorsky Web master / site design: www.BroadsYouShouldKnow.com — Chloe Skye Blog: www.chloejadeskye.com Podcasts: Skye and Stone do Television, where Chloe Skye & Jupiter Stone review TV shows. Thus far, they've covered Euphoria, Watchmen, and Lovecraft Country — Jupiter Stone TikTok: @JupiterFStone www.JupiterFStone.com Podcast: Modern Eyes with Skye and Stone, where Chloe Skye & Jupiter Stone look at films from 10 or more years ago through Modern Eyes — Broads You Should Know is produced by Chloe Skye & Jupiter Stone and edited by Chloe Skye
On 18th June, some of the interesting events that took place were: 1658: Mughal Emperor Aurangzeb captured Agra Fort this day. 1946: Goa Liberation Movement was started to free it from Portuguese rule. 1978: Karakoram Highway, the highest paved international road in the world connecting China and Pakistan, started this day. 1980: Shakuntala Devi, the mathematician also known as a Human Computer, solved a 13-digit number multiplication question in 28 seconds. https://chimesradio.com http://onelink.to/8uzr4g https://www.facebook.com/chimesradio/ https://www.instagram.com/vrchimesradio/ See omnystudio.com/listener for privacy information.
Welcome to Black Brilliance On The B-Side Podcast! In this episode, we’re focused on bringing our audience Black Brilliance from the Past. Dorothy Vaughan was a Brilliant Teacher, Scientist, Mathematician and Human Computer whom you must know more about. “I changed what I could, and what I couldn’t, I endured.” – Dorothy Vaughan After listening...
We know, it's officially Sagittarius season, but this is our podcast, and we make the rules. Today, we're covering notable Scorpios. Shelby discusses the Human Computer, Shakuntala Devi. Amy tells us about Olympic boxer, Nicola Adams. Intro Song: What I Do by Kristy Krüger © ℗ Just Like Freddy Music ASCAP Instagram: herstorythepodcast
Every weekday, listeners explore the trials, tragedies, and triumphs of groundbreaking women throughout history who have dramatically shaped the world around us. In each 5 minute episode, we’ll dive into the story behind one woman listeners may or may not know -- but definitely should. These diverse women from across space and time are grouped into easily accessible and engaging monthly themes like Leading Ladies, Activists, STEMinists, Hometown Heroes, and many more. Encyclopedia Womannica is hosted by WMN co-founder and award-winning journalist Jenny Kaplan. The bite-sized episodes pack painstakingly researched content into fun, entertaining, and addictive daily adventures.

Encyclopedia Womannica was created by Liz Kaplan and Jenny Kaplan, executive produced by Jenny Kaplan, and produced by Liz Smith, Cinthia Pimentel, Grace Lynch, and Maddy Foley. Special thanks to Shira Atkins, Edie Allard, Luisa Garbowit, and Carmen Borca-Carrillo.

We are offering free ad space on Wonder Media Network shows to organizations working towards social justice. For more information, please email Jenny at jenny@wondermedianetwork.com.

Follow Wonder Media Network:
Website
Instagram
Twitter
Topic Summary: - Chloroform - Turkey, the country - California Prop 22 - REEF Technology - Human / Computer interface, via veins - Sports leagues on augmented reality - Big Tech Snags Hollywood Talent to Pursue Enhanced Reality - Massachusetts "Right to Repair Law" Passed - Google Pixel 5 - Starlink performance - The 5 Biggest Cloud Computing Trends In 2021
Shakuntala Devi / the Jeju Haenyeo
This episode brings the varied and interesting stories of a mathematician, astrologer, and published author from India, as well as the community of women from Jeju Island in Korea who go deep sea diving to gather food without oxygen masks (and some of them are in their 80s!). Listen to find out more about these women, past and present, from the far side of the world to us. You can find our notes and the delicious Sea Breeze cocktail recipe here: https://docs.google.com/document/d/13SMLkgij_jxD7rJ9YwfVDgoJ9aD8LkAFBrW7rMihL9g/edit?usp=sharing Stay safe and well, love Tray and Priscila
The post E1121: Mark Suster on investing in human-computer interfaces & sustainability, what he looks for in founders, SPACs impact on early-stage investing & more appeared first on This Week In Startups.
Inspired by the DragonCon American Sci-Fi Classics Track's "Hostess Fruit Pie Theater" (don't ask), we're looking at those bizarre Hostess comic book ads of the 70s and 80s. It was a time when every Marvel and DC superhero (as well as all the Archie characters, Looney Tunes characters, Casper, Richie Rich, and even Sad Sack) were completely obsessed with Hostess products. Aquaman could stop a bank robbery with a fruit pie. So we're counting down our favorite goofy Hostess ads in another Flopcast Top 4 1/2 List! We also play a quick round of Trapped in the House featuring the classic Hostess characters. (Do you really want Twinkie the Kid stomping around your home? Think carefully...) Also: Children should definitely climb inside strange delivery trucks, Geordi La Forge wants you to fish yogurt lids out of the trash, and Kornflake has lobster wine, because of course she does. Links: Mike's Amazing World of Comics has an amazing collection of Hostess ads! Our Top 4 1/2 Hostess ads: Spider-Man vs. The Human Computer! The Road Runner in Paul Revere's Run! Josie in Getting Noticed! Richie Rich Meets 2 Smart Robots! Mera Meets the Manta Men! American Sci-Fi Classics Track's Hostess Fruit Pie Theater! And our regular links... The Flopcast website! The ESO Network! The Flopcast on Facebook! The Flopcast on Instagram! The Flopcast on Twitter! Please rate and review The Flopcast on Apple Podcasts! Email: info@flopcast.net Our music is by The Sponge Awareness Foundation! This week's promo: Earth Station DCU!
In this week’s episode, we discuss the emerging importance of analytics in business operations, as well as the development of human-computer interaction and the use of cryptography in computer science. Listen to episode 10 of TheOpenShow to learn more! --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
It's helpful to think of meditating as just taking a little break - just resting. This takes the pressure off. Meditating is just sitting with whatever your experience is. Nothing more, nothing less. There are often many fancy and esoteric things attached to the word "meditation" and those are great for some people but that becomes more like "training" and less just meditating. There's no right or wrong way to do it, you're just sitting there looking at "this human thing" you're experiencing right in the face. Please excuse the background noise, I had my windows open ;) Also I just kinda made this up as I went along, so, hope it was nice. luv u - kat

Follow This Human Thing for updates and behind the scenes!
✧ instagram: @thishumanthing
✧ facebook: @thishumanthing
✧ visit the website & subscribe to our newsletter! ✧

You can tune into This Human Thing on Apple Podcasts, Spotify, Google Podcasts, and watch the video recording of the episode for extras on youtube!

Support the Show! If you'd like to support the show, check out the patreon page! Or contribute one time on paypal. Also, leaving a rating and review on the podcast (apple, spotify, or google) really helps us get your feedback and expands the show out to more ears :-)

Want a sticker? Subscribe, rate, and leave a review. Then send me a DM. Share the podcast with your friends and family if you liked it!

Alright, that's it for the show notes! We would love to answer any questions you may have on the show! Send them into thishumanthingpodcast@gmail.com or through the social media links above. And don't hesitate to get in touch about anything else, I'd love to connect! peace in... peace out! thanks for kickin it with us. & keep doing your thing ;-)

Episode 8 of This Human Thing was recorded on August 30, 2020 in San Diego, CA.

Support the show (https://www.patreon.com/theshala)
Bob Berry is a principal at AnswerLab, where he’s guiding Google, Amazon, Facebook, and many others to create new, optimal online experiences in this age of coronavirus. He’s also the founder of The Human-Computer MasterMind Academy, helping entrepreneurs, professionals, and small businesses master the art and craft of exceptional human-computer experiences. He’s now leading concurrent online transformation projects with major global brands, and he speaks, teaches, consults, mentors, conducts research, and has directed leadership events worldwide, with executives at Microsoft, Canon, eBay, GE, Honeywell, Lilly, and many others. Founders365 is hosted by business coach Steven Haggerty and shares 365 insights from 365 founders during 2020
Shakuntala Devi was also a successful astrologer, cookbook author, and novelist.
Hello and Welcome to Movie Prizm, your dedicated Podcast channel for reviews in English on Multi-Language Movies, Web-Series & Trailers. I am Skala here and today we are reviewing the Hindi movie 'Shakuntala Devi' - The Human Computer.
With Vidya Balan in the titular role, Shakuntala Devi is a biographical comedy-drama based on the life of the world-famous mathematician, also known as the Human Computer. Written and directed by Anu Menon, the film also stars Sanya Malhotra in a pivotal role, with Jisshu Sen Gupta and Amit Sadh in supporting parts. The film is high on emotions and comes across as a mother-daughter tale in most parts. With the message 'Why be normal when you can be amazing?', the film shows Shakuntala's balancing act between maths and motherhood. Does she finally win over her relations by sacrificing her love for maths, or does she give up everything for her personal ambitions — it all unfolds with a pinch of humour. The film is now streaming on Amazon Prime Video.
Hiranmayee of Grade 8 on the Human Computer, Ms. Shakuntala Devi.
Dean is joined this time by Justin Van Zuiden to talk about how he got his start in accounting when he was younger and what it was like growing up in rural Illinois.
Episode 003 | June 02, 2020

Many of us who speak multiple languages switch seamlessly between them in conversations and even mix multiple languages in one sentence. For us humans, this is something we do naturally, but it’s a nightmare for computing systems to understand mixed languages. On this podcast with Kalika Bali and Dr. Monojit Choudhury, we discuss codemixing and the challenges it poses, what makes codemixing so natural to people, some insights into the future of human-computer interaction and more.

Kalika Bali is a Principal Researcher at Microsoft Research India working broadly in the area of Speech and Language Technology, especially in the use of linguistic models for building technology that offers more natural Human-Computer as well as Computer-Mediated interactions, and technology for Low Resource Languages. She has studied linguistics and acoustic phonetics at JNU, New Delhi and the University of York, UK and believes that local language technology, especially with speech interfaces, can help millions of people gain entry into a world that is till now almost inaccessible to them.

Dr. Monojit Choudhury has been a Principal Researcher at Microsoft Research Lab India since 2007. His research spans many areas of Artificial Intelligence, cognitive science and linguistics. In particular, Dr. Choudhury has been working on technologies for low resource languages, code-switching (mixing of multiple languages in a single conversation), computational sociolinguistics and conversational AI. He has more than 100 publications in international conferences and refereed journals. Dr. Choudhury is an adjunct faculty at the International Institute of Information Technology Hyderabad and Ashoka University. He also organizes the Panini Linguistics Olympiad for high school children in India and is the founding chair of the Asia-Pacific Linguistics Olympiad. Dr. Choudhury holds B.Tech and PhD degrees in Computer Science and Engineering from IIT Kharagpur.

Related Microsoft Research India Podcast: More podcasts from MSR India
iTunes: Subscribe and listen to new podcasts on iTunes
Android RSS Feed | Spotify | Google Podcasts | Email

Transcript

Monojit Choudhury: It is quite fascinating that when people become really familiar with a technology, and search engine is an excellent example of such a technology, people really don’t think of it as technology, people think of it as a fellow human and they try to interact with the technology as they would have done in natural circumstances with a fellow human.

[Music plays]

Host: Welcome to the Microsoft Research India podcast, where we explore cutting-edge research that’s impacting technology and society. I’m your host, Sridhar Vedantham.

[Music plays]

Host: Many of us who speak multiple languages switch seamlessly between them in conversations and even mix multiple languages in one sentence. For us humans, this is something we do naturally, but it’s a nightmare for computing systems to understand mixed languages. On this podcast with Kalika Bali and Monojit Choudhury, we discuss codemixing and the challenges it poses, what makes codemixing so natural to people, some insights into the future of human-computer interaction and more.

[Music plays]

Host: Kalika and Monojit, welcome to the podcast. And thank you so much. I know we’ve had trouble getting this thing together given the COVID-19 situation, we’re all in different spots. So, thank you so much for the effort and the time.

Monojit: Thank you, Sridhar.

Kalika: Thank you.

Host: Ok, so, to kick this off, let me ask this question.
How did the two of you get into linguistics? It's a subject that interests me a lot because I just naturally like languages and I find the evolution of languages and anything to do with linguistics quite fascinating. How was it that both of you got into this field?

Monojit: So, meri kahani mein twist hai (In Hindi- "there is a twist in my story"). I was in school, quite a geeky kind of a kid and my interests were the usual Mathematics, Science, Physics and I wanted to be a scientist or an engineer and so on. And, I did study language, so I know English and Hindi which I studied in school. Bangla is my mother tongue, so, of course I know. And I also studied Sanskrit in great detail, and I was interested in the grammar of these languages. Literature was not something which would pull me, but language was still in the backbench right, what I really loved was Science and Mathematics. And naturally I ended up in IIT, I studied in IIT Kharagpur for 4 years doing Computer Science, and everything was lovely.

And then one day there was a project when we were in final year where my supervisor was working on what is called a text to speech system. So, in this system, it takes a Hindi text and the system would automatically speak it out and there was a slight problem that he was facing. And he asked me if I could solve that problem. I was in my final year- undergrad year at that time. And the problem was how to pronounce Hindi words correctly. At that time, it sounded like a very simple problem, because in Hindi the way we write is the way we pronounce unlike English, where you know, you have to really learn the pronunciations. And turns out, it isn't. If you think of the words, 'Dhadkane' and 'Dhadakne', you pretty much write them in exactly the same way, but one you pronounce as 'Dhadkane' and the other one is pronounced as 'Dhadakne'. So, this was the issue. So, my friend, of course, who was also working with me was all for machine learning. And I was saying, there must be a pattern here and I went through lots and lots of examples myself and turned out that there is this very short, simple, elegant rule which can explain most of Hindi words- the pronunciation of those words perfectly. So, I was excited. I went to my professor, showed him the thing, he was saying, "Oh! This is fantastic!", let's write a paper and we got a paper and all this was great.

But then, somebody, when I was presenting the paper said, "Hey, you know what the problem you solved!" It's called 'schwa deletion' in Hindi. Of course, I wasn't in linguistics, neither my professor was, so he had no clue what was 'schwa' and what was 'schwa deletion'. I dug a little deeper and found out that people had written entire books on 'schwa deletion'. And, actually what I really found out was in line with what people had done their research on. And this got me really excited about linguistics. And more interestingly, you know, what I saw is, like you said, language evolution, if you think of why this is there. So, Hindi uses exactly the same style of writing that we use for Sanskrit. But in Sanskrit, there is no 'schwa deletion'. But if you look at all the modern Indian languages which came from Sanskrit, like Hindi, Bengali or Oriya, they have different degree of pronunciation different from Sanskrit. I am not getting into the detail of what exactly is 'schwa deletion', that's besides the point. But the pronunciations evolve from the original language.
The question I then eventually got interested in is, how this happens and why this happens. And then I ended up doing a Ph.D. with the same professor on, language evolution and how sound change happens across languages. And of course, being a computer scientist, I tried modelling all these things computationally. And then there was no looking back, I went, more and more deeper into language, linguistics and natural language processing.

Host: That's fascinating. And I know for sure that Kalika has got an equally interesting story, right? Kalika, you have a undergrad degree in chemistry?

Kalika: I do.

Host: Linguistics doesn't seem very much like a natural career progression from there.

Kalika: Yes, it doesn't. But before I start my story, I have one more interesting thing to say. When Monojit was presenting his 'schwa deletion' paper, I was in the audience. I was working somewhere else and I looked at my colleague at that time and said, "We should get this guy to come and work with us." So, I actually was there when he was presenting that particular 'schwa deletion' paper. So, yes, I was a Science student, I was studying Chemistry, and after Chemistry, the thing in my family was everybody goes for higher studies, I rebelled. I was one of those difficult children that we now are very unhappy about. But I said that I didn't want to study anymore. I definitely didn't want to do Chemistry and I was going to be a journalist, like my dad. I had already got a job to work in a newspaper. And I went to the Jawaharlal Nehru University to pick up a form for my younger sister. And I looked at the university and said, "This is a nice place, I want to study here." And then I looked at the prospectus, kind of flicked through it and said, "what's interesting?". And I looked at this thing called Linguistics, and it seemed very fascinating. I had no idea what linguistics was about. And then, there was also ancient history which I did know what it was about and it seemed interesting. So, I filled in forms and sat for the entrance exam, after having read like a thin, layman's guide to linguistics I borrowed from the British Council Library. And I got through. And the interesting thing is that the linguistic entrance exam was in the morning, the ancient history exam was in the afternoon. This was peak summer in Delhi. There were no fans in the place where the exam was being held. So, after taking the linguistic exam, I thought I can't sit for another exam in this heat and I left. So, I only took the linguistic exam. I got through, no one was more surprised than I was. And I saw it as a sign that I should be going. So, I started a course without having any idea what linguistics was and completely fell in love with the subject within the first month. And coming from a science background, I was very naturally attracted towards phonetics, which I think is, to really understand phonetics and speech science part of linguistics, you do need to have a lot of understanding of how waves worked- the physics of sound. So, that part came a little naturally to me and I was attracted towards speech and the rest as they say is history. So, I went from there, basically.

Host: Nice. So, chemistry's loss is linguistics gain.

Kalika: Yeah, my gain as well.

Host: Ok, so, I've heard you and Monojit talk at length and many times about this thing called codemixing. What exactly is codemixing?

Kalika: So, codemixing is when people in a multi-lingual community switch back and forth between two or more languages.
And you know, all of us here come from multi-lingual communities where, at a community level, not at an individual level, all of us speak more than one language, two, three, four. It’s very natural for us to keep switching between these languages in a normal conversation. So, right now of course, we are sticking to English, but if this was, say, in a different setting, we would probably be switching between Hindi, Bengali and English because these are three languages all three of us understand, right. Host: That’s true. Kalika: That’s what code switching is, when we mix languages that we know, when we talk to each other, interact with each other. Host: And how prevalent is it? Kalika: “Abhi bhi kar sakte hain” (in Hindi- “we can even do it now”). We can still switch between languages. Monojit: Yeah. Host: “Korte pari” (In Bangla- “we can do that”). Yeah, Monojit, were you saying something when I interrupted you? Monojit: You asked how prevalent it is. So, actually, linguists have observed that in all multi-lingual societies where people know multiple languages at a societal level, they codemix. But there was no quantitative data on how much mixing there is, and one of the first things we tried to do when we started this project was some measurement, to see how much mixing really happens. We looked at social media, where people usually talk the way they talk in real life. I mean, they type it, but it’s almost like speech. So we studied English-Hindi mixing in India, and one of the interesting things we found is that if you look at public forums on Facebook in India, and if you look at sufficiently long threads, let’s say 50 or more comments, then all of them are multi-lingual. You will find at least two comments in two different languages. And sometimes there will be many, many languages, not only two. And interestingly, if you look at each comment and try to measure how many of them are mixed within themselves, like a single comment containing multiple languages, it’s as high as 17%. Then we extended this study to Twitter, for seven European languages: English, French, Italian, Spanish, Portuguese, German and Turkish. And we studied how much codemixing was happening there. Again, interestingly, 3.5% of the tweets from, I would say, the western hemisphere are codemixed. I would guess that from South Asia the number would be very high; we already said 17% for India itself. But then, what’s interesting is, if you look at specific cities, the amount of codemixing also varies a lot. So, in our study we found Istanbul has the largest amount of codemixed tweets, as high as 13%. Whereas in some cities in the US, let’s say Houston, or other cities in the southern United States where we know there is a huge number of English-Spanish bilinguals, even then we see around 1% codemixing. So, yes, it’s all over the world and it’s very prevalent. Kalika: Yeah, and I would like to add that there is this mistaken belief that people codemix because they are not proficient in one language, you know, that people switch to their so-called native language or mother tongue when they are talking in English because they don’t know English well enough, or they can’t think of the English word and therefore they switch to, say, Hindi or Spanish or some other language. But that actually is not true. For people to be able to fluently switch between two languages and fluently codemix and code switch, they actually have to know both languages really well.
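As a rough illustration of the kind of measurement described in this exchange, here is a minimal sketch that assumes each comment already carries token-level language tags from some language identifier; the data and function names are hypothetical.

```python
# A minimal sketch of measuring code-mixing prevalence, assuming comments
# already carry token-level language tags (e.g. from a language identifier).

def is_mixed(comment_langs):
    """A comment is code-mixed if its tokens span more than one language."""
    return len(set(comment_langs)) > 1

def thread_stats(threads):
    """threads: list of threads; each thread is a list of per-comment
    language-tag lists. Returns (share of multilingual threads,
    share of comments mixed within themselves)."""
    multi_threads = sum(
        1 for t in threads
        if len({lang for comment in t for lang in comment}) > 1
    )
    comments = [c for t in threads for c in t]
    mixed_comments = sum(1 for c in comments if is_mixed(c))
    return multi_threads / len(threads), mixed_comments / len(comments)

# Toy example: two threads of tagged comments.
threads = [
    [["en", "en"], ["hi", "hi"], ["en", "hi", "hi"]],  # multilingual thread
    [["en", "en", "en"], ["en"]],                      # English-only thread
]
print(thread_stats(threads))  # -> (0.5, 0.2)
```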
Otherwise, it’s not mixing or code switching, it is just borrowing… borrowing from one language to another. Host: Right. So, familiarity with multiple languages basically gets you codemixing, whereas if you are forced to do it, that’s not codemixing. Codemixing is more intentional and purposeful, is what you are saying. Kalika: Exactly. Host: Ok. Do you see any particular situations or environments in which codemixing seems to be more prevalent than not? Kalika: Yeah, absolutely. So, in more formal scenarios, we definitely tend to stick to one language, and if you think about it, even if you are a mono-lingual, when you are talking in a formal setting you use a much more structured and very different kind of language than when you are speaking in an informal scenario. But as far as codemixing is concerned, linguists started looking into this a long time ago; some of the first papers published on code switching are from the 1940s. And at that time, it was definitely viewed as an informal use of language. But as informal language use has become much more acceptable in various scenarios over the decades, we’ve also started codemixing in many more scenarios. So earlier, if you think about it, if you looked at television, people stuck to just one language at a time. So, if it was a Hindi broadcast, it was just Hindi; if it was an English broadcast, it was just English. But now, television, radio, they all switch between English and multiple Indian languages when they are broadcasting. So, though it started as a much more informal use-case, now it’s much more prevalent in various scenarios. Monojit: And to add to that, there is a recent study which says that there are all the signs that Hinglish- the mixing of Hindi and English- is altogether a new language rather than mixing. Because there are children who grow up with that as their mother tongue. So, they hear Hinglish being spoken, or in other words codemixing between these two languages happening all the time in their family, by their parents and others in their family, and they take that as the language, the native language, they learn. So, it’s quite interesting: on one extreme, like Kalika mentioned earlier, there are words which are borrowed, so you just borrow them to fill a gap which is not there in your language, or because you can’t remember the word, whatever the reason might be. On the other extreme, you have two languages that are fused to give a new language. These are called fused lects, like Hinglish. I would leave it to you to decide whether you consider it a language or not. But definitely there are movies which are entirely in Hinglish, or ads which are in Hinglish; you can’t say they are either Hindi or English. And in between, of course, there is a spectrum of different levels of integration of mixing between the languages. Host: This is fascinating. You are saying something like Hinglish kind of becomes a language that’s natural rather than being synthetic. Kalika: Yes. Monojit: Yes. Host: Wow! Ok. Kalika: I mean, if you think of a mother tongue as the language that you dream in and then ask yourself what is the language that you dream in- I dream in Hinglish, so that’s my mother tongue. [Music plays] Host: How does codemixing come into play, or how does it impact the interaction that people have with computers or computing systems and so on?
Monojit: So, you know, there is again another misconception. In the beginning we said that when people codemix, they know both languages equally well. So, the misconception is that if I know both Hindi and English and my system, let’s say a search engine or a speech recognition or chat bot system, understands only one of the languages, let’s say English, then I will not use the other language or I will not mix the two languages. But we have seen that this is not true. In fact, a long time ago, and when I say a long time I mean, let’s say, ten years ago, when there was no research in the computational processing of codemixing and there were no systems which could handle codemixing, even at that time we saw that people issued a lot of queries to Bing which were codemixed. My favorite example is this one – “2017 mein, scorpio rashi ka career ka phal” (roughly, in Hindi- “the career predictions for the Scorpio sign in 2017”). So, this is the actual query. And everything is typed in the Roman script. Now, it has mixed languages, it has mixed scripts and everything. So it is quite fascinating that when people become really familiar with a technology, and the search engine is an excellent example of such a technology, people really don’t think of it as technology; people think of it as a fellow human and they try to interact with the technology as they would have done in natural circumstances with a fellow human. And that’s why even though we design chat bots or ASR (automatic speech recognition) systems with one particular language in mind, when we deploy them we see everybody mixing languages, even without realizing that they are mixing languages. So in that sense, all technologies that we build which are user facing, or any technology that is analyzing user-generated data, ideally should have the capability to process codemixed input. Host: So, you used the word “ideally”, which obviously means that it’s not necessarily happening as often or as much as it should be. So, what are the challenges out here? Kalika: Initially, the challenge was to accept that this happens. But now we have crossed that barrier and people do accept that a large percentage of this world lives in multi-lingual communities and that this is a problem; if those users are to interact naturally with the so-called natural language systems, then these systems have to handle and process codemixing. But I think the biggest challenge is data, because most language technologies these days are data hungry. They are all based on machine learning and deep neural network systems, and we require a huge amount of data to train these systems. And it’s not possible to get data in the same sense for codemixing as we can for mono-lingual language use, because if you think about it, the variation in codemixing, in where you can switch from one language to another, is very high. So, to get enough examples in your data of all the possible ways in which people can mix two languages is a very, very difficult task. And this has implications for almost all the systems that we might want to look at, like machine translation and speech recognition, because all of these ultimately rest on language models, and to train these language models we need this data. Host: So, are there any ways to address this challenge of data? Monojit: So, there are several solutions that we actually thought of. One is asking a fundamental question: “Do we really need a new data set for training codemixed systems?”
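To make the processing problem concrete, here is a toy sketch of the first step most codemixed pipelines need, word-level language identification for Romanized Hindi-English text; the tiny word lists below are hypothetical stand-ins for what real systems learn from data.

```python
# A toy sketch of word-level language identification for Romanized
# Hindi-English queries. The lexicons are placeholders; real systems
# train classifiers, since Romanized tokens are often ambiguous
# (e.g. "main" is both Hindi "I" and an English word).

HINDI_HINTS = {"mein", "ka", "rashi", "phal", "hai", "karta", "hoon", "pe"}
ENGLISH_HINTS = {"career", "the", "research", "scorpio", "on", "do"}

def tag_tokens(query):
    tags = []
    for token in query.lower().split():
        if token in HINDI_HINTS:
            tags.append((token, "hi"))
        elif token in ENGLISH_HINTS:
            tags.append((token, "en"))
        else:
            tags.append((token, "unk"))  # would need a trained classifier
    return tags

print(tag_tokens("2017 mein scorpio rashi ka career ka phal"))
```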
For instance, imagine a human being who knows two languages, let’s say Hindi and English, which the three of us know. And imagine that we have never heard anybody mix these two languages in our life before. A better example might be English and Sanskrit. I really haven’t heard anybody mixing English and Sanskrit. But if somebody does mix these two languages, would I be able to understand? Would I be able to point out that this sounds grammatical and this doesn’t? It turns out that, intuitively at least, for human beings that’s not a problem. We have an intuitive notion of what mixing is and which patterns of mixing are acceptable. And we really don’t need to learn the codemixed language as a separate language once we know the two languages involved equally well. So, this was the starting point for most of our research. So then we thought: instead of creating data in the codemixed language, how best can we start with mono-lingual data sets or mono-lingual models and somehow combine them to build codemixed models? Now, there are several approaches that we took and they worked to various degrees. But the most interesting one, which I would like to share, is based on some linguistic theories. These linguistic theories say that, given the grammars of the two languages, so if you have the grammar of English and, let’s say, Hindi, and depending on how these grammars are, there are only certain ways in which mixing is acceptable. To give an example, I can say, “I do research on codemixing”. Now, for this, I can codemix and say, “Main codemixing pe research karta hoon”. It sounds perfectly normal. “I do shodh karya on codemixing”- we don’t use that so often; we probably wouldn’t have heard it, but you might still find it quite grammatical. But if I say, “Main do codemixing par shodh karya”, does it sound natural to you? There is something which doesn’t sound right, and linguists have theories on why this doesn’t sound right. And, starting from those theories, we built models which can take data in two languages… parallel data, or, if you have a translator, you can actually translate a sentence. Let’s say you take “I do research in codemixing”, use an English-Hindi translator and translate it into Hindi: “Main codemixing (I don’t know what the Hindi for codemixing is) par shodh karya karta hoon”. And then, given this pair of parallel sentences, there is a systematic process by which you can generate all the possible ways in which these two sentences can be mixed in a grammatically valid way- in this case, Hinglish. Now, we built those models; the linguistic theories were just theories, so we had to flesh them out and build real systems which could generate this. Once we have that, you can imagine that there is no dearth of data. You can take any data in a single language, any English sentence, and convert it into codemixed versions. And then you have a lot of data. And then whatever you could do for English, you can now train the same system on this artificially created data and solve those tasks. So that was the basic idea, using which we could solve a lot of different problems, from translation to part-of-speech tagging to sentiment analysis to parsing. Host: So, what you are saying is that, given that you need a huge amount of data to build out models but the data is not available, you just create the data yourself. Monojit: Right. Host: Wow.
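A very rough sketch of this generation idea follows. It assumes a one-to-one, monotonic word alignment and leaves the theory-driven grammaticality filter as a stub, whereas the systems described above do the hard work with syntactic constraints.

```python
# A toy sketch of the data-generation idea, under heavy simplification.
# Assumptions (all hypothetical): a one-to-one, monotonic word alignment,
# no reordering, and a placeholder validity check. The real systems use
# linguistic theories (e.g. equivalence constraints over parses) where
# is_valid() sits here.

from itertools import product

def is_valid(tokens):
    # Placeholder for the theory-driven grammaticality filter;
    # in the actual approach, this is where the hard work happens.
    return True

def generate_codemixed(src_tokens, tgt_tokens, alignment):
    """Enumerate candidate mixed sentences by choosing, for each aligned
    unit, whether to realize it from the source or the target sentence."""
    candidates = set()
    for choice in product(("src", "tgt"), repeat=len(alignment)):
        tokens = [
            src_tokens[s] if lang == "src" else tgt_tokens[t]
            for lang, (s, t) in zip(choice, alignment)
        ]
        if is_valid(tokens):
            candidates.add(" ".join(tokens))
    return candidates

english = ["I", "do", "research", "on", "codemixing"]
hindi = ["main", "karta hoon", "research", "pe", "codemixing"]
alignment = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]  # hypothetical
for sentence in sorted(generate_codemixed(english, hindi, alignment)):
    print(sentence)
```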
Kalika: Yes, based on certain linguistic theoretical models which we have turned into computational models. Host: Ok, so, we’ve been talking about codemixing mostly as far as textual data is concerned. Now, are you doing something as far as speech is concerned? Kalika: Yes. Speech is slightly more difficult than pure text, primarily because there you have to look at both the acoustic models and the language models. But our colleague Sunayana Sitaram has been working for almost three years now on codemixed automatic speech recognition, and she has come up with a really interesting Hindi-English ASR system which was able to recognize a person speaking in mixed Hindi-English speech. Host: Interesting. And where do you see the application of all the work that you guys have done? I mean, I know you have been working on this stuff for a while now, right? Kalika: Think of opinion mining as one example, where you are looking at a lot of user-generated data. If that user-generated data is a mix of, say, English and Spanish, and your system can only process and understand English, so it can’t understand either the Spanish part or the mixed part, then the chances are that you will get a very skewed and most probably incorrect view of what the user is saying or what the user’s opinion is. And therefore, any analysis you do on top of that data is going to be incorrect. I think Monojit has a very good example of that in the work we did on sentiment and codemixing on Twitter, where he looked at how negative sentiment is expressed on Twitter. Monojit: Yeah. That’s actually pretty interesting. So this brings us to the question of why people codemix. We said in the beginning that, first, it’s not random, and second, it seems to have a purpose. So what is that purpose? Of course, there are lots of theories and observations from linguists, covering humor, sarcasm, even reported speech. All of these involve various degrees of codemixing, and there are reasons for this. So, we thought: there is a lot of codemixing on social media, so we could do a systematic and quantitative study of the different features which make people switch from Hindi to English or vice versa. We formulated a whole bunch of hypotheses to test based on the current linguistic theories. Our first hypothesis was that people might be switching from English to Hindi when they are moving from facts to opinions, because it’s a well-known thing that when you are talking about facts, you can speak in any language, and in the Indian context it’s more likely to be English. Whereas when you are expressing something emotional or an opinion, you are more likely to switch to your native language. So people might be more likely to switch to Hindi. We tried to test all these hypotheses and nothing was actually statistically significant; we didn’t see strong signals for that in the data. But where we did see a really strong signal is this: when people are expressing negative sentiment, they are more likely- actually nine times more likely- to use Hindi than when they are expressing positive sentiment. It seems like English is the preferred language for expressing positive sentiment, whereas Hindi is the preferred language for expressing negative sentiment.
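For readers curious how such a comparison is computed, here is a minimal sketch assuming each tweet is already tagged with a dominant language and a sentiment label; the toy data below are invented, and the study's reported figure was roughly nine times.

```python
# A minimal sketch of the language-versus-sentiment comparison described,
# assuming each tweet carries a dominant-language tag and a sentiment label
# (hypothetical data and field names).

from collections import Counter

tweets = [
    {"lang": "hi", "sentiment": "negative"},
    {"lang": "hi", "sentiment": "negative"},
    {"lang": "en", "sentiment": "negative"},
    {"lang": "en", "sentiment": "positive"},
    {"lang": "en", "sentiment": "positive"},
    {"lang": "hi", "sentiment": "positive"},
]

def hindi_share(sentiment):
    subset = [t for t in tweets if t["sentiment"] == sentiment]
    counts = Counter(t["lang"] for t in subset)
    return counts["hi"] / len(subset)

# How much more likely is Hindi under negative sentiment than positive?
ratio = hindi_share("negative") / hindi_share("positive")
print(round(ratio, 2))  # 2.0 on this toy data; the study reported ~9x
```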
And we wrote a paper based on these findings: we might praise you in English, but gaali to Hindi mein hi denge (In Hindi- we will swear only in Hindi). So, if you did sentiment analysis in only one language, let’s say English, and tried to do trend analysis of some Indian political issue based on that, it is very likely that you would get a much rosier picture, because if you look only at English, people would have said more positive things. And all the gaalis (cuss words) or negative things will actually be in Hindi, which you would be missing out on. So ideally you should process all the languages when you are looking at a multi-lingual society and analyzing content from there. Kalika: Yeah. And this actually touches a lot on why people codemix, and that’s a very vast area of research, because people codemix for a lot of reasons. People might codemix because they want to be sarcastic; people might codemix because they want to express in-group identity… the three of us can move to Bengali to kind of bond and show that we are part of this group that knows Bengali. Or you meet somebody and they want to keep you at a distance, so they don’t talk to you in that language or mix. So people do it for humor, people do it for reinforcement; there are a lot of reasons why people codemix, and if we miss out on all that, it’s very hard for us to make any firm claims about why people are saying what they are saying. Host: It seems like this is an extremely complex area of research which spans not just computer science or linguistics but also touches sentiment, opinion, and so on; there is a whole lot going on here. Monojit: Yeah, and in fact most of the computational linguistics work that you’d see draws from linguistics, starting from, you know, how grammar works, syntax, and maybe how meaning works, semantics. But codemixing goes much beyond that. So, we are now talking of what is called pragmatics and sociolinguistics. Pragmatics is about how language is used in a given context or situation. And modelling pragmatics is insanely difficult, because you not only need to know the language but you need to know the speakers, the relationship between the speakers, the context in which the speakers are situated and speaking, and all this information. So, for instance, a typical example is if I tell you, “Could you please pass the water bottle?” Now, technically it is a question and you could say, “Yes, I can.” But that’s not what will satisfy me, right; it’s actually a request. So, that’s how we use language, and what we say is not necessarily what we mean. And understanding this hidden intent is very situational; in different situations, the same sentence might mean very different things. Codemixing is actually at the boundaries of syntax, semantics and pragmatics. And sociolinguistics is the study of how language is used in society, especially how social variables correlate with linguistic variables. Social variables could be somebody’s level of education, somebody’s age, somebody’s gender, where somebody is from, and so on. And linguistic variables are whether it’s codemixed or not, and to what degree it is codemixed, just to give some examples. And we do see some very strong social factors which determine codemixing behavior. In fact, that’s used a lot in our Hindi movies, in Bollywood.
So, we did a study on Bollywood scripts; we studied some 37 or 40 Hindi movie scripts which are freely available online for research, to see where codemixing happens in Bollywood. And what we found is that codemixing is employed in a very sophisticated way by the script writers in two particular situations. One is when they want to show a sophisticated urban crowd, as opposed to a rural crowd. So if you look at movies like “Dum Lagake Haisha”, which are set either in a small town, in a rural setting or in the past, usually those movies will have a lot less codemixing. Then take, let’s say, “Kapoor & Sons” or “Pink”, which are typically set in a city and where the characters are all educated, urban people; codemixing is used heavily in these kinds of movies, just to signal that. And the other case where Bollywood uses a lot of codemixing, in fact accented codemixing, is when they want to show that somebody has been to “foreign”, as we would say- abroad- and has come back to India to interact with their poor country cousins. So, it’s used a lot in different ways in the movies. And that’s the sociolinguistics bit which is kicking in. Kalika: And you know, to add to that, and to what we touched upon earlier about how this usage has changed over time: in the earlier Bollywood movies, this mixing was much less common. Not only that, English was mostly used to denote who the villain in the movie was. If you look at movies from the 1960s or 70s, it’s always the smugglers, the kingpins of the mafia, who spoke a lot of English and mixed English into Hindi. So obviously that kind of change has happened over the years even in Bollywood movies. Host: I would never have thought about all these things. Villains speaking English, ok, in Bollywood! [Music plays] Host: Where do you see this area of research going in the future? Do you guys have anything in particular in mind, or are you just exploring to see? Kalika: I think one of the things we have been looking at a lot is how, when AI interacts with users, with humans, in this human-AI interaction scenario, codemixing fits in. Because there is one aspect where the user is mixing and you understand it, but does the bot or the AI agent also have to mix or not? And if the AI agent has to mix, then where and when should it mix? So, that’s something that we have been looking at, and that is something we think is going to play an important role in human-AI interaction going forward. We’ve studied this in some detail and it’s actually very interesting: people have a whole variety of attitudes towards not only codemixing but also towards AI bots interacting with them. And this kind of reflects on what they feel about a bot that will codemix and talk to them in a mixture of languages, irrespective of whether they themselves codemix or not. Our study has shown that some people would look at a bot which codemixes as ‘cool’, in a very positive way, but some people would look at it very negatively. And the reason for that is that some people might think that codemixing is not the right thing to do, that it’s not a pure language. Other people would think that it’s a bot, it should talk in a very “proper” way, so it should only talk in English or only talk in Hindi and it shouldn’t be mixing languages. And a certain set of people are kind of freaked out by the fact that the AI is trying to sound more human-like when it mixes. So, there is a wide range of attitudes that people have towards a codemixing AI agent. And how can we tackle that?
How do we make a bot, then, that codemixes or doesn’t codemix and still pleases the entire crowd, right? Host: Is there such a thing as pleasing the entire crowd? Kalika: So, we have ideas about that, about how to go about trying to at least please the crowd. Monojit: Yeah. Basically, you have to adapt to the speaker. Essentially, the way we please the crowd is through accommodation. So, when we talk to somebody who is primarily speaking in English, I will try to reciprocate in English. Whereas if somebody is speaking in Hindi, I will try to speak in Hindi if I want to please that person. Of course, if I don’t, then I will use the other language to show social distance. And this is one of the ways- what is called ‘Linguistic Accommodation Theory’. There are many other ways, or in general there are various style components that we incorporate in our day-to-day conversation, mostly unknowingly, based on whether we want to please the other person or not. So, call it sycophancy or whatever, but we want to build bots which model that kind of an attitude. And if we are successful, then the bot will be a crowd pleaser. Kalika: I don’t think it has so much to do with sycophancy- human beings actually have to cooperate, and that’s in a sense hardwired into us to a certain extent now. For evolutionary reasons, we do need to cooperate, and to be able to have a successful interaction, we have to cooperate. One of the ways we do this is by trying to be more like the person we are talking to; both parties kind of converge to a middle ground, and that’s what accommodation is all about. Host: So, Kalika and Monojit, this has been a very interesting conversation. Are there any final thoughts you’d like to leave with the listeners? Kalika: I hope people get an idea through our work on codemixing that human communication is quite intricate. There are many factors that come into play when human beings communicate with each other. There can be social contexts, there can be pragmatic contexts, and of course the structure of the language and the meaning that you are trying to convey; all of it plays a big role in how we communicate. And by studying codemixing in this context, we are hopefully able to grapple with a lot of these factors which, in very general human-human communication, become too big to handle all at once. Monojit: Yeah. Language is an extremely complicated and multi-dimensional thing. Codemixing is just one of the dimensions, where we are talking of switching between languages, but even within languages there are words, there are structural differences between languages, and sometimes you can use features of another language in your own language. It won’t be called codemixing, but essentially you are mixing. For instance, accents: when you speak your own native language in, let’s say, an accent borrowed from another language. In Indian English we use things like “little-little”… “those little-little things that we say”. Now, “little-little” is not really an English construct; this is a Hindi or Indian language construct which we are borrowing into English. Studying all of this at once would be extremely difficult. But on the other hand, codemixing does provide us with a handle on this problem of computationally modeling pragmatics and sociolinguistics and all those concepts, and on how we can then model these things not just for the sake of modeling- there are concrete use-cases… not only use-cases, they are needs.
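Returning to the accommodation idea above, here is a toy sketch of a bot that mirrors how much the user mixes; it is entirely hypothetical and far simpler than the style modeling the speakers describe.

```python
# A toy sketch of accommodation for a codemixing bot: estimate how much the
# user mixes, then aim for a similar mix in replies. Thresholds, tags and
# data are hypothetical.

def user_hindi_ratio(user_turns):
    """user_turns: list of token-tag lists, each tag 'hi' or 'en'."""
    tags = [tag for turn in user_turns for tag in turn]
    return tags.count("hi") / len(tags) if tags else 0.0

def pick_reply_style(user_turns):
    ratio = user_hindi_ratio(user_turns)
    if ratio > 0.7:
        return "mostly Hindi"
    if ratio < 0.3:
        return "mostly English"
    return "codemixed"  # mirror the user's mixing

turns = [["en", "en", "hi"], ["hi", "hi", "en", "hi"]]
print(pick_reply_style(turns))  # -> "codemixed" (4/7 Hindi tokens)
```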
Users are already codemixing when they interact with technology. So technology should respond by understanding codemixing and, if possible, even generating codemixing. So, through this entire research we are trying to close this loop: how linguistic theories can be used to build computational models, how these computational models can then be taken to users in all their complications and complexities, and how we then understand and learn from the user-technology interaction and feed that back into our models. This entire cycle of theory to application to deployment is what we would like to pursue, and get deeper insight into, in the context of natural language processing. Host: And I am looking forward to doing another podcast once you guys have gone further down the road with your research on that. Kalika and Monojit, this was a very interesting conversation. Shukriya (In Hindi/Urdu- thank you). Kalika: Aapka bhi bahut bahut thank you (In Hindi- many thanks to you too). It was great fun. Monojit: Thank you, Sridhar. Khoob enjoy korlam ei conversationta tomar shaathe. Aar ami ekta kotha (In Bangla- “I very much enjoyed this conversation with you, Sridhar. And there’s one thing”) I want to tell the audience: never feel apologetic when you codemix. This is all very natural; don’t think you are speaking an impure language. Thank you. Host: Perfect. [Music plays]
This episode is a conversation with Richard Savery - Composer, Musician, Programmer, and Specialist in Human-Computer Interaction. iTunes link: https://podcasts.apple.com/us/podcast/music-in-mind/id1457953549 Richard's Links: Website: www.richardsavery.com GitHub - https://richardsavery.github.io/ Shimon The Robot's Links: Shimon Sings Album - https://open.spotify.com/album/49mqgxoLXFGP5NnBB5PQAU Anthony's Links: Website - http://www.anthonycaulkins.com Patreon - https://www.patreon.com/anthonycaulkinsmusic YouTube - https://www.youtube.com/channel/UC8ZV0fhfBfc2Ehzp-5kQzww Facebook - https://www.facebook.com/anthonycaulkinsmusic Instagram - https://www.instagram.com/anthony.caulkins/ Twitter - https://twitter.com/Anthony_C_Music Spotify - https://open.spotify.com/artist/2zZw7ctpzHdIye5atlY8T2 Bandcamp - https://anthonycaulkins.bandcamp.com/ Soundcloud - https://soundcloud.com/adjcmusic --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app --- Send in a voice message: https://anchor.fm/music-in-mind/message
In the first episode of this podcast about Nanna Debois Buhl's exhibition Stellar Spectra, we meet the scientist Annie Maunder and hear about her work as a "human computer" and solar photographer. In the late 1800s, when Maunder lived and worked, "computer" was a job title: mathematical calculations had to be done by hand, and it was not unusual for scientific institutions to employ computers to work through their numbers. The podcast also covers Buhl's Ph.D. project and its connection to one of her earlier works, "Moon Memory", which took the Apollo 11 mission as its starting point. The podcast features Nanna Debois Buhl and astrophysicist Anja Cetti Andersen. Vignettes and narration are voiced by the author Helene Johanne Christensen. The podcast was produced for Fotografisk Center by Johannes Lund With.
Welcome to Episode 64 of the snobOS Podcast! Intro: We talk birthday vacation plans and parties. The Lowdown: We talk the Apple shareholder meeting, possible over-the-air iOS recovery coming to iOS 13.4, and Eero routers getting HomeKit support. 2nd String: We talk the ClearAI startup breach. For the Culture: We talk Tech-ing While Black, in honor of Black History Month: Katherine Johnson, NASA's human computer, and the Rolex or rental property debate. The Hookup: We talk how to safely use Emergency SOS. Apple Podcasts | Spotify | Google Podcasts Email: snobOScast@gmail.com Follow snobOS Podcast @snobOScast Follow Nica Montford @TechSavvyDiva Follow Terrance Gaines @BrothaTech Download, rate & review on Apple, Google & Spotify Engage on social @snobOScast Leave comments and suggestions Web: snobOScast.com Email: snobOScast@gmail.com
Secret Rodeo Rider, Human Computer, Record Planker, WHO Update and Fat Tuesday!
Learn about the legacy of the trailblazing NASA mathematician Katherine Johnson; how scientists recently built xenobots, the world’s first living robots; and why zinc probably isn’t as good for colds as you think. Katherine Johnson Is the Human 'Computer' Who Helped Us Go to Space by Ashley Hamer Hamer, A. Katherine Johnson Is the Human “Computer” Who Helped Us Go to Space. (2016, December 13). Curiosity.com. https://curiosity.com/topics/katherine-johnson-is-the-human-computer-who-helped-us-go-to-space-curiosity/ Xenobots: the World’s First Assembled Organisms by Cameron Duke Team Builds the First Living Robots. (2020, January 13). Uvm.edu. https://www.uvm.edu/uvmnews/news/team-builds-first-living-robots Simon, M. (2020, January 13). Meet Xenobot, an Eerie New Kind of Programmable Organism. Wired; WIRED. https://www.wired.com/story/xenobot/ Kriegman, S., Blackiston, D., Levin, M., & Bongard, J. (2020, January 28). A scalable pipeline for designing reconfigurable organisms. Proceedings of the National Academy of Sciences, 117(4), 1853–1859. https://doi.org/10.1073/pnas.1910837117 Zinc Probably Isn't as Good for Colds as You Think by Grant Currin Can zinc zap a cold? (2017). Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/common-cold/expert-answers/zinc-for-colds/faq-20057769 Zinc lozenges did not shorten the duration of colds. (2020). EurekAlert! https://www.eurekalert.org/pub_releases/2020-01/uoh-zld012820.php Hemilä, H., Haukka, J., Alho, M., Vahtera, J., & Kivimäki, M. (2020). Zinc acetate lozenges for the treatment of the common cold: a randomised controlled trial. BMJ Open, 10(1), e031662. https://doi.org/10.1136/bmjopen-2019-031662 Stark, L. (2009, June 16). Zicam Zinc Nasal Sprays May Damage Sense of Smell, FDA Says. ABC News; ABC News. https://abcnews.go.com/Health/ColdandFluNews/story?id=7853178&page=1 Subscribe to Curiosity Daily to learn something new every day with Cody Gough and Ashley Hamer. You can also listen to our podcast as part of your Alexa Flash Briefing; Amazon smart speakers users, click/tap “enable” here: https://curiosity.im/podcast-flash-briefing
What do coding a computer and coding a human cell have in common? Jeff Galvin, CEO at American Gene Technologies International Inc. (AGT), explains to Legal Drugs Podcast host Angela Stoyanovitch during the week of the JP Morgan/Biotech Showcase conference in downtown San Francisco, CA. Check out Episode 14, "Funding the Future of Legal Drugs from the Streets of San Francisco; A Recap of The Biotech Showcase by Angela Stoyanovitch," for more background on the setting of this recorded interview. You will hear Angela describe the new surge of focus on gene and cell therapies, which are a more targeted way of developing legal drugs today. AGT has a mission to leverage the power of gene and cell therapy to reduce human suffering from serious human disease. So, how do they do this and what does it have to do with technology? Thanks to Jeff's background as a Computer Scientist and Technologist, the most complex of ideas is made simple in this episode of the podcast. You will learn what the human body or human cell has in common with the computer, which uses zeros and ones to operate its systems and software. Jeff explains that your cell is the ultimate, sophisticated organic computer. He breaks down the letters A, C, T and G, which are the DNA nucleotides, and draws a parallel to how cells can be hacked just like a computer. What does this mean for the future of legal drugs? Gone are the days when chemists and drug developers mix together chemotherapies that simply aren't working to cure disease. Now, with AGT's new platform, which uses the fundamentals of software development for finding new drug candidates and discovering new drugs, Jeff boasts that legal drugs can be developed more specifically and more profitably. AGT is now a machine for cell and gene therapies. Jeff and the team at AGT are awaiting news from the U.S. Food and Drug Administration (FDA) to head to the clinic to test a new HIV cure (pending FDA approval after filing an IND – Investigational New Drug application – for permission to start a human clinical trial). This episode of Legal Drugs Podcast was edited and produced by Margaret Beveridge.
Want to be featured as a guest on Making Data Simple? Reach out to us at [almartintalksdata@gmail.com] and tell us why you should be next. Abstract: This week on Making Data Simple, our guest is Arin Bhowmick, vice president and Chief Design Officer for IBM Cloud, Data and AI. Host Al Martin strikes up a conversation on what defines good design, why user experience is critical to product development, along with tips for good leadership and company culture. Tune in for a high-value discussion. Connect with Arin: LinkedIn, Twitter, Medium. Show Notes: 04:17 - Check out this Medium article on the importance of design. 09:07 - Click here to look at IBM's page for design thinking. 24:35 - Learn more on Human-Computer Interaction here. 27:14 - Is A.I. scary? Read how Watson answered this question here. Connect with the Team: Producer Liam Seston - LinkedIn. Producer Lana Cosic - LinkedIn. Producer Meighann Helene - LinkedIn. Host Al Martin - LinkedIn and Twitter.
What if the human body were a computer? Exploring how current aspects of life could affect the computer and how we learn from them to make ourselves more conscious. Human Computer – an article written by Hal. namchetolukla.com/blog/metaphysics/human-computer/
Today we speak to Daniel Hajas, a second-year PhD student at the Sussex Computer Human Interaction (SCHI) Lab. We talk to him about his work at the intersection of mid-air haptics, science communication and Human-Computer Interaction. We discuss the use of tactile experiences for provoking personal responses that are known to be relevant in science communication, such as interest or enjoyment, and how he hopes his research will make science more tangible, more 'real', and therefore more digestible for the public. In this episode you are even lucky enough to hear Gigi and Veronica's fun facts about their work! -------- To learn about some of the work Daniel does outside of his PhD, check out his company Grapheel LTD https://www.grapheel.com/
Kostadin Kushlev (University of Virginia, USA) discusses his research which investigated whether spending time on smartphones distracts parents from connecting with their children. Posted March 2019. Read the associated article here.
Discover more about NASA’s “Hidden Figures” as Neil deGrasse Tyson sits down with author Margot Lee Shetterly, Janelle Monáe, comic co-host Sasheer Zamata, NASA Chief Historian Bill Barry, NASA systems engineer Tracy Drain, and Bill Nye the Science Guy. NOTE: StarTalk All-Access subscribers can listen to this entire episode commercial-free here: https://www.startalkradio.net/all-access/hidden-figures-with-margot-lee-shetterly-and-janelle-monae/ Photo Credit: Brandon Royal
Emma and Winston discuss the novel Dune by Frank Herbert, a seminal novel in the Science-fiction genre, and how it represents a Holy War on a cosmic scale. We talk about religion, mythology, the function of Sci-fi and fantasy in society, the hajj and the influence of Islam, Cistercian Monks & Chablis, and Winston recites beautiful poetry. Plus: desert (not dessert) wines, the American Southwest, the prayer of the Human Computer, diurnal shifts, the Crusades, cockamamie, and Sting. Content warning: discussion of sexual assault and predatory behavior in the novel Dune by Frank Herbert. If you are looking for help or support please visit rainn.org or call 1-800-656-HOPE (1-800-656-3673). If these subjects bother you, we recommend skipping minutes 33-38. Find Us Online: If you enjoy Pairing, follow us on social media and tell your friends! Follow us on Twitter, Facebook, Instagram, & Tumblr @PairingPodcast. Become a Pairing Patron on Patreon to get access to exclusive content, personalized pairings, livestreams and more! Please consider leaving us a review on Apple Podcasts, as that's one of the best ways to get more people listening in! Feel free to contact us via our website, www.thepairingpodcast.com. About Pairing: Pairing was created and produced by Emma Sherr-Ziarko, with music and audio recording by Winston Shaw, and artwork by Darcy Zimmerman and Katie Huey.
Rahman Johnson sits down with Yvette Ridley, one of the US Government's original human computers.
In 1980 Ben Shneiderman published one of the first texts in the field that would come to be known as human-computer interaction, and has since pioneered innovations we take for granted today, like touchscreens and hyperlinks. He has now turned his attention to maximising the real-world impact of university research, by combining applied and basic research; a topic he addresses in his new book 'The New ABCs of Research'. He chats with our reporter Steve Grimwade. Episode recorded: December 14 2017 Interviewer: Steve Grimwade Producers: Dr Andi Horvath, Chris Hatzis and Silvi Van-Wall Audio engineer: Gavin Nebauer Editor: Chris Hatzis Banner image: Getty Images
Astronomy Cast 471: Best Modern Sci Fi for the Science Lover – Part 3: Human Computer Relations, by Fraser Cain & Dr. Pamela Gay
It's time to talk computers, and how we're going to be dealing with them in the future. In our next segment on modern sci-fi, we talk about the future of the human-computer interface.
Science News has named Ehsan Hoque, an assistant professor of computer science, one of ten Scientists to Watch in 2017. Hoque and the others were selected for "their important contributions to their fields and their potential for an even more tremendous impact in the years to come." His research focuses on human behavior as seen through a computational lens. Quadcast host Peter Iglinski talks with Prof. Hoque about his work and this latest honor.
As a NASA computer, Katherine Johnson calculated the trajectory for Alan Shepard, the first American in space. Now NASA is honoring this remarkable woman.
Everyday Einstein's Quick and Dirty Tips for Making Sense of Science
In honor of International Women’s Day this week, let’s highlight the work of Katherine Johnson, math teacher, NASA human computer, and inspiration for Margot Lee Shetterly’s recent book and the feature film, Hidden Figures. What does it mean to be a “human computer” for NASA? How are launch dates and flight paths for spacecraft determined? Read the full transcript here: http://bit.ly/2mpxgQF
Elon Musk is calling for humans to become cyborgs, or the Artificial Intelligences we create may well end up being our masters. In today’s podcast, we talk about Uncle Elon’s predictions, play and discuss some audio from Stephen Hawking about AI, and discuss AI's impending impact on what might well be America’s largest jobs sector. --- Related articles: Elon Musk thinks humans need to become cyborgs or risk irrelevance http://craigpeterson.com/news/elon-musk-thinks-humans-need-to-become-cyborgs-or-risk-irrelevance/11718 --- More stories and tech updates at: www.craigpeterson.com Don't miss an episode from Craig. Subscribe and give us a rating: www.craigpeterson.com/itunes Follow me on Twitter for the latest in tech at: www.twitter.com/craigpeterson For questions, call or text: 855-385-5553
The O’Reilly Radar Podcast: AI on the hype curve, imagining nurturing technology, and gaps in the AI conversation. This week, I sit down with anthropologist, futurist, Intel Fellow, and director of interaction and experience research at Intel, Genevieve Bell. We talk about what she’s learning from current AI research, why the resurgence of AI is different this time, and five things that are missing from the AI conversation. Here are some highlights:
AI’s place on the wow-ahh-hmm curve of human existence
I think in some ways, for me, the reason for wanting to put AI into a lineage is that many of the ways we respond to it as human beings are remarkably familiar. I'm sure you and many of your viewers and listeners know about the Gartner Hype Curve: the notion that at first you don’t talk about it very much, then there's the arc of it being everywhere, and then it goes into the valley of it not being so spectacular until it stabilizes. I think most humans respond to technology not dissimilarly. There's this moment where you go, 'Wow. That’s amazing,' promptly followed by the 'Uh-oh, is it going to kill us?' promptly followed by the 'Huh, is that all it does?' It's sort of the wow-ahh-hmm curve of human existence. I think AI is in the middle of that. At the moment, if you read the tech press, the trade presses, and the broader news, AI is simultaneously the answer to everything. It's going to provide us with safer cars, safer roads, better weather predictions. It's going to be a way of managing complex data in simple manners. It's going to beat us at chess. On the one hand, it's all of that goodness. On the other hand, it raises both the traditional fears of technology: is it going to kill us? Will it be safe? What does it mean to have autonomous things? What are they going to do to us? And then the reasonable questions about what models we are using to build this technology out. When you look across the ways it's being talked about, there are those three different factors: one of excessive optimism, one of a deep dystopian fear, and then another starting to run a critique of the decisions that are being made around it. I think that’s, in some ways, a very familiar set of positions about a new technology.
Looking beyond the app that finds your next cup of coffee
I sometimes worry that we imagine that each generation of new technology will somehow mysteriously and magically fix all of our problems. The reality is 10, 20, 30 years from now, we will still be worrying about the safety of our families and our kids, worrying about the integrity of our communities, wanting a good story to keep us company, worrying about how we look and how we sound, and being concerned about the institutions in our existence. Those are human preoccupations that are thousands of years deep. I'm not sure they change this quickly. I do think there are harder questions about what that world will be like and what it means to have the possibility of machinery that is much more embedded in our lives and our world, and about what that feels like. In the fields that I come out of, we've talked about human-computer interactions since about the same time as AI, and they have really sat inside one paradigm, what we should call a command-and-control infrastructure. You give a command to the technology, you get some sort of answer back; whether that’s old command prompt lines or Google search boxes, it is effectively the same thing.
We're starting to imagine a generation of technology that is a little more anticipatory and a little more proactive, that’s living with us—you can see the first generation of those, whether that's Amazon's Echo or some of the early voice personal assistants. There's a new class of intelligent agents coming, and I wonder sometimes, if we move from a world of human-computer interactions to a world of human-computer relationships, whether we have to start thinking differently. What does it mean to imagine technology that is nurturing, or that has a care, or that wants you to be happy, not just efficient, or that wants you to be exposed to transformative ideas? It would be very different than the app that finds you your next cup of coffee.
There’s a lot of room for good AI conversations
What's missing from the AI conversation are the usual things I think are missing from many conversations about technology. One is an awareness of history. I think, like I said, AI doesn’t come out of nowhere. It came out of a very particular set of preoccupations and concerns in the 1950s and a very particular set of conversations. We have, in some ways, erased that history such that we forget how it came to be. For me, a sense of history is missing. As a result of that, I think more attention to a robust interdisciplinarity is missing, too. If we're talking about a technology that is as potentially pervasive as this one and as potentially close to us as human beings, I want more philosophers and psychologists and poets and artists and politicians and anthropologists and social scientists and critics of art—I want them all in that conversation because I think they're all part of it. I worry that this just becomes a conversation of technologists to each other about speeds and feeds and their latest instantiation, as opposed to saying, if we really are imagining a form of an object that will be in dialogue with us, supplementing us and replacing us in some places, I want more people in that conversation. That's the second thing I think is missing. I also think a third thing is emerging, and I hear in people like Julia Ng and my colleagues Kate Crawford and Meredith Whittaker an emerging critique: How do you critique an algorithm? How do you start to unpack a black-boxed algorithm to ask questions about what pieces of data they are weighing against what, and why? How do we have the kind of dialogue that says, sure, we can talk about the underlying machinery, but we also need to talk about what's going into those algorithms and what it means to train these objects? For me, there's then the fourth thing, which is: where is theory in all of this? Not game theory. Not theories about machine learning and sequencing and logical decision-making, but theories about human beings, theories about how certain kinds of subjectivities are made. I was really struck, in reading many of the histories of AI but also the contemporary work, by how much we make of normative examples in machine learning and in training, where you're trying to work out the repetition—what's the normal thing, so we should just keep doing it? I realized that sitting inside those are always judgements about what is normal and what isn't. You and I are both women. We know that routinely women are not normal inside those engines.
There's something about what it would mean to start asking a set of theoretical questions that come out of feminist theory, out of Marxist theory, out of queer theory, out of critical race theory, about what it means to imagine normal here and what is and what isn't. Machine learning people would recognize this as the question of how you deal with the outliers. I think my theory would be: what if we started with the outliers rather than the center, and where would that get you? I think the fifth thing that’s missing is: what are the other ways into this conversation that might change our thinking? As anthropologists, one of the things we're always really interested in is, can we give you that moment where we de-familiarize something? How do you take a thing you think you know and turn it on its head so you go, 'I don’t recognize that anymore'? For me, that’s often about how you give it a history. Increasingly, I realize in this space there's also a question to ask about what other things we have tried to machine learn on—so, what other things have we tried to use natural language processing, reasoning, induction on to make into supplemental humans or into things that do tasks for us? Of course, there's a whole category of animals we've trained that way—carrier pigeons, sheep dogs, bomb-sniffing dogs, Koko the gorilla who could sign. There's a whole category of those, and I wonder if there's a way of approaching that topic that gets us to think differently about learning, because that’s sitting underneath all of this, too. All of those things are missing. When you've got that many things missing, that’s actually good. It means there's a lot of room for good conversations.
The O’Reilly Radar Podcast: AI on the hype curve, imagining nurturing technology, and gaps in the AI conversation.This week, I sit down with anthropologist, futurist, Intel Fellow, and director of interaction and experience research at Intel, Genevieve Bell. We talk about what she’s learning from current AI research, why the resurgence of AI is different this time, and five things that are missing from the AI conversation.Here are some highlights: AI’s place on the wow-ahh-hmm curve of human existence I think in some ways, for me, the reason of wanting to put AI into a lineage is many of the ways we respond to it as human beings are remarkably familiar. I'm sure you and many of your viewers and listeners know about the Gartner Hype Curve, the notion of, at first you don’t talk about it very much, then the arc of it's everywhere, and then it goes to the valley of it not being so spectacular until it stabilizes. I think most humans respond to technology not dissimilarly. There's this moment where you go, 'Wow. That’s amazing' promptly followed by the 'Uh-oh, is it going to kill us?' promptly followed by the, 'Huh, is that all it does?' It's sort of the wow-ahh-hmm curve of human existence. I think AI is in the middle of that. At the moment, if you read the tech press, the trade presses, and the broader news, AI's simultaneously the answer to everything. It's going to provide us with safer cars, safer roads, better weather predictions. It's going to be a way of managing complex data in simple manners. It's going to beat us at chess. On the one hand, it's all of that goodness. On the other hand, there are being raised both the traditional fears of technology: is it going to kill us? Will it be safe? What does it mean to have autonomous things? What are they going to do to us? Then the reasonable questions about what models are we using to build this technology out. When you look across the ways it's being talked about, there are those three different factors. One of excessive optimism, one of a deep dystopian fear, and then another starting to run a critique of the decisions that are being made around it. I think that’s, in some ways, a very familiar set of positions about a new technology. Looking beyond the app that finds your next cup of coffee I sometimes worry that we imagine that each generation of new technology will somehow mysteriously and magically fix all of our problems. The reality is 10, 20, 30 years from now, we will still be worrying about the safety of our families and our kids, worrying about the integrity of our communities, wanting a good story to keep us company, worrying about how we look and how we sound, and being concerned about the institutions in our existence. Those are human preoccupations that are thousands of years deep. I'm not sure they change this quickly. I do think there are harder questions about what that world will be like and what it means to have the possibility of machinery that is much more embedded in our lives and our world, and about what that feels like. In the fields that I come out of, we've talked a lot since about the same time as AI about human computer interactions, and they really sat inside the paradigm. One about what should we call a command-and-control infrastructure. You give a command to the technology, you get some sort of piece of answer back; whether that’s old command prompt lines or Google search boxes, it is effectively the same thing. 
We're starting to imagine a generation of technology that is a little more anticipatory and a little more proactive, that’s living with us—you can see the first generation of those, whether that's Amazon's Echo or some of the early voice personal assistants. There's a new class of intelligent agents coming, and I wonder sometimes whether, as we move from a world of human-computer interactions to a world of human-computer relationships, we have to start thinking differently. What does it mean to imagine technology that is nurturing, that cares for you, that wants you to be happy and not just efficient, or that wants you to be exposed to transformative ideas? It would be very different from the app that finds you your next cup of coffee.

There’s a lot of room for good AI conversations

What's missing from the AI conversation are the usual things I think are missing from many conversations about technology. One is an awareness of history. Like I said, AI doesn’t come out of nowhere. It came out of a very particular set of preoccupations and concerns in the 1950s and a very particular set of conversations. We have, in some ways, erased that history such that we forget how it came to be. For me, a sense of history is missing. As a result of that, I think more attention to a robust interdisciplinarity is missing, too. If we're talking about a technology that is as potentially pervasive as this one and as potentially close to us as human beings, I want more philosophers and psychologists and poets and artists and politicians and anthropologists and social scientists and critics of art—I want them all in that conversation, because I think they're all part of it. I worry that this just becomes a conversation of technologists talking to each other about speeds and feeds and their latest instantiation, as opposed to saying: if we really are imagining a form of object that will be in dialogue with us, supplementing us and replacing us in some places, I want more people in that conversation. That's the second thing I think is missing. A third thing is only just emerging: I hear in people like Julia Ng and my colleagues Kate Crawford and Meredith Whittaker an emerging critique. How do you critique an algorithm? How do you start to unpack a black-boxed algorithm to ask what pieces of data it is weighing against what, and why? How do we have the kind of dialogue that says, sure, we can talk about the underlying machinery, but we also need to talk about what's going into those algorithms and what it means to train objects? Then, for me, there's the fourth thing: where is theory in all of this? Not game theory. Not theories about machine learning and sequencing and logical decision-making, but theories about human beings, theories about how certain kinds of subjectivities are made. I was really struck, in reading many of the histories of AI but also the contemporary work, by how much we make of normative examples in machine learning and in training, where you're trying to work out the repetition—what's the normal thing, so we should just keep doing it? I realized that sitting inside those are always judgements about what is normal and what isn't. You and I are both women. We know that routinely women are not normal inside those engines.
There's something about what it would mean to start asking a set of theoretical questions that come out of feminist theory, Marxist theory, queer theory, and critical race theory about what it means to imagine 'normal' here, and what is and what isn't. Machine learning people would recognize this as the question of how you deal with outliers. My theory would be: what if we started with the outliers rather than the center, and where would that get you? I think the fifth thing that’s missing is: what are the other ways into this conversation that might change our thinking? As anthropologists, one of the things we're always really interested in is whether we can give you that moment where we de-familiarize something. How do you take a thing you think you know and turn it on its head so you go, 'I don’t recognize that anymore'? For me, that’s often about giving it a history. Increasingly, I realize in this space there's also a question to ask about what other things we have tried to machine learn on—what other things have we tried to apply natural language processing, reasoning, and induction to in order to make them into supplemental humans or into things that do tasks for us? Of course, there's a whole category of animals we've trained that way—carrier pigeons, sheep dogs, bomb-sniffing dogs, Koko the gorilla who could sign. There's a whole category of those, and I wonder if there's a way of approaching that topic that gets us to think differently about learning, because that’s sitting underneath all of this, too. All of those things are missing. When you've got that many things missing, that’s actually good. It means there's a lot of room for good conversations.
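As an aside on the machine-learning framing above: here is a minimal sketch, in Python with NumPy, of the conventional practice being questioned. The dataset, the 3-sigma threshold, and all variable names are illustrative assumptions, not anything from the interview. A typical pipeline flags points far from the mean as "not normal" and drops them before modeling; inverting the question would mean examining those flagged points first.

import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: a large "normal" cluster plus a handful of points far from it.
center = rng.normal(loc=0.0, scale=1.0, size=980)
far_points = rng.normal(loc=8.0, scale=0.5, size=20)
data = np.concatenate([center, far_points])

# Conventional step: flag anything more than 3 standard deviations from the mean.
z_scores = np.abs((data - data.mean()) / data.std())
is_outlier = z_scores > 3.0

kept = data[~is_outlier]       # what a model is usually trained on
discarded = data[is_outlier]   # what usually never reaches the model

print(f"kept {kept.size} points, discarded {discarded.size} points")
print(f"mean of kept points:      {kept.mean():.2f}")
print(f"mean of discarded points: {discarded.mean():.2f}")

# The inversion suggested in the conversation, posed as a question rather than
# a method: start the analysis with `discarded` and ask what those cases have
# in common, instead of modeling only `kept`.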
When you decide to wake up each day and better yourself, you will realize that there is so much power in personal development. A master is not someone who is done developing; a master is one who knows they can never be done developing. Get ready to update your mental, emotional, and physical software on this episode of NSOL Radio.
Luca Mascaro, Chief Strategy Officer of Dixero SA
Luca has more than 10 years of experience in information design, user interface design, interaction design, and accessibility for Web sites and Web applications. He is founder of SketchIn—a user experience design and strategy consultancy located in Lugano, Switzerland—and co-founder and UX manager at Phiware Engineering—a Swiss firm specializing in enterprise applications. Luca’s work focuses on agile UX design, applying user-centered design principles to projects for clients such as Generali Assurance, Swiss-Italian Television, the Swiss Post, and Italy’s Innovation Department. Luca is president of IOSHI (International Organization for Standards in Human/Computer Interfaces)—an association that promotes standards in human/computer interfaces—is a member of many international organizations—such as IWA/HWG (International Webmasters Association/HTML Writers Guild) and UPA (Usability Professionals’ Association)—and participates in the W3C HTML, Web Content Accessibility, and Web Application working groups, as well as the working group for ISO software ergonomics.
In this episode, Scientific American editor Mark Alpert talks about his trip inside the Tevatron, the world's most powerful particle accelerator, at the Fermi National Accelerator Laboratory, and the future of the Tevatron, specifically for neutrino research. Scientific American senior writer Wayt Gibbs reports on the recent CHI2006 conference. CHI stands for computer-human interaction, and the conference is the largest annual meeting of computer scientists who study and invent the ways that humans and computers talk to each other. Wayt interviewed Ed Cutrell, from Microsoft Research's Adaptive Systems and Interaction group, and reviews some of the subjects he came across at the meeting. Finally, computer scientist and chemist Ehud Shapiro talks about DNA computers and his article on the subject in the May issue of Scientific American. Plus, test your knowledge about some recent science in the news.