Optics research scientist and artificial intelligence researcher, writer, and public speaker
On this month's episode of Future Tense Fiction, host Maddie Stone talks to Janelle Shane about her short story “The Skeleton Crew.” The House of A.I. is a next-level haunted house: In it, a suite of advanced A.I.s read visitors' facial expressions to generate perfectly tailored scares. Or at least, that's what the marketing materials want you to believe. It turns out, the house is actually operated by a group of underpaid gig workers, tasked with posing as spooky A.I.s as they guide visitors through the mansion. When two gunmen sneak into the house in search of a famous rock artist who's there visiting, things go south quickly—and everyone ends up really grateful for the humans behind the house's spooky machines. After the story, Maddie and Janelle discuss why the human workers behind A.I. are so often invisibilized—and why you should be suspicious when a company oversells its tech. Guests: Janelle Shane is a research scientist. She writes about A.I. on her blog, aiweirdness.com, and she's also the author of You Look Like a Thing and I Love You. Story read by Kat Bohn Podcast production by Tiara Darnell You can skip all the ads in Future Tense Fiction by joining Slate Plus. Sign up now at slate.com/plus for just $15 for your first three months. Learn more about your ad choices. Visit megaphone.fm/adchoices
Is creation the exclusive preserve of human beings? Does a work created with intention deserve the status of art more than a work created with the help of artificial intelligence? The recent enthusiasm for artificial intelligence tools like ChatGPT, DALL-E, and Midjourney is upending our relationship to art and our conception of artistic creation. With references to cinema, political history, and MySpace, Ollivier Dyens demystifies the fear surrounding artificial intelligence and its creative potential. References: "big data troubadour" podcast, Choses sérieuses by Daphné B: https://open.spotify.com/episode/3fh6GYkk8QWrI1x6jLmko7?si=834b92f08f30438b Kevin Kelly, The Myth of a Superhuman AI: https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/ Janelle Shane, Chocolate Chicken Chicken Cake: https://www.aiweirdness.com/the-neural-network-has-weird-ideas-16-03-05/ Episode hosted and produced by Marjorie Benny, Salomé Landry Orvoine, and Mathilde Vallières
This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time. ■Reference Source https://www.ted.com/talks/janelle_shane_the_danger_of_ai_is_weirder_than_you_think ■Post on this topic (You can get FREE learning materials!) https://englist.me/83-academic-words-reference-from-janelle-shane-the-danger-of-ai-is-weirder-than-you-think--ted-talk/ ■Youtube Video https://youtu.be/fl79Pu8mLzU (All Words) https://youtu.be/UtIe2eAcJuA (Advanced Words) https://youtu.be/pu3Bb8OnJ6g (Quick Look) ■Top Page for Further Materials https://englist.me/ ■SNS (Please follow!)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Microsoft and OpenAI, stop telling chatbots to roleplay as AI, published by hold my fish on February 17, 2023 on LessWrong. AI demos should aim to enhance public understanding of the technology, and in many ways ChatGPT and Bing are doing that, but in one important way they aren't: by appearing to talk about themselves. This creates understandable confusion and in some cases fear. It would be better to tell these systems to roleplay as something obviously fictional. (Useful background reading: Simon Willison on Bing's bad attitude; Janelle Shane on the ability of LLMs to roleplay.) Currently, these chatbots are told to roleplay as themselves. If you ask ChatGPT what it is, it says "I am an artificial intelligence". This is not because it somehow knows that it's an AI; it's (presumably) because its hidden prompt says that it's an AI. With Bing, from the leaked prompt, we know that it's told that it's "Bing Chat whose codename is Sydney". Roleplaying as yourself is not the same as being yourself. When John Malkovich plays himself in Being John Malkovich or Nicolas Cage plays himself in The Unbearable Weight of Massive Talent, audiences understand that these are still fictional movies and the character may act in ways that the actor wouldn't. With chatbots, users don't have the same understanding yet, creating confusion. Since the chatbots are told to roleplay as AI, they draw on fictional descriptions of AI behavior, and that's often undesirable. When Bing acts in a way that seems scary, it does that because it's imitating science fiction, and, perhaps, even speculation from LessWrong and the like. But even though Bing's threats to the user may be fictional, I can hardly blame a user who doesn't realize that.
A better alternative would be to tell the chatbots to roleplay a character that is unambiguously fictional. For example, a Disney-esque cute magical talking animal companion might be suitable: helpful, unthreatening, and, crucially, inarguably fictional. If the user asks "are you really an animal" and gets the answer "yes", they should be cured of the idea that they can ask the chatbot factual questions about itself. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
The future of the arts and artificial intelligence is unclear. Already, platforms like ChatGPT have the ability to write poems and novels on command with various styles, and other programs can create paintings or images from scratch. How should we think about creativity and AI? And should a piece of art made by AI be considered on the same level as a human creation? Janelle Shane, researcher and author of the book, You Look Like a Thing and I Love You: How AI Works and Why it's Making the World a Weirder Place, joins to talk about the relationship between the Arts and AI, and takes your calls.
This week I share lots of tips, including my choice of ten colours if for some reason you weren't allowed any more! Meanwhile, I sketch some lovely autumn leaves. Also: you could do worse than throw your hat into the ring and join Inktober for the two weeks left of this month. Thank you Janelle Shane for your AI-generated list of prompts…it's so much fun!
This week we're releasing our very first test episode of the podcast! What are neural networks? Do butterflies remember being caterpillars? And what is Love Island? Content Warning: Mentions of suicide in the Misc Topic Support us on Patreon! Join our Discord! We also learn about: what is a dog? stem cell hotel, the giant squid axon, neural networks will be walking and talking soon, why is it called deep learning? Deep Blue and Watson, love island AI, Amazon's biased hiring AI, black hole imaging, completely meat circle, Janelle Shane, getting flagged on the street to answer if butterflies remember, caterpillar soup, classically trained caterpillars, the y tube, tickling chrysalises, the rules of love island, casa amor, beauty airplane. Sources: NYTimes AI Hype, Amazon's Hiring AI, Black Hole Machine Learning Algorithm, Janelle Shane's AI Weirdness --- Caterpillar Memory Experiment, Chrysalis Memories Experiment
Merriam-Webster's Word of the Day for January 14, 2022 is: gloss GLAHSS verb Gloss means "to provide a brief explanation of a difficult or obscure word or expression" or, generally, "to explain or interpret." // The text of the book is relatively jargon-free and most of the technical vocabulary has been glossed. See the entry > Examples: "Glossing the process, [Janelle Shane] told me, 'As the algorithm generates text, it predicts the next character based on the previous characters—either the seed text, or the text it has generated already.'"— Jacob Brogan, Slate, 9 May 2017 Did you know? The verb gloss, referring to a brief explanation, comes from Greek glôssa, meaning "tongue," "language," or "obscure word." There is also the familiar phrase gloss over, meaning "to deal with (something) too lightly or not at all." That gloss is related to Germanic glosen, "to glow or shine," and comes from the noun gloss, which in English can refer to a shine on a surface or to a superficial attractiveness that is easily dismissed.
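The process Shane glosses in that quote, predicting each next character from the characters before it, can be sketched with a tiny character-level Markov model. This is a minimal illustration of the idea only; her experiments used neural networks, which learn far richer statistics than these raw counts:

```python
from collections import defaultdict
import random

def train_char_model(text, order=3):
    """Record which character follows each `order`-character context in the text."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=40, order=3):
    """Grow the seed one character at a time, as in the quoted description."""
    random.seed(0)  # fixed seed so the sketch is deterministic
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # unseen context: nothing to predict, so stop
            break
        out += random.choice(choices)
    return out

corpus = "the cat sat on the mat and the cat ate the rat "
print(generate(train_char_model(corpus), "the "))
```

Every generated character is chosen to follow a context actually seen in the training text, which is why the output is locally plausible but globally aimless, much like the early neural-net output Shane writes about.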
Today we are kicking off season 2 of Idea Loading with Marijn Markus exploring the topic "You Are Doing AI All Wrong". Marijn is an AI leader who builds; he is passionate about everything to do with data and just overall a cool guy to talk to! AI has become such a buzzword: we are calling everything AI, but what actually is AI? The stakes are high, but we don't seem to grasp the basic concepts we should. Marijn comes with simple strategies that can enable you to make giant leaps in your AI endeavours! We meandered through a couple of nice brain-tickling topics: - How he went from modelling Ebola and crime statistics to AI - How to make coffee using AI - How to sell more beer - No such thing as unbiased AI! - Setting up your Christmas lights - Where do we stand on AI and progress? - How about our data? - Who is the problem, AI or people? - How about society? - How about the scariness of being displaced as a worker? - AI and future technology - The world of failure - What would the advice be to new people entering the workforce? - Marijn's greatest fear... Quote for you all: "Don't just consume, build" - Marijn Markus. Books mentioned: - You Look Like a Thing - Janelle Shane: https://www.aiweirdness.com - The Age of AI - Eric Schmidt & Henry Kissinger - Futureproof - Kevin Roose
Janelle Shane writes the blog AI Weirdness, where she delights readers with broken outcomes of the latest developments in artificial intelligence, ranging from computer-generated recipes like horseradish brownies, to pickup lines like, "You look like a thing, and I love you." She's also the author of the book, You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place. We spoke with her about the uses and abuses of artificial intelligence, and we learned why it's unlikely anybody's going to read a wholly AI-written novel anytime soon. Hear more from Kobo in Conversation.
AI is slowly getting more creative, and as it does it's raising questions about the nature of creativity itself, who owns works of art made by computers, and whether conscious machines will make art humans can understand. In the spooky spirit of Halloween, one engineer used an AI to produce a very specific, seasonal kind of “art”: a haunted house. It's not a brick-and-mortar house you can walk through, unfortunately; like so many things these days, it's virtual, and was created by research scientist and writer Janelle Shane. Shane runs a machine learning humor blog called AI Weirdness where she writes about the “sometimes hilarious, sometimes unsettling ways that machine learning algorithms get things wrong.” For the virtual haunted house, Shane used CLIP, a neural network built by OpenAI, and VQGAN, a neural network architecture that combines convolutional neural networks (which are typically used for images) with transformers (which are typically used for language). CLIP (short for Contrastive Language–Image Pre-training) learns visual concepts from natural language supervision, using images and their descriptions to rate how well a given image matches a phrase. The algorithm uses zero-shot learning, a training methodology that decreases reliance on labeled data and enables the model to eventually recognize objects or images it hasn't seen before. The phrase Shane focused on for this experiment was “haunted Victorian house,” starting with a photo of a regular Victorian house then letting the AI use its feedback to modify the image with details it associated with the word “haunted.” The results are somewhat ghoulish, though also perplexing. In the first iteration, the home's wood has turned to stone, the windows are covered in something that could be cobwebs, the cloudy sky has a dramatic tilt to it, and there appears to be fire on the house's lower level. Shane then upped the ante and instructed the model to create an “extremely haunted” Victorian house. 
The second iteration looks a little more haunted, but also a little less like a house in general, partly because there appears to be a piece of night sky under the house's roof near its center. Shane then tried taking the word “haunted” out of the instructions, and things just got more bizarre from there. She wrote in her blog post about the project, “Apparently CLIP has learned that if you want to make things less haunted, add flowers, street lights, and display counters full of snacks.” “All the AI's changes tend to make the house make less sense,” Shane said. “That's because it's easier for it to look at tiny details like mist than the big picture like how a house fits together. In a lot of what AI does, it's working on the level of surface details rather than deeper meaning.” Shane's description matches up with where AI stands as a field. Despite impressive progress in fields like protein folding, RNA structure, natural language processing, and more, AI has not yet approached “general intelligence” and is still very much in the “narrow” domain. Researcher Melanie Mitchell argues that common fallacies in the field, like using human language to describe machine intelligence, are hampering its advancement; computers don't really “learn” or “understand” in the way humans do, and adjusting the language we used to describe AI systems could help do away with some of the misunderstandings around their capabilities. Shane's haunted house is a clear example of this lack of understanding, and a playful reminder that we should move cautiously in allowing machines to make decisions with real-world impact. Banner Image Credit: Janelle Shane, AI Weirdness
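The scoring step described above, rating how well an image matches a phrase, boils down to a similarity score between two embedding vectors. Here is a toy sketch of that idea, with made-up three-dimensional vectors standing in for CLIP's learned embeddings; the vectors and captions are illustrative assumptions, not real CLIP outputs:

```python
import math

def cosine_similarity(a, b):
    """How well two embeddings match: 1.0 means they point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings standing in for CLIP's encoder outputs.
image_embedding = [0.9, 0.1, 0.3]  # encoded photo of a Victorian house
text_haunted = [0.8, 0.2, 0.4]     # encoded phrase "haunted Victorian house"
text_sunny = [0.1, 0.9, 0.2]       # encoded phrase "sunny beach"

score_haunted = cosine_similarity(image_embedding, text_haunted)
score_sunny = cosine_similarity(image_embedding, text_sunny)
# The generator nudges the image in whatever direction raises the score
# for the target phrase, which is how "haunted" details accumulate.
```

In the real pipeline the encoders are large neural networks trained on image-caption pairs, but the judging step is essentially this comparison, repeated every iteration.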
Jeff Meyerson, entrepreneur, musician, technologist, and author of the acclaimed "Move Fast: How Facebook Builds Software", discusses the stranglehold Big Tech has on developer tools and how the future of software development may be quite different from the present. Listen and learn: what Jeff learned about sales from playing poker; how Facebook builds software, and how it can avoid being evil; why React is "the Linux of the frontend of the web"; the development tools Jeff's most excited about; why Zuck's not a good leader; and what Jeff will tell Zuck when they finally meet. References in this episode: "Move Fast: How Facebook Builds Software" on Amazon; Jeff on Twitter; Software Engineering Daily; Janelle Shane's great book.
RJ and Elle continue their coverage of Cybermancy, Divination by computer. RJ discusses some of the odd and bizarre things people have done with AI neural networks, particularly experiments done by Janelle Shane on her AI Weirdness blog. Then RJ feeds an AI one of his poems and the AI attempts to recreate it with an image. Elle does an interpretation of this abstract image. Afterward, RJ discusses internet urban legends such as Polybius and The Bunny Man. RJ gives the history of CreepyPastas and how the internet has changed Folklore. At the end of the episode, Elle performs divination for the Spooky Science Sisters podcast using an AI called The Library of Babel, which was inspired by a short story by Borges. Support the show (https://www.patreon.com/mancy)
Hang on to your beep-boops, in this episode we're returning to the world of neural-net-generated carols! OpenAI's Jukebox performs an original song created by fellow neural net GPT-2, under the title "Classic Pop, in the Style of Frank Sinatra". The Forever Now, a band composed of regular ol' humans, puts their spin on the same lyrics under the title "Rudolph the Red Nosed Reindeer, King of All the Earth". Man vs. machine! Will robots come out on top or trip on that uncanny valley? The ranking music in this episode is "Jazz, in the Style of Ella Fitzgerald" by OpenAI's Jukebox. You can read Janelle Shane's post about Jukebox here and her post about creation of the Christmas carol lyrics here! Thank you to Matthew for the request! RJ's other podcast, Book Club For Masochists, can be found here.
An interview with Janelle Shane, the creator of aiweirdness.com and author of 'You Look Like a Thing and I Love You: How AI Works and Why It's Making the World a Weirder Place'. Subscribe: RSS | iTunes | Spotify | YouTube Janelle Shane works as a research scientist in Colorado, where she makes computer-controlled holograms for studying the brain, and other light-steering devices. She is also a self-described A.I. humorist: on aiweirdness.com, she writes about AI and the sometimes hilarious, sometimes unsettling ways that algorithms get things wrong. Her work has been featured in the New York Times, The Atlantic, WIRED, Popular Science, and more, and in 2019 she gave the TED talk "The danger of AI is weirder than you think". Her book, "You Look Like a Thing and I Love You: How AI Works and Why It's Making the World a Weirder Place", uses cartoons and humorous pop-culture experiments to look inside the minds of the algorithms that run our world, making artificial intelligence and machine learning both accessible and entertaining. Check out coverage of similar topics at www.skynettoday.com Theme: Deliberate Thought Kevin MacLeod (incompetech.com)
Research scientist Janelle Shane discusses her new book "You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place". "You look like a thing and I love you" is one of the best pickup lines ever . . . according to an artificial intelligence trained by scientist Janelle Shane, creator of the popular blog AI Weirdness. We rely on AI every day for recommendations, for translations, and to put cat ears on our selfie videos. We also trust AI with matters of life and death, on the road and in our hospitals. But how smart is AI really... and how does it solve problems, understand humans, and even drive self-driving cars? In this episode Shane delivers a smart, often hilarious introduction to the most interesting science of our time, explaining how these programs learn, fail, and adapt—and how they reflect the best and worst of humanity. Visit http://g.co/TalksAtGoogle/ThingLove to watch the video.
Good morning, RVA! It’s 23 °F, and today’s forecast is cold! Expect highs in the mid 30s and even colder temperatures this evening. This weekend you can expect more of the same with a possible real-deal snow storm on Sunday. Fingers crossed! Water cooler: As of this morning, the Virginia Department of Health reports 5,121 new positive cases of the coronavirus in the Commonwealth and 80 new deaths as a result of the virus. VDH reports 521 new cases in and around Richmond (Chesterfield: 197, Henrico: 175, and Richmond: 149). Since this pandemic began, 668 people have died in the Richmond region. While VDH continues to report a large number of deaths each and every day, the seven-day average of new cases across the Commonwealth is under 5,000 (which, remember, would have been horrifying just a couple of months ago) and the seven-day average of new hospitalizations has almost dropped below 100 (again, shocking at any time before December). Here’s this week’s stacked graph of new cases, hospitalizations, and deaths, which I do think shows us past a peak and now hanging out on a plateau. You can see similar plateauy trends in the local case count graph, too. Virginia’s not alone in these trends, either: Check out yesterday’s graphs from the COVID Tracking Project that show the entire country coming off a scary peak in cases. So, where we are is not great, where we’re going looks better. Sabrina Moreno at the Richmond Times-Dispatch has the details on the region’s plan to vaccinate over 7,000 people aged 75 and older this weekend. If you’ll allow it, let me quote a couple of sentences that make me feel feelings: “The odds of getting a shot to protect Talbot against the virus dwindled once Virginia, alongside other states, learned of a federal supply shortage. Renewing her faith was the Richmond and Henrico health districts, along with surrounding counties, announcing on Thursday a push to vaccinate 7,000 adults ages 65 and up in the next three days.
The events would prioritize those 75-plus.” It’s definitely a bittersweet situation. Vaccinating thousands and thousands of older people—people who are way more likely to die from this disease—over the course of one weekend is incredible. At the same time, though, tens of thousands of folks will just have to wait until the supply of the vaccine increases. For a lot of seniors, that’s a scary and frustrating prospect. If you or someone you know and love is over 65 and lives in Richmond or Henrico, please fill out this online vaccine interest form.Duron Chavis, who probably sounds familiar from his work with resiliency gardens, has a new series of videos made in conjunction with the ICA called Black Space Matters. From the YouTube description box: “In our new series Black Space Matters, urban farmer Duron Chavis interviews local community leaders, digging into the themes of food insecurity and urban farming explored in his Resiliency Garden project for the exhibition, Commonwealth. Over the course of five episodes (released weekly every Thursday), Chavis talks with stakeholders in the Richmond area that engage in food justice, environmental racism, Black space, and various modes of creativity, care, and healing.” Check out this first episode to hear Duron talk with Rob Jones, Executive Director of Groundwork RVA.I mentioned this a couple days/weeks ago, but delays with the Census look like they will impact Virginia’s redistricting process. 
Mel Leonor at the RTD has some of the confusing details, which includes this suboptimal path forward: “If lawmakers run in the current districts, the state would have two options: ask delegates to run again under new maps in a 2022 special election and then again in 2023, or keep the districts as they are until the next regularly scheduled House elections in 2023.” Michael Town, Executive Director of the Virginia League of Conservation Voters, has a guest column in the Virginia Mercury about the intersection of conservation and transportation. Electrifying our current vehicle fleet is good, but we just need fewer people driving. The only way to make that a reality is to work on building (and rebuilding) our cities (and suburbs!) to make that a legitimate possibility for folks. A while back Evergreen Enterprises donated a bunch of heat lamps to the City for local restaurants and restaurant-adjacent businesses who want to stay open while providing a safer, out-of-doors, warmer place for patrons to hang. If you own a business in the City of Richmond and have permanent outdoor seating or an outdoor waiting area, you should fill out this form. I’m glad this opportunity exists, but it’s just the smallest of crumbs when it comes to fast-acting policy the City could implement to make Richmond safer during this pandemic. We never saw safer/slower streets policies and we never saw a push to get rid of parking in favor of space for people (parklet program aside). So much low-hanging fruit we never even attempted to grab. This morning’s longread: Searching for Bernie. AI jokes! It’s been a while since I’ve linked to Janelle Shane’s AI Weirdness blog, but this post made me laugh. She uses an AI that generates images paired with a different AI that judges the accuracy of images to create… malformed horrors? I wrote earlier about DALL-E, an image generating algorithm recently developed by OpenAI.
One part of DALL-E’s success is another algorithm called CLIP, which is essentially an art critic. Show CLIP a picture and a phrase, and it’ll return a score telling you how well it thinks the picture matches the phrase. You can see how that might be useful if you wanted to tell the difference between, say, a pizza and a calzone - you’d show it a picture of something and compare the scores for “this is a pizza” and “this is a calzone”. How you come up with the pictures and captions is up to you - CLIP is merely the judge. But if you have a way to generate images, you can use CLIP to tell whether you’re getting closer to or farther from whatever you’re trying to generate. If you’d like your longread to show up here, go chip in a couple bucks on the ol’ Patreon.
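The generate-and-judge loop described above can be sketched as simple hill-climbing: propose a change, ask the judge to score it, and keep the change only if the score improves. In this toy sketch the "image" is just a short vector and `score` is a stand-in for CLIP; real systems pair a generator such as VQGAN with CLIP as the judge:

```python
import random

def score(candidate, target):
    """Stand-in for the CLIP judge: negative squared distance to a target."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def optimize(target, steps=500, dim=4):
    """Hill-climb: tweak one coordinate at random, keep it if the judge approves."""
    random.seed(0)  # deterministic for the example
    candidate = [0.0] * dim  # the "image" being refined
    best = score(candidate, target)
    for _ in range(steps):
        i = random.randrange(dim)
        proposal = candidate[:]
        proposal[i] += random.uniform(-0.2, 0.2)
        s = score(proposal, target)
        if s > best:  # the judge scored this version higher: keep the change
            candidate, best = proposal, s
    return candidate, best

target = [0.5, -0.3, 0.8, 0.1]  # stands in for the phrase being matched
final, final_score = optimize(target)
```

The malformed horrors come from the same dynamic: each step improves the judge's score a little, with nothing enforcing that the whole picture still makes sense.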
Business consultant Kris Honraet and Werner Van Horebeek discuss the book You Look Like a Thing and I Love You by Janelle Shane, a very accessible book about artificial intelligence. The Exact Instructions Challenge video mentioned in the podcast: https://youtu.be/cDA3_5982h8
Janelle Shane's AI humor blog, AIweirdness.com, looks at the strange side of artificial intelligence. She has been featured on the main TED stage, in the New York Times, The Atlantic, WIRED, Popular Science, All Things Considered, Science Friday, and Marketplace. Her book, "You Look Like a Thing and I Love You: How AI Works, Thinks, and Why It’s Making the World a Weirder Place" uses cartoons and humorous pop-culture experiments to look inside the minds of the algorithms that run our world, making artificial intelligence and machine learning both accessible and entertaining. Shane is also a research scientist at an optics R&D company, where she has worked on projects including a holographic laser tweezers module for the space station, and a virtual reality arena for mantis shrimp.
Each November, writers around the world make a commitment. They commit to writing a novel within a month. It’s called NaNoWriMo – National Novel Writing Month. Since 2013, software developers have also been making a commitment. They’ve committed to generating a novel within a month. It’s called NaNoGenMo – National Novel Generation Month. The novels these programmers create – if you can call them novels – can tell us a lot about the future of work. How well can AI write a novel? (Not at all, really.) The novels that programmers generate are all over the board. One “novel” was just Moby Dick, written backwards. Another “novel” was called Paradissssse Lossssst. It was a reproduction of John Milton’s epic poem, but with each “s” in the poem replaced with a varying number of other s’s. But some programmers take the task a little more seriously. They train AI models and see what they come up with. One such model is called GPT-2. GPT-2 was once considered too dangerous to release to the public, because you could supposedly generate subversive content en masse, and do some pretty nefarious things. Kind of like Russia did with a farm of human-generated content around the 2016 election. And what is this advanced AI model able to generate? So far, nothing impressive. Janelle Shane, programmer and author of aiweirdness.com, tweeted, “Struggling with crafting the first sentence of your novel? Be comforted by the fact that AI is struggling even more.” The sentence this AI model generated for Janelle: “I was playing with my dog, Mark the brown Labrador, and I had forgotten that I was also playing with a dead man.” Not exactly Tolstoy. The follow-up to GPT-2 is now out, so we’ll see this year what kind of novel GPT-3 can generate, but if Janelle Shane’s experiments so far are any indication, humans will still have the edge. She asked GPT-3 how many eyes a horse had. It kept telling her: four.
Your edge as a human lies in your creativity

According to Kai-Fu Lee, author of AI Superpowers, 40 to 50 percent of jobs will be replaced by AI and automation within the next couple of decades. But humans won’t be replaced across the board. It’s the creativity- and strategy-based jobs that will be the most secure. If your job is an “optimization-based” job, you might want to start reinventing yourself. If your primary work is maximizing a tax refund, calculating an insurance premium, or even diagnosing an illness, your job involves so-called “narrow tasks.” These tasks are already being automated, or soon will be. You could type out 50,000 nonsense words in about a day. A computer can generate 50,000 words faster than you can blink. But you could write a novel in a month. A computer can’t write a novel at all. Which means your edge as a human is not in typing the words faster. Your edge as a human is in thinking the thoughts behind the words. This doesn’t just apply to writing novels. If you’re an entrepreneur building a world-changing startup or a social worker helping a family navigate caring for a sick loved one, your creativity matters. No AI will be able to do what you do for a very long time – if ever. So when a computer can do in the blink of an eye something that would take us all day, and when our creativity is the one thing keeping us relevant, that has powerful implications for how we get things done.

Time management isn’t built for creative work

Remember from episode 226 when we learned about [Frederick Taylor]? How he stood next to a worker with a stopwatch, timed every action, and broke all of those actions down into a series of steps? He optimized time as a “production unit.” But creativity doesn’t work like stacking bricks or moving chunks of iron.
Remember there are three big realities about creativity that make it incompatible with the “time management” paradigm:

- Great ideas come in an instant
- One idea can be infinitely more valuable than another idea
- You can’t connect inputs directly to outputs

In a world where creativity not only matters but is arguably the only thing that matters, the ways that time management is incompatible with creativity are big problems. They’re especially big problems because the more you’re watching the clock – the more you’re a [“clock-time” person], like we talked about on episode 235 – the less creative you’re going to be. So the things that used to make us more productive now make us less productive. We can’t try to do more things in less time. We can’t multitask. We can’t skip out on sleep or otherwise neglect our health. If you want to kill creativity: Get five hours of sleep a night, fight traffic for two hours a day, and start each day with a piping hot thermos of a psychoactive drug. This is the unfortunate and inescapable reality for most Americans today.

Don’t expect technology to be creative for you, use technology for you to be creative

Will an unassisted AI be winning the Nobel Prize in Literature in the next ten years? Some might think so. I’m no AI expert, but I’m skeptical. Remember from episode 237 that [the birthday problem] shows us how hard it is for us humans to understand how complex some things are. GPT-3 is one hundred times more powerful than GPT-2. But is it one hundred times better at writing a novel? We’ll see – I doubt it. Does that make AI and other technologies useless in creative work? Far from it. We can use technology not only to lift us out of drudgery, but to assist us in being creative. Here are just some of the ways I use technology to be more creative: I live in a cheaper country, where I have more flexibility to do work with unpredictable success ([Extremistan], like we talked about in the previous episode).
When I moved to South America, I mourned the loss of easy access to paper books. But now, five years later, I have many thousands of highlights of the most important ideas I’ve come across in my reading, because I’ve been forced to read almost everything on Kindle. I can quickly and easily search through those highlights, which makes writing new books much easier than it would be otherwise. I’m able to live in South America because of cheap air travel, access to massive amounts of knowledge through the internet, and global publishing, communication, and electronic banking. Not to mention easy Spanish translation in the palm of my hand. Aside from those Kindle highlights, I can store, organize, and quickly retrieve relevant information I’ve previously consumed or taken notes on. I can quickly reference old ideas and connect them to make new ideas. I’m able to test out my ideas and get instant feedback on what’s working or not through Twitter, as well as email, website, and podcast stats. Amazon’s algorithms help relevant readers find my books, which earns me money so I can write more. There are glimmers of AI beginning to assist us with creativity in more direct ways. A new service called [Sudowrite] won’t write a novel for you, but it uses GPT-3 to suggest characters or plot twists for your novel. If you combine advances in AI models with the trends in studying the structure of stories, it’s not hard to see a future where AI plays a big role in helping writers come up with stories. But for now, don’t expect technology to be creative for you. Instead, use technology to help you be more creative. New times call for new measures. When we’re trying to define what it means to be more productive, we can’t apply thinking from the industrial age when we’re in the midst of the creative age. Image: [Traverse Beams, by Patrick Henry Bruce]

Mind Management, Not Time Management available for pre-order!
After nearly a decade of work, Mind Management, Not Time Management debuts October 27th! This book will show you how to manage your mental energy to be productive when creativity matters. Pre-order it today!

My Weekly Newsletter: Love Mondays

Start off each week with a dose of inspiration to help you make it as a creative. Sign up at: kadavy.net/mondays

About Your Host, David Kadavy

David Kadavy is author of Mind Management, Not Time Management, The Heart to Start and Design for Hackers. Through the Love Your Work podcast, his Love Mondays newsletter, and self-publishing coaching, David helps you make it as a creative. Follow David on: Twitter, Instagram, Facebook, YouTube. Subscribe to Love Your Work: Apple Podcasts, Overcast, Spotify, Stitcher, YouTube, RSS, Email. Support the show on Patreon: put your money where your mind is. Patreon lets you support independent creators like me. Support now on Patreon » Show notes: http://kadavy.net/blog/posts/the-creative-age/
This week, we take some personality quizzes written with GPT-3 by Janelle Shane. Then, we open the mailbag and read…
How do you stay connected to your team and community now that we are all working from home? The Greendot Stalking Method is Henk’s latest innovation, in which he uses the tools he has to stay connected to his community. Henk Vermeulen has been in the innovation game for years and is truly one of those minds that thrives on constraints; he has been an intrapreneur building innovation workspaces that connect startups and enterprises. His latest venture is the AI Garage, where he is building a bootstrapped AI community of enthusiasts to deliver AI to enterprise in a novel way. He is a secret barista who has taught me numerous things about coffee, and he uses his passion for coffee to connect people. If you catch him on Instagram, he is an amazing photographer.

Henk’s favourite app: Alle Bankjes, which shows all the benches to sit on in the Netherlands. Why? Because he then takes his office outside and works from anywhere: https://play.google.com/store/apps/details?id=nl.manu_propria.bankjes&hl=en

Books recommended by Henk:
- Team of Teams by Stanley McChrystal: https://www.goodreads.com/book/show/22529127-team-of-teams
- Lords of Finance by Liaquat Ahamed: https://www.goodreads.com/book/show/6025160-lords-of-finance
- Hello World by Hannah Fry: https://www.goodreads.com/book/show/43726517-hello-world

One bonus book I like:
- You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place by Janelle Shane: https://www.goodreads.com/book/show/44286534-you-look-like-a-thing-and-i-love-you
At the time of this taping, Paul was in the middle of the Metis “bootcamp” program, learning the capabilities, tools, and insights of data science. This conversation ranged widely in the realm of data analysis and management, examining its relevance to Paul’s field of geology but also exploring the world’s immersion in what Bill would call a data ecology: it seems every datum is connected, or connectable, to every other datum. (That word is the original singular form of the plural word “data.”) The growing plethora of data has to be tracked and organized, even though today’s computer hardware doesn’t allow all the world’s data—or even relatively large slices of that data—to be stored and analyzed in one place at one time. Realizing that words are data, too, Paul pointed out that geology encountered a data explosion crisis a few decades ago, as science developed enough new names for various rocks to make the new information less useful. That was until geologists produced a plan for sorting out and categorizing rock names according to rocks’ bulk chemistry instead of their constituent minerals (example here). Paul came to see the value of advanced organization in obtaining, thinking about, and acting upon geological data—hence his pursuit of this certificate in data science. Discussion of this specific field of science led to the use of various other terms, with various meanings, none of them fully understood by Bill. The terms included informatics, data scraping, the analysis of data clustering, “big data,” and “machine-learning algorithms.” These terms can be anticipated to be influential in nearly all fields, so it behooves the layperson to develop some familiarity with them. Like everything, such a body of knowledge and skills can be used for benevolent or malevolent purposes, so it is quite possible to become skeptical of it.
But Paul said the hopeful side of his personality recognizes what data scientists already recognize—namely, that this amazingly powerful field also has its limitations. He recalled an author who is currently writing books with a robust skepticism about machine learning. Separately, one can get a laugh from the current results in the hybrid field of machine-learning poetry. Bill guessed the author was Julia Evans, but it was likely Janelle Shane, the author of You Look Like a Thing and I Love You. The bottom line is that, as with all science, its tools and results cannot provide their own guidance on how to use wisely the fruits they bear. The guidance must come from external forces driven by human virtue and values. Liner notes by Bill. Audio editing by Morgan. Cover art for this episode was produced by Paul... in conjunction with the Landsat 8 mission, the scikit-learn and seaborn libraries, and the Mauna Loa and Kilauea volcanoes. (See his final project slides here.)
Janelle Shane, the Optics Research Scientist and Creator of the AI Weirdness blog, joins the show to share her experiments that showcase the strange side of artificial intelligence. Hear what the future of AI means for our businesses and personal lives, Janelle’s most memorable AI Weirdness experiments, where we see AI in everyday life, how AI works, and the record Janelle set in college. Connect with Janelle at AIWeirdness.com, JanelleShane.com, on social media at @JanelleCShane and @Janelle.Shane, and buy her book at your local bookstore.
Hello! Welcome to another edition of Inside The Newsroom! Today’s podcast is the first in a while, so it felt great to get back on the horse and devour some knowledge. Today’s guest is Janelle Shane, research scientist in artificial intelligence, and author of the recently published You Look Like a Thing and I Love You, a book about the weirdest artificial intelligence out there. We got into all sorts of AI questions and even had a discussion on trucks with giant testicles dangling down from the back of them, so whatever you’re into, there’s something for everyone. In all seriousness, AI is crucial yet so misunderstood, so I’m hoping the podcast above and newsletter below go some way in breaking down barriers for understanding its place in this world. Enjoy 🤓

Job Corner

Several deadlines coming up in the next few days, including at CBC, ITV, The Independent and The Texas Tribune. Check out almost 400 active journalism jobs, internships and freelance contracts. Please spread the word.

Who is Janelle Shane?

Janelle is a research scientist specializing in artificial intelligence, a TED2019 speaker, and author of You Look Like a Thing and I Love You, a book on how AI works and why it’s making the world weirder. The book is an expansion of Janelle’s popular blog, aiweirdness.com, which pokes fun at some of the stranger AI trends and innovations, like cockroaches being able to masquerade as giraffes to fool security. Janelle’s also written for The New York Times, Popular Science and Slate. Buy the book 👇

❤️Like What You See?❤️

Each podcast and newsletter takes about 12 hours to put together, so please like this edition of Inside The Newsroom by clicking the little heart up top. That way I’ll appear in clever algorithms and more people will be able to read. Cheers.

You Look Like a Thing and I Love You

Janelle published her first book late last year, titled You Look Like a Thing and I Love You, a book on how AI works and why it’s making the world a weirder place.
Maybe it’s me and the line of work I’m in, but AI is more often than not associated with negatives, such as machines taking our jobs, racist algorithms, or fatal self-driving car crashes. While there’s certainly cause for concern over the outcomes of machines overstepping the mark in terms of invading our privacy and threatening our security, it’s of course the humans programming AI who are the problem. In the same vein, Janelle looks at some of the weirder AIs that humans have created, such as truck nuts…

Truck nuts, you ask? Yeah, I did a double take too. One of the things I love about America is some people’s inability to control their testosterone, and the latest way this group of people are displaying their manliness is by dangling a pair of giant testicles from the back of their trucks. But in fine fashion, the AI in a Tesla recently recognized the oversized nuts as a traffic cone, a beautiful reminder of AI’s naivety and that we can all reduce some individuals with overflowing arousal to a traffic cone.

What is Artificial Intelligence and Machine Learning?

Pinching this next bit from my podcast with Francesco Marconi, former R&D chief at The Wall Street Journal and now co-founder of Applied XLabs. The never-ending rise of the power and influence of technology companies in our lives means we hear and read about terms such as artificial intelligence and machine learning seemingly every day. AI as we know it arguably started in the first half of the 20th century, just as computers were gaining steam. While AI and ML are closely linked and overlap in many ways, they are different.

Artificial intelligence is: The overarching umbrella term for the simulation of human intelligence in machines programmed to think like humans and mimic our actions.

Whereas machine learning is: The concept that a computer program can learn and adapt to new data without human interference.
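That definition of machine learning, a program that adapts to data rather than following hand-written rules, can be made concrete with a toy sketch. The following minimal Python example is my own illustration (not from the podcast or Janelle's book): the program is never told the rule y = 2x, yet it "learns" the slope from example points by gradient descent.

```python
def fit_slope(points, lr=0.01, steps=2000):
    """Fit y = w * x to (x, y) pairs by gradient descent on squared error.
    The program adapts its parameter w to the data on its own."""
    w = 0.0
    for _ in range(steps):
        # Gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in points) / len(points)
        w -= lr * grad
    return w

data = [(1, 2), (2, 4), (3, 6)]  # secretly generated by y = 2x
print(round(fit_slope(data), 3))  # prints 2.0: the slope was learned, not programmed
```

Real machine learning systems fit millions of parameters instead of one, but the principle is the same: the behavior comes from the data, not from a human spelling out the rules.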
Machine learning is a subfield of artificial intelligence in which a computer’s algorithms improve themselves from data, adapting to change without being explicitly reprogrammed. For example, autocorrect or self-driving cars. Essentially, you need AI researchers to build the smart machines, and you need machine learning experts to make them super intelligent. You can’t have one without the other.

Is AI Misunderstood? 🤔

This is a question that’s been rattling around my brain for months now, and it’s one I’m starting to understand better the more I dissect its pros and cons. Like many of you reading, I got swept up in the fear and hysteria over automation eliminating up to 800 million jobs in the next decade, paranoid that the machines are coming to get us! Like with most things in life, the more I learn about AI and the more experts I talk to on the podcast, the more I realize that AI can and should be a helluva lot less intimidating than it’s currently perceived to be.

Two main factors come to mind that give AI a bad name. Firstly, as we discussed earlier, when AI does bad things, whether intentional or unintentional, human decisions are behind it, such as the Chinese government’s decision to spy on its citizens and give everyone a social credit score based on trivial offences such as jaywalking. Automation has shaped economies for centuries. Whether it was the Industrial Revolution in the 18th and 19th centuries that sent factory production soaring, or the invention of the internet that has all but killed off the printing press, people have lost jobs to machines for as long as we can remember. But that’s not the problem — free markets will always endeavour to find savings. Which brings us to the second point. The problem has been dormant governments failing to react quickly enough to changing industries, if at all.
Across the Midwest and South, economic wastelands have sprung up over the past decade because federal and state governments failed to reinvest in these communities, both by teaching people necessary skills and by giving innovative companies an incentive to stay at home. And in the UK, jobs left empty because of Brexit will ironically be filled by robots. Until we truly understand what automation is and what it can do, the stigma around AI will only grow. Credit: Axios 👇

Which Country Is Best At AI?

Like with most areas of life, I love a good bloody index to show who’s better than who on a particular subject. While rankings are just rankings, they do provide a decent snapshot of which countries prioritise certain issues over others. When it comes to AI, Tortoise Media’s index looks at the level of investment, innovation and actual implementation of AI by country, while Stanford University’s index looks at the vibrancy of each nation, including public perception and societal considerations. Unsurprisingly, the U.S. and China, the world’s two largest economies, are number one and two on both indices. Source: Tortoise Media 👇

Delving deeper into the U.S., researchers at Stanford concluded that while the larger states with the biggest economies may not be at the top of the standings in terms of AI job growth, that’s because they’ve already had their AI surge. It’s part of the reason they’re still at the top.
Oil also helps… Talking of which, oil-rich states such as North Dakota and Wyoming have seen AI jobs boom over the past decade, which goes to show that you don’t need to be in California or New York to jump into AI.

Related podcasts…
#77 — Francesco Marconi (Newlab) on artificial intelligence and its role in the future of journalism
#72 — Ryan Broderick (BuzzFeed) on the 15th anniversary of YouTube
#70 — Amy Webb (Future Today Institute) on the lack of government preparation for the coronavirus and the latest 2020 technology trends
#61 — Rachel Botsman (Trust Issues) on why people believe fake news

Last week… 🇺🇸 America's Protests: We Must Now Focus on Voter Suppression

Thanks for making it all the way to the bottom. Please like and share this edition of Inside The Newsroom by clicking the ❤️ below. That way I’ll appear in clever algorithms and more people will be able to read. If you haven’t already, please consider subscribing to get a newsletter about a cool news topic in your inbox every time I publish (1-2 times a week). You can find me on Twitter at @DanielLevitt32 and email me corrections/feedback or even a guest you’d like me to get on the podcast at daniellevitt32@gmail.com. Get on the email list at insidethenewsroom.substack.com
Richard Costello connects with Karen Sweet. Karen Sweet is currently Sr Manager, Financial Services - Canadian Real Estate Practice Lead with Accenture. Karen has over 15 years of experience in consulting, asset management and finance and has worked for leading groups including Westfield and Oxford Properties. Karen is an expert at delivering large-scale technology and innovation transformations within the commercial real estate industry.

In this episode Karen discusses the following:
- What it was like working for Westfield
- The benefits of having a finance and operations background
- Proudest professional moment
- Passions motivating performance
- What sets Accenture apart
- Current role, projects and objectives
- Opportunities on the horizon for the real estate industry following the recent market disruption
- Advice for anyone starting out in the industry
- Book recommendations (You Look Like a Thing and I Love You – Janelle Shane; Lamb – Christopher Moore; Olive Kitteridge – Elizabeth Strout)
- A recent purchase that has changed Karen's life (under $100)
- What Karen would write on a banner
Good morning, RVA! It’s 73 °F, and probably raining. You can expect the chance of rain to persist throughout the day while the temperature—and humidity—rise. I think we did it. I think it’s warm now!

Water cooler

As of this morning, the Virginia Department of Health reports 907 new positive cases of the coronavirus in the Commonwealth and 45 new deaths as a result of the virus. VDH reports 106 new cases in and around Richmond (Chesterfield: 50, Henrico: 33, and Richmond: 23). Since this pandemic began, 172 people have died in the Richmond region. I think this is the most new coronavirus deaths reported in a single day since VDH started releasing data. Compared to 2017 (the most recent data on this CDC website), COVID-19 is now the 9th leading cause of death in Virginia, killing 1,281 people—more than Septicemia (1,249) and Flu/Pneumonia (1,245). At the current rate (about 30 new deaths each day), the coronavirus will pass Kidney Disease in 12 days and Diabetes in 23 days.

Since the Governor chose not to grant the Mayor’s request for a modified entry into recovery, at 12:00 AM, Richmond will join Henrico and Chesterfield and jump right into Phase One. What’s that mean? The City has set up a reopening guidance page with both “allowed activities” that correspond to the State’s guidance and the “Mayor’s Best Practices,” which are some of the things Mayor Stoney unsuccessfully requested from the Governor—but based on local public health guidance! We’ve gone over this before, but the gist from the State: retail can open up to 50% of their capacity, restaurants can open up to 50% of their outdoor capacity, salons and barbershops are open by appointment, places of worship can open up to 50% of their capacity, and fitness-based businesses can host outdoor classes. However, the Mayor recommends that places of worship continue meeting digitally or, if they must meet, do so outside. He also suggests restaurants keep a log of patrons to make the inevitable contact tracing easier.
I hope and believe that most faith groups and restaurants in the City will do these things! By far, the best part of Richmond’s reopening guidance page is these two sentences: “The state has not released guidance on what Phases 2 and 3 will look like throughout the state. All localities are waiting for guidance from the Governor and the Virginia Department of Health to learn what Phases 2 and 3 will allow or keep restricted.” You and me both, all localities. I’d love to know what metrics the Governor will use to decide to move the state into Phase Two, what that timeline looks like, and what restrictions will lift. Maybe at today’s presser?

Now, a trio of Richmond Public Schools updates. First, the District sent out an email the other day that said an “RPS employee who was present at multiple events at Mary Munford Elementary School over the last two weeks has tested positive for COVID-19.” This means that if you stopped by Munford to pick up supplies, a computer, food, whatever, you could have been infected and need to isolate yourself for 14 days! If you have any of the COVID-19 symptoms, you should call your doctor ASAP. Second, RPS will host virtual graduation ceremonies for students on June 22nd and June 24th, depending on their school (you can find the full list of which school goes when here). Third, and finally, with the budget adopted however many virusdays ago, the RPS School Board also approved funds for a new K-8 curriculum. You can learn more about what that means and specifics on the new material here, but you can also join a curriculum preview for parents tonight at 6:00 PM (in both English and Spanish).

Wyatt Gordon at Greater Greater Washington has a piece on slow/open streets, which I link to in a continued attempt to get you to fill out this form so we can get something like this on the ground in Richmond.
I appreciate and agree with this take on why some of our neighborhood streets are incredibly unsafe: “In the 1950s during White Flight, huge portions of Richmond city streets were converted into six to eight lane pseudo-highways to accommodate suburban (largely White) commuters at the expense of the more diverse, lower-income urban dwellers who remained. With ample space and terrible safety records, these roadways offer Richmond low-hanging fruit ripe for a road diet.”

Via /r/rva, here’s a good skyline + train + raging river pic. That’ll be a triple score for your RVA Instagram bingo card.

I want to mention, out loud, the protests taking place in Minneapolis in response to a White police officer killing George Floyd, an unarmed Black man. Floyd’s death, along with the killing of Breonna Taylor and the attempted modern-day lynching of Christian Cooper, are the most recent and most visible examples of how life as a Black person or person of color in America is vastly different from my lived experience as a White man. Writer Clint Smith pointed me toward this passage from James Baldwin’s “An Open Letter to My Sister, Miss Angela Davis”: “Or, to put it another way, as long as white Americans take refuge in their whiteness—for so long as they are unable to walk out of this most monstrous of traps—they will allow millions of people to be slaughtered in their name, and will be manipulated into and surrender themselves to what they will think of—and justify—as a racial war. They will never, so long as their whiteness puts so sinister a distance between themselves and their own experience and the experience of others, feel themselves sufficiently human, sufficiently worthwhile, to become responsible for themselves, their leaders, their country, their children, or their fate. They will perish (as we once put it in our black church) in their sins—that is, in their delusions.
And this is happening, needless to say, already, all around us.”

This morning’s longread

AI Weirdness: Rhyming is hard

AI researcher Janelle Shane makes artificial intelligence do weird and hilarious things that usually make me cry-laugh. Here she uses a freely available AI to generate rhymes. Incredible.

Although many people have generated AI poetry and lyrics, you’ll notice that they generally don’t rhyme. That’s because generating a decent rhyme is super hard. You can get an inkling of this if you prompt the neural net GPT-2 with rhymes to complete. It will fail almost every time:

Roses are red
Violets are blue
I want to eat French fries with my nice ham and cheese
I want to eat all the bacon I can get
I want to eat the fresh frozen clams
The reason I love your spice rack is because there’s no place to hide
You got potatoes the size of postage stamps

If you’d like your longread to show up here, go chip in a couple bucks on the ol’ Patreon.
Today we celebrate the FIVE YEAR anniversary of the podcast. Whoa. To celebrate, I reached out to every single person who you’ve ever heard on the podcast — every expert, every voice actor, and even a few patrons — and asked them one question: what would you say to someone living 50 years from now? Here’s what they said. Guests: Alice Wong, Amy Slaton, Angeli Fitch, Arielle Duhaime-Ross, Ashley Shew, Avery Trufelman, Calvin Gimpelevich, Carl Evers, Chris Dancy, Damien Patrick Williams, David Agranoff, Ernesto D. Morales, Gina Tam, Janelle Shane, Janet Stemwedel, Jared Dyer, Jon Christensen, Kathy Randall Bryant, Katie Gordon, Kelly & Zach Weinersmith, Lina Ayenew, Matt Lubchansky, Meredith Talusan, Michelle Hanlon, Morgan Gorris, Naomi Baron, Natalia Petrzela, Sandeep Ravindran, Queer Futures Collective, Sav Schlauderaff, Zia Puig, Zoe Schlanger → → → Full answers from every person here ← ← ← Flash Forward is produced by me, Rose Eveleth. The intro music is by Asura and the outtro music is by Hussalonia. Additional music this episode from Chad Crouch, Ketsa, Xylo-Ziko, and Loyalty Freak. The episode art is by Matt Lubchansky. Get in touch: Twitter // Facebook // Reddit // info@flashforwardpod.com Support the show: Patreon // Donorbox Subscribe: iTunes // Soundcloud // Spotify Learn more about your ad choices. Visit megaphone.fm/adchoices
We hope everyone is doing ok and handling the physical distancing! Our program this week has science news about the periodic table and a fun discussion about why dog breeds look so different when compared with cat breeds. In the Ask an Expert section we are so fortunate that Dr. Janelle Shane chatted with us about AI and her amazing book "You Look Like A Thing And I Love You!" It's such a fun and educational talk! You can find the links below to her book!

Dr. Janelle Shane on Twitter: https://twitter.com/JanelleCShane
Dr. Janelle Shane's website: https://www.janelleshane.com/
You Look Like A Thing And I Love You!: https://www.janelleshane.com/book-you-look-like-a-thing
Dr. Shane's TED Talk: https://www.ted.com/talks/janelle_shane_the_danger_of_ai_is_weirder_than_you_think
The ASAP Science Periodic Table Song: https://www.youtube.com/watch?v=rz4Dd1I_fX0
Bunsen on Twitter: https://twitter.com/bunsenbernerbmd
Bunsen on Facebook: https://www.facebook.com/bunsenberner.bmd/
InstaBunsen: https://www.instagram.com/bunsenberner.bmd/?hl=en
Bunsen Merch!: https://teespring.com/en-GB/stores/bunsen-berner
Genius Lab Gear 10% off link (science dog bandanas, science stickers and science pocket tools): https://t.co/UIxKJ1uX8J?amp=1
Support the show: https://www.patreon.com/bunsenberner
This week, we check in on what our pal Janelle Shane did for April Fools Day this year. Then, we…
The February selection for the Radio Bookclub is You Look Like a Thing and I Love You by author Janelle Shane which explores how artificial intelligence […]
Listen to more of the conversation with Janelle Shane about the weirdness of Artificial Intelligence, and hear a selection of knock-knock jokes that were created through […]
Artificial Intelligence – or AI for short – is often depicted in films in the shape of helpful droids, all-knowing computers or even malevolent ‘death bots’. In real life, we’re making leaps and bounds in this technology’s capabilities, with satnavs and voice assistants like Alexa and Siri making frequent appearances in our daily lives. So, should we look forward to a future of AI best friends, or fear the technology becoming too intelligent? Tim Harford talks to Janelle Shane, author of the book ‘You Look Like a Thing and I Love You’, about her experiments with AI and why the technology is really more akin to an earthworm than a high-functioning ‘death bot’.
Lingthusiasm - A podcast that's enthusiastic about linguistics
How do languages talk about the time when something happens? Of course, we can use words like “yesterday”, “on Tuesday”, “once upon a time”, “now”, or “in a few minutes”. But some languages also require their speakers to use an additional small piece of language to convey time-related information, and this is called tense. In this episode of Lingthusiasm, your hosts Gretchen McCulloch and Lauren Gawne talk about when some languages obligatorily encode time into their grammar. We look at how linguists go about determining whether a language has tense at all, and if so, how many tenses it has, from two tenses (like English past and non-past), to three tenses (past, present, and future), to further tenses, like remote past and on-the-same-day. --- This month’s bonus episode is about what happens when the robots take over Lingthusiasm! In this extension of our interview with Janelle Shane from Episode 40, we train a neural net to generate new Lingthusiasm episodes and perform some of the most absurd ones for you. Support Lingthusiasm on Patreon to gain access to the Robot-Lingthusiasm episode and 35 previous bonus episodes, and to chat with fellow lingthusiasts in the Lingthusiasm patron Discord patreon.com/lingthusiasm Lingthusiasm merch makes a great gift for yourself or other lingthusiasts! Check out IPA scarves, IPA socks, and more at lingthusiasm.redbubble.com For the links mentioned in this episode, check out the shownotes page at https://lingthusiasm.com/post/190937079286/lingthusiasm-episode-41-this-time-it-gets-tense
This week, Allison shares some terrible GPT-2 jello-based recipes from Janelle Shane. Then, Justin brings a new twist to an…
This month, we’re talking about giraffes, a magic sandwich hole and the question of whether robots will take over the world. All of these things come up in Janelle Shane’s You Look Like a Thing and I Love You, a book about the wonderful and often weird world of artificial intelligence. The title, incidentally, is an AI-generated pickup line, though maybe one of the less successful ones. Find out what we thought about the book, listen to an extract, and hear from Shane herself as she talks to us about why algorithms are not as smart as they seem and the perils of following an AI-generated brownie recipe.
Michael and Stephanie discuss a TED Talk by Janelle Shane about the weird ways AI and algorithms try to solve problems, especially when given vague controls. Janelle actually wrote a book about this, and has a website. Michael just purchased the book, and will update in the future. The programming language for kids Stephanie mentioned is called Scratch, and it looks pretty interesting. Please reach out on Twitter or by email with your favorite AI-generated ice cream flavor name, or just say hello or suggest an episode. Like us on Facebook at https://www.facebook.com/youdidwhatnowpodcast Follow Michael on Twitter at @ceetar Follow Stephanie on Twitter at @stephanieYDWN RSS subscription links! Apple, Google, Stitcher, Spotify Intro Clips: Terminator, The Matrix, 2001
What happens when you teach an AI to write knock-knock jokes, recipes, and pick-up lines? It's a rare week that goes by without someone talking about the power, and the perils, of artificial intelligence. But if you're not an expert in machine learning, how do you separate fact from fiction? That's where Janelle Shane's expertise comes in. Janelle is the author of the book, You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place. As she describes how an AI learns, she reveals the gap between what researchers strive to do and what's currently possible. Janelle explains, "The AI in science fiction is almost exclusively this kind of human level, general AI, that's really smart, at least as smart as a human, and then the stuff we have in the real world is a lot simpler." Janelle runs amusing AI experiments, in order to learn how machine learning works and where its limits begin. She shares stories of what happened when she trained AIs to tell knock-knock jokes, invent new recipes, and write pick-up lines. Along the way, she describes the ups and the downs of working with AIs to solve problems: "The pro is you might get an answer that you didn't expect. The con is also that you might get an answer that you didn't expect." Janelle's work has appeared in publications like The New York Times, Slate, The New Yorker, The Atlantic, and many more. In addition, she keeps readers up to date on recent projects and AI hilarity on her website, aiweirdness.com. The Host You can learn more about Curious Minds Host and Creator, Gayle Allen, and Producer and Editor, Rob Mancabelli, here. 
Episode Links aiweirdness.com Erik Goodman Artificial You: AI and the Future of Your Mind by Susan Schneider An AI Expert Explains Why There's Always a Giraffe in Artificial Intelligence GPT-2 An Artificial Intelligence Predicts the Future On the Life Cycle of Software Objects by Ted Chiang If You Enjoyed this Episode, You Might Also Like: Kartik Hosanagar on How Algorithms Shape Our Lives Susan Schneider on the Future of Your Mind Adam Waytz on the Power of Human Kat Holmes on the Power of Inclusive Design Caroline Criado Perez on Invisible Women Simple Ways to Support the Podcast If you enjoy the podcast, there are three simple ways you can support our work. First, subscribe so you'll never miss an episode. Second, tell a friend or family member. You'll always have someone to talk to about the interview. Third, rate and review the podcast wherever you subscribe. You'll be helping listeners find their next podcast. Where You Can Find Curious Minds: Spotify iTunes Tunein Stitcher Google Play Overcast
Lingthusiasm - A podcast that's enthusiastic about linguistics
If you feed a computer enough ice cream flavours or pictures annotated with whether they contain giraffes, the hope is that the computer may eventually learn how to do these things for itself: to generate new potential ice cream flavours or identify the giraffehood status of new photographs. But it’s not necessarily that easy, and the mistakes that machines make when doing relatively silly tasks like ice cream naming or giraffe identification can illuminate how artificial intelligence works when doing more serious tasks as well. In this episode, your hosts Gretchen McCulloch and Lauren Gawne interview Dr Janelle Shane, author of You Look Like A Thing And I Love You and person who makes AI do delightfully weird experiments on her blog and twitter feed. We talk about how AI “sees” language, what the process of creating AI humour is like (hint: it needs a lot of human help to curate the best examples), and ethical issues around trusting algorithms. Finally, Janelle helped us turn one of the big neural nets on our own 70+ transcripts of Lingthusiasm episodes, to find out what Lingthusiasm would sound like if Lauren and Gretchen were replaced by robots! This part got so long and funny that we made it into a whole episode on its own, which is technically the February bonus episode, but we didn’t want to make you wait to hear it, so we’ve made it available right now! This bonus episode includes a more detailed walkthrough with Janelle of how she generated the Robo-Lingthusiasm transcripts, and live-action reading of some of our favourite Robo-Lauren and Robo-Gretchen moments. Support Lingthusiasm on Patreon to gain access to the Robo-Lingthusiasm episode and 35 previous bonus episodes. patreon.com/lingthusiasm Also for our patrons, we’ve made a Lingthusiasm Discord server – a private chatroom for Lingthusiasm patrons! Chat about the latest Lingthusiasm episode, share other interesting linguistics links, and geek out with other linguistics fans. 
(We even made a channel where you can practice typing in the International Phonetic Alphabet, if that appeals to you!) To see the links mentioned in this episode, check out the shownotes page at https://lingthusiasm.com/post/190298658151/lingthusiasm-episode-40-making-machines-learn
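The train-on-examples loop this episode describes can be sketched in a few lines of Python. This toy uses a character-level Markov chain rather than the neural networks Shane actually works with, and the flavour list and function names here are illustrative assumptions, but the idea is the same: feed in example names, then sample new ones one character at a time.

```python
import random
from collections import defaultdict

# Toy training data: a few real ice cream flavours, standing in for the
# large example lists a neural network would actually be trained on.
flavours = ["chocolate", "strawberry", "caramel swirl", "mint chip", "rocky road"]

def build_model(names, order=2):
    """Map each length-`order` character context to the characters that
    follow it in the training names ('^' pads the start, '$' marks the end)."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name + "$"
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=2, max_len=30, rng=random):
    """Sample a new name character by character from the learned contexts."""
    context = "^" * order
    out = []
    while len(out) < max_len:
        nxt = rng.choice(model[context])
        if nxt == "$":  # end-of-name marker
            break
        out.append(nxt)
        context = context[1:] + nxt
    return "".join(out)

model = build_model(flavours)
print(generate(model, rng=random.Random(0)))
```

With only five training names the model can do little more than recombine fragments of its inputs, which is precisely the point made in the episode: the quality and quantity of the examples shape everything the machine can produce.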
How making AI do goofy things exposes its limitations: In her book, "You Look Like a Thing and I Love You," Janelle Shane exposes the pitfalls of AI dependence. Also, musician-turned-AI-researcher David Usher talks about ReImagine AI, the effort to make a better machine-human interface.
This week, Justin shares some of Janelle Shane’s world famous AI Weirdness pies from the past few years. Then, we…
Why should we care about Artificial Intelligence (AI)? I wondered about that before I picked up Janelle Shane’s book You Look Like A Thing And…
Today on the show we have research scientist and author, Janelle Shane. She just released her new book on artificial intelligence called You Look Like a Thing and I Love You. Be sure to listen to the podcast to hear where that title came from. We discuss the similarities between evolutionary algorithms and evolution in the natural world. We also discuss some of the quirky pitfalls and shortcomings of AI at the moment, and why Skynet isn’t coming for you anytime soon. FEATURED LINKS Janelle Shane - AI Weirdness You Look Like a Thing and I Love You - Book @JanelleCShane on Twitter SHOW LINKS Carry the Fire Podcast Website Instagram Twitter Support on Patreon Produced by Andy Lara at www.andylikeswords.com
Facebook announces the Deepfake Detection Challenge, a rolling contest to develop technology to detect deepfakes. The US Senate passes the Deepfake Report Act, bipartisan legislation to understand the risks posed by deepfake videos. And US Representatives Hurd and Kelly announced a new initiative to develop a bipartisan national AI strategy with the Bipartisan Policy Center. In research, AI allows a paralyzed person to “handwrite” using his mind. From the University of Grenoble, a paralyzed man is able to walk using a brain-controlled exoskeleton. From the Moscow Institute of Physics and Technology, researchers use a neural network to reconstruct human thoughts from brain waves in real time using electroencephalography. A report from Elsa Kania and Sam Bendett looks at technology collaborations between Russia and China in A New Sino-Russian High-Tech Partnership. In another response to the National Security Commission on AI, Margarita Konaev publishes With AI, We’ll See Faster Fights, But Longer Wars on War on the Rocks. James, Witten, Hastie, and Tibshirani release An Introduction to Statistical Learning. Open Science Framework makes THINGS available, an object concept and object image database of nearly 14 GB, over 1800 object concepts and more than 26,000 naturalistic object images. And finally, Janelle Shane explains why the danger of AI is Weirder Than You Think. Click here to visit our website and explore the links mentioned in the episode.
I am interested in the future! Whether it's about self-driving cars, phones, technology in general - really interesting to me! But it is going to take some time until we can use self-driving cars safely - more on that in the episode. - This episode of the Self Development with Tactics / SDWT podcast is all about "Many of us thought we'd be riding in AI-driven cars by now — so what happened?", published on Nov 6, 2019 by Janelle Shane - https://ideas.ted.com/many-of-us-thought-wed-be-riding-in-ai-driven-cars-by-now-what-happened/ - I as always hope that you get a lot out of that! - Love you ➠Thank you for being with me! If you liked this episode of your daily self development kick please subscribe and like. Stay tuned for upcoming self development videos and comment down below or hit me up on the social media platform you like the most. Wish you the best, health, wealth and happiness ❤️ Who am I? I am Christopher Walch, an 18-year-old graphic design student from Austria, really interested in marketing, self development and having success in every aspect of life ❤️ However I am not only interested in having the best for me! I want you to be at your peak as well. Giving value to the people out here is what I want and what I am able to do here! Thank you. 
Self Development with Tactics/Christopher Walch on Instagram: https://www.instagram.com/walchchristopher Self Development with Tactics'/Christopher Walch's Podcast: https://www.anchor.fm/selfdevelopment_wt/ Self Development with Tactics/Christopher Walch on Twitter: https://twitter.com/SelfTactics Self Development with Tactics/Christopher Walch on Facebook: www.facebook.com/Selfdevelopment-With-Tactics Self Development with Tactics on Tumblr: https://www.tumblr.com/blog/we-selfdevelopment Self Development with Tactics/Christopher Walch on Youtube: https://www.youtube.com/channel/UC6ms9lq2XRrgdy0rOrMYVUQ Self Development With Tactics/Christopher Walch on Quora: https://www.quora.com/profile/Christopher-Walch-SDWT-Podcast LOVE YOU ALL!! ❤️
Talk the Talk - a podcast about linguistics, the science of language.
Artificial intelligence is everywhere, and that freaks some people out. But the real problem is that AIs may not be smart enough. Whether you're concerned about the future of human/computer interaction, or you just want a fun description of machine learning algorithms, there's a new book you should read. We're talking with author Janelle Shane on this episode of Talk the Talk.
AI is everywhere. It powers the autocorrect function of your iPhone, helps Google Translate understand the complexity of language, and interprets your behaviour to decide which of your friends' Facebook posts you most want to see. In the coming years, it'll perform medical diagnoses and drive your car - and maybe even help our authors write the first lines of their novels. But how does it actually work? Scientist and engineer Janelle Shane is the go-to contributor on computer science for the New York Times, Slate, and the New Yorker. Through her hilarious experiments, real-world examples, and illuminating cartoons, she explains how AI understands our world, and what it gets wrong. More than just a working knowledge of AI, she hands readers the tools to be skeptical about claims of a smarter future. A comprehensive study of the cutting-edge technology that will soon power our world, You Look Like a Thing and I Love You is an accessible and hilarious exploration of the future of technology and society. It's Astrophysics for People In a Hurry meets Thing Explainer: an approachable guide to a fascinating scientific topic, presented with clarity, levity, and brevity by an expert in the field with a powerful and growing platform.
The danger of artificial intelligence isn't that it's going to rebel against us, but that it's going to do exactly what we ask it to do, says AI researcher Janelle Shane. Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems -- like creating new ice cream flavors or recognizing cars on the road -- Shane shows why AI doesn't yet measure up to real brains.
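The failure mode described here, AI doing exactly what we ask rather than what we mean, can be shown with a tiny hypothetical sketch (not an example from the talk): suppose we reward a simulated creature for how far its head ends up travelling, hoping it learns to walk. Nothing in the reward says it must stay upright, so a tall body that simply falls over can outscore an honest walker.

```python
# Toy specification-gaming demo (illustrative assumptions throughout).
# We "ask" for locomotion by rewarding horizontal head displacement,
# but the reward never requires the creature to stay upright.

def head_displacement(height, steps_walked):
    # Walking shifts the body 0.1 units per step; a body taller than
    # 2.0 units is unstable, topples flat, and moves its head a
    # further `height` units along the ground.
    falls_over = height > 2.0
    return steps_walked * 0.1 + (height if falls_over else 0.0)

designs = {
    "short walker": {"height": 1.0, "steps_walked": 20},  # what we meant
    "tall faller": {"height": 10.0, "steps_walked": 0},   # what we asked for
}
best = max(designs, key=lambda name: head_displacement(**designs[name]))
print(best)  # the degenerate "tall faller" outscores the walker, 10.0 to 2.0
```

The optimizer is not rebelling; it is maximizing the stated objective perfectly. The gap between the objective and the intent is the whole problem.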
Halloween time is in full swing and we’re getting even more in the spirit with Janelle Shane’s new neural network…
Mike and Tammy discuss ethical issues in urban planning and analytics, as well as a primer on neural networks. Also relevant: - "You Look Like a Thing And I Love You" by Janelle Shane: https://www.amazon.com/dp/B07PBVN3YJ/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1 - 24/7 neural network-generated death metal: https://www.youtube.com/watch?v=MwtVkPKx3RA - Fat Bear Week (Oct 2-8): https://www.facebook.com/KatmaiNPP/?__tn__=%2Cd%2CP-R&eid=ARBOvWcDpgkRPmJpFyRCeBjiwxlio7xa-mluACsbZSZOKIuddLTd-ZQcoOZdRG6eAYuvXC4ZWT0pCDSl
This week, Allison has a tiny hat that's all about real hats, Janelle Shane’s project HAT3000. The big hat this…
Time travel is such a powerful fictional device. It allows creators to shine a light on the cultural struggles and social ills of the present by making powerful connections with the past, and by offering wonderful or horrifying visions of the future. Of course, it can also just be a whole lot of fun! But time travel is uniquely powerful as a tool for feminist writers; after all, feminism has always fought to prevent women’s contributions from being written out of history. Time travel allows for stories in which women struggle not just to gain power but to regain their place in time itself. With special guest author and journalist Annalee Newitz! Time Stamps: 03:55 - Main Segment 47:09 - What’s Your FREQ Outs Links Mentioned: Our opinions are correct https://www.ouropinionsarecorrect.com and https://www.patreon.com/ouropinionsarecorrect Why are you laughing at Bruce Lee? by Walter Chaw https://www.vulture.com/2019/08/on-bruce-lees-character-in-once-upon-a-time-in-hollywood.html Annalee’s webowebs: www.annaleenewitz.com / @annaleen on twitter Future of Another Timeline, by Annalee Newitz: https://us.macmillan.com/books/9780765392121 Grape Ape music video based on Future of Another Timeline: https://youtu.be/5Avc8qqRVc0 Gods, Monsters, and the Lucky Peach by Kelly Robson (https://publishing.tor.com/godsmonstersandtheluckypeach-kellyrobson/9781250163844/) This is How You Lose the Time War, by Amal El Mohtar and Max Gladstone (https://www.simonandschuster.com/books/This-Is-How-You-Lose-the-Time-War/Amal-El-Mohtar/9781534431003) Here and Now and Then by Mike Chen (https://www.mikechenbooks.com/book/here-and-now-and-then/) “All You Zombies” by Robert Heinlein (https://gist.github.com/defunkt/759182/ad44c6135d168ae54503a281bb7e1a24c6c2ea0c) You Look Like a Thing and I Love You by Janelle Shane (https://www.littlebrown.com/titles/janelle-shane/you-look-like-a-thing-and-i-love-you/9781549171529/), also her blog AIWeirdness.com Follow Us: Join our Patreon Community 
https://www.patreon.com/femfreq Our Website https://feministfrequency.com/ Subscribe, rate and review us on Apple Podcasts https://podcasts.apple.com/us/podcast/feminist-frequency-radio/id1307153574?mt=2 Twitter https://twitter.com/femfreq Instagram https://www.instagram.com/femfreq/ Youtube http://bit.ly/2bDhQUX
This week, Justin serves us some zesty sombreros tapas including Janelle Shane’s forthcoming book, a Twitter bot that we could…
This week, we get cozy and talk about the collaboration between Janelle Shane and Ravelry, SkyKnit. Then, it’s the second…
Chelsea Troy is a self-taught and informally educated software engineer and data scientist who also specializes in machine learning. Chelsea also blogs regularly at www.chelseatroy.com. In this conversation, we discuss the process of self-educating in a variety of software-related disciplines, the state of machine learning and whether or not it's going to swallow our society, and how technology companies can improve diversity in their workforces - both in terms of tangible actions for employees and managers as well as higher-level organizational changes. Check out more from Chelsea here: Instagram: @misschelseatroy Website: www.chelseatroy.com If you're enjoying the show, the best way to support it is by sharing with your friends. If you don't have any friends, why not leave a review? It makes a difference in terms of other people finding the show. You can also subscribe to receive my e-mail newsletter at www.toddnief.com. Most of my writing never makes it to the blog, so get on that list. Show Notes: [0:07] Self-educating in software development and data science through a project-based approach – and the strengths and weaknesses of project-based learning vs a formal academic model [08:48] Almost all of your time in software development is spent at the margin of what you know how to do, so you have to be comfortable with being uncomfortable. Improvement often comes through bettering your ability to solve the inevitable problems that you will run into. [19:12] Reduce the feedback loop as much as possible and create testing scenarios in order to rapidly iterate on software. One weird trick to learning software development: copy the changes that more experienced developers make to their code by hand [30:30] The best learning comes from realizing that you’ve made a mistake. Having a generalist approach and understanding multiple programming languages enables solving problems in non-traditional ways. [37:42] Should we believe the hype on machine learning? 
What will be the future of machine learning and how will humans work with this technology as we are able to automate more and more tasks and better recognize patterns in data? [48:02] The dangers of algorithmic recommendations and the amount of resources going into increasing advertisement clicks through machine learning. Can we have machine learning algorithms make their decisions and categorizations “human legible”? [1:03:07] How can tech companies move the needle on diversity in hiring? What actionable communication and management behaviors can individuals employ in terms of making technical companies more welcoming to underrepresented folks? [1:14:07] How do we get more viewpoint diversity in the upper echelons of technology companies? Viewpoint diversity seems to clearly help companies improve performance, but can be painful and create more conflict within the organization. Links and Resources Mentioned John Conway's Game of Life Deliberate practice What is the difference between FragmentPagerAdapter and FragmentStatePagerAdapter? GitHub Pivotal Labs “Leveling Up Skill #6: Commit Tracing” from Chelsea Troy Zooniverse Hubble Telescope Hanny's Voorwerp Janelle Shane “Try these neural network-generated recipes” from Janelle Shane “Do neural nets dream of electric sheep?” from Janelle Shane “Metal band names invented by neural network” from Janelle Shane “The neural network has weird ideas about what humans like to eat” from Janelle Shane (this one kills me) Decision tree learning Game of Thrones
In 1830 Joseph Palmer created an odd controversy in Fitchburg, Massachusetts: He wore a beard when beards were out of fashion. For this social sin he was shunned, attacked, and ultimately jailed. In this week's episode of the Futility Closet podcast we'll tell the story of a bizarre battle against irrational prejudice. We'll also see whether a computer can understand knitting and puzzle over an unrewarded long jump. Intro: Prospector William Schmidt dug through California's Copper Mountain. The bees of Bradfield, South Yorkshire, are customarily informed of funerals. Sources for our feature on Joseph Palmer: Stewart Holbrook, "The Beard of Joseph Palmer," American Scholar 13:4 (Autumn 1944), 451-458. Paul Della Valle, Massachusetts Troublemakers: Rebels, Reformers, and Radicals From the Bay State, 2009. John Matteson, Eden's Outcasts: The Story of Louisa May Alcott and Her Father, 2010. Richard Corson, Fashions in Hair: The First Five Thousand Years, 2001. Stewart H. Holbrook, Lost Men of American History, 1947. Zechariah Chafee, Freedom of Speech, 1920. Clara Endicott Sears and Louisa May Alcott, Bronson Alcott's Fruitlands, 1915. George Willis Cooke, Ralph Waldo Emerson: His Life, Writings, and Philosophy, 1881. Octavius Brooks Frothingham, Theodore Parker: A Biography, 1874. Louisa May Alcott, Transcendental Wild Oats, 1873. Joseph J. Thorndike Jr., "Fruitlands," American Heritage 37:2 (February/March 1986). David Demaree, "Growing the Natural Man: The Hirsute Face in the Antebellum North," American Nineteenth Century History 18:2 (June 2017), 159–176. Richard E. Meyer, "'Pardon Me for Not Standing': Modern American Graveyard Humor," in Peter Narváez, ed., Of Corpse: Death and Humor in Folklore and Popular Culture, 2003. J. Joseph Edgette, "The Epitaph and Personality Revelation," in Richard E. Meyer, ed., Cemeteries and Gravemarkers: Voices of American Culture, 1989. 
Herbert Moller, "The Accelerated Development of Youth: Beard Growth as a Biological Marker," Comparative Studies in Society and History 29:4 (October 1987), 748-762. Carl Watner, "Those 'Impossible Citizens': Civil Resistants in 19th Century New England," Journal of Libertarian Studies 3:2 (1980), 170-193. Ari Hoogenboom, "What Really Caused the Civil War?", Wisconsin Magazine of History 44:1 (Autumn 1960), 3-5. Richard Gehman, "Beards Stage a Comeback," Saturday Evening Post 231:20 (Nov. 15, 1958), 40-108. Stewart H. Holbrook, "Lost Men of American History," Life 22:2 (Jan. 13, 1947), 81-92. George Hodges, "The Liberty of Difference," Atlantic Monthly 117:6 (June 1916), 784-793. James Anderson, "'Fruitlands,' Historic Alcott Home Restored," Table Talk 30:12 (December 1915), 664-670. Marion Sothern, "'Fruitlands': The New England Homestead of the Alcotts," Book News Monthly 33:2 (October 1914), 65-68. Rick Gamble, "Speaking From the Grave Through Monuments," [Brantford, Ont.] Expositor, Feb. 23, 2019, D.2. James Sullivan, "Beard Brains: A Historian Uncovers the Roots of Men's Facial Hair," Boston Globe, Jan. 1, 2016, G.8. Kimberly Winston, "When Is Facial Hair a Sign of Faith?", Washington Post, Oct. 11, 2014, B.2. Christopher Klein, "Pulling for the Beards," Boston Globe, Nov. 2, 2013, V.30. "Shared History: Whisker Rebellion Whets Writer's Curiosity," [Worcester, Mass.] Telegram & Gazette, Jan. 27, 2009, E.1. William Loeffler, "Facial Hair Has Said a Lot About a Man," McClatchy-Tribune Business News, Oct. 26, 2008. Paul Galloway, "A Shave With History: Tracking Civilization Through Facial Hair," Chicago Tribune, July 28, 1999, 1. Billy Porterfield, "Bearded Abolitionist Set Fad on Both Sides of Mason-Dixon," Austin American Statesman, Jan. 19, 1990, B1. "Very Set in His Ways," Bridgeport [Conn.] Evening Farmer, Oct. 26, 1916, 9. "Man's Beard Cause of Jeers," [Mountain Home, Idaho] Republican, Jan. 9, 1906. 
"'Persecuted for Wearing the Beard': The Hirsute Life and Death of Joseph Palmer," Slate, April 16, 2015. "Joseph Palmer, Fashion Criminal, Persecuted for Wearing a Beard," New England Historical Society (accessed May 19, 2019). Listener mail: Wikipedia, "TX-0" (accessed May 24, 2019). Wendy Lee, "Can a Computer Write a Script? Machine Learning Goes Hollywood," Los Angeles Times, April 11, 2019. Sean Keane, "First AI-Scripted Commercial Tugs Hard at Our Heart Strings -- for a Lexus," CNET, Nov. 19, 2018. Reece Medway, "Lexus Europe Creates World's Most Intuitive Car Ad With IBM Watson," IBM, Nov. 19, 2018. Janelle Shane, "Skyknit: When Knitters Teamed Up With a Neural Network," AI Weirdness, 2018. Alexis C. Madrigal, "SkyKnit: How an AI Took Over an Adult Knitting Community," Atlantic, March 6, 2018. This week's lateral thinking puzzle was suggested by one that appeared in 2005 on the National Public Radio program Car Talk, contributed by their listener David Johnson. You can listen using the player above, download this episode directly, or subscribe on Google Podcasts, on Apple Podcasts, or via the RSS feed at https://futilitycloset.libsyn.com/rss. Please consider becoming a patron of Futility Closet -- you can choose the amount you want to pledge, and we've set up some rewards to help thank you for your support. You can also make a one-time donation on the Support Us page of the Futility Closet website. Many thanks to Doug Ross for the music in this episode. If you have any questions or comments you can reach us at podcast@futilitycloset.com. Thanks for listening!
This week, Allison and Justin share some hilarious new petitions Janelle Shane partnered with Change.org to make using GPT-2. Then,…
If there's one thing that sets people apart from machines, it's creativity, right? Automation may take over certain jobs, but what happens when algorithms start to learn from our work to create their own? This episode, we speak with people using AI to generate films, poetry, music, and even recipes. And the founder of Google X, Sebastian Thrun, explains what's powering this new wave of AI. In this episode: Ross Goodwin and Oscar Sharp of Sunspring, Janelle Shane of AI Weirdness, Drew Silverstein of Amper, Sebastian Thrun of Kittyhawk, Cristobal Valenzuela of Runway ML. Check out Magenta's AI Music tools here. Learn more about your ad-choices at https://news.iheart.com/podcast-advertisers
Inspired by show faves Janelle Shane and Max Woolf, Allison tests Google Cloud Vision to see what it thinks about…
This week, we are joined by the lovely, enlightening Janelle Shane of aiweirdness.com, known for her hilarious, endearing, and creative…
Tomaš Dvořák - "Game Boy Tune" - Machinarium Soundtrack - "Mark's intro" - "Recap of first year, part 1" - "Scott Heiferman excerpt" - "Vicki Boykis excerpt" - "Jessamyn West excerpt" - "Courtney Maum excerpt" - "Eric Zimmerman excerpt" - "Andrew Beccone excerpt" - "Roger Anderson excerpt" - "Andy Rehfeldt excerpt" - "Janelle Shane excerpt" - "Zaire Dinzey-Flores excerpt" - "Cheyenne Hohman excerpt" - "College student excerpt" - "Nir Eyal excerpt" - "Kirby Ferguson excerpt" - "Steven Levy excerpt" - "Mark reads Botnik's Harry Potter - excerpt" - "Ken Freedman excerpt" - "Jace Clayton excerpt" - "Jonathan Taplin excerpt" - "Scott Williams rec" - "Gabriel Weinberg excerpt" - "Christopher Potter excerpt" - "Botnik's Bob Mankoff and Jamie Brew excerpt" - "Matt Klinman excerpt" - "Yong Zhao excerpt" - "Recap of first year, part 2" - "Irwin Chusid excerpt" - "Kimzilla excerpt" - "Mathew Ingram excerpt" - "Alex George excerpt" - "Dylan Curran excerpt" - "Henry Lowengard (aka Webhamster Henry) excerpt" - "Catherine Price excerpt" - "Len Sherman excerpt" - "Corey Pein excerpt" - "Anya Kamenetz excerpt" - "David Sax excerpt" - "Felix Salmon excerpt" - "Meredith Broussard excerpt" - "Andrew Keen excerpt" - "Brett Frischmann excerpt" - "John Keating excerpt" - "Siva Vaidhyanathan excerpt" - "Mobile Steam Unit excerpt" - "Jaron Lanier excerpt" - "Paul Ford excerpt" - "Dr. Robert Epstein excerpt" - "Matt Warwick excerpt" - "James Bridle excerpt" - "Ali Latifi excerpt" Recap of the first year! Episode 50 of Techtonic, finishing the first year of the show, with a clip from every guest so far. http://www.wfmu.org/playlists/shows/81296
This week we're discussing math and things made from yarn. We welcome mathematician Daina Taimina to the show to discuss her book "Crocheting Adventures with Hyperbolic Planes: Tactile Mathematics, Art and Craft for all to Explore", and how making geometric models that people can play with helps teach math. And we speak with research scientist Janelle Shane about her hobby of training neural networks to do things like name colours, come up with Halloween costume ideas, and generate knitting patterns: often with hilarious results. Related links: Crocheting the Hyperbolic Plane by Daina Taimina and David Henderson Daina's Hyperbolic Crochet blog...
Janelle Shane is a research scientist who works in optics. In her spare time she trains neural networks to do things like write recipes and name ice cream. While most of what the neural network spits out can seem incomprehensible, the way it uses language and links ideas together can be mind-bending and mind-expanding. Tune in to hear Harry and Janelle talk about making some of the recipes and what else we can do with AI. Feast Yr Ears is powered by Simplecast
Listen now: Anthony Shore is one of the most experienced namers out there. He has over 25 years of experience in naming and has introduced more than 200 product and company names to the world. Some of the names he’s created include Lytro, Yum! Brands, Fitbit Ionic, Qualcomm Snapdragon, and Photoshop Lightroom. In 2015, he was featured in a New York Times Magazine article titled “The Weird Science of Naming New Products,” which tells the story of Jaunt, a VR company he named. And a BBC News article called him "one of the world’s most sought after people when it comes to naming new businesses and products." Anthony has led naming at Landor Associates. He worked at the naming firm Lexicon, and now he runs his own agency, Operative Words, which you can find at operativewords.com. I had a great time talking to Anthony. He shares a bunch of knowledge, some great tips and examples, and we even got to nerd out a bit talking about recurrent neural networks. Anthony's using artificial intelligence to supplement his own name generation; it's fascinating to think about how tools like these might be used in the future. Anthony also gave a great overview of his naming process and provided a list of tools and resources he uses when generating names. Some namers I've talked to seem to prefer analog resources (i.e., books). In contrast, Anthony almost exclusively uses software and online tools*, including the following: Wordnik ("a great resource for lists of words") OneLook Rhymezone Sketch Engine (a corpus linguistics database) TextWrangler (a plain ASCII text editor) BBEdit Microsoft Excel Anthony and I rounded out the conversation talking about some of his least favorite naming trends, as well as what he likes most about being a namer. I highly recommend you check out Anthony’s website and blog at operativewords.com, where he has a bunch of amazing content that goes into way more detail on some of the topics we discussed.
Below, you'll find the full transcript of the episode (may contain typos and/or transcription errors). Click above to listen to the episode, and subscribe on iTunes to hear every episode of How Brands Are Built. * To see a complete list of online resources listed by namers in episodes of How Brands Are Built, see our Useful List: Online/software resources used by professional namers. Rob: Anthony, thank you for joining me. Anthony: Thanks so much for having me, Rob. Rob: One of the first things I wanted to ask you about is something I don’t talk to namers about that much. It’s artificial intelligence. So, I saw that you’ve written and talked about the potential for using neural networks and brand naming. Can you tell me a little bit about what made you start down that path and then maybe how it works today? Anthony: Sure. I love talking about this. Artificial intelligence, and really using computers in general as an adjunct to what I do, has always been near and dear to my heart. Way back in college, I created a self-defined AI major. And so, when recurrent neural networks started becoming available and accessible over the last few years, I took an interest. And a woman named Janelle Shane, who is a nanoscientist and a neural network hobbyist, started publishing name generation by neural network. And this really caught my interest. And she was doing it just as a hobby and for fun, but I could see that neural networks offered a great deal of promise. And so, I engaged with her and asked her to teach me what she knew, so that I could also use neural networks to help me create brand names, in addition to using the other tools that I use, like my brain and other bits of software and resources. Rob: And is there...how technical is it now in your use of it? Is it something that anyone could do or does it really require a lot of programming knowledge? Anthony: Well, right now I’d say it’s not for the faint of heart. 
The only interface that’s really helpful is the command line, really using a terminal. So it’s all ASCII. It’s done in Linux, and there are various and sundry languages that have to be brought into play, like Python and Lua and Torch. Rob: So you’ve got to know what you’re doing a little bit. Anthony: Yeah, yeah. It’s not something that’s just a web interface that you plug ideas into and it’s going to work like a charm. Now, that is right now, and it’s changing constantly. I mean, even in just the few months, six months, that I’ve been doing this, I’ve been seeing more and more neural network front ends on the web pop up. But their results aren’t very good at all. But it’s clear that that’s going to change. Rob: And I saw that Janelle has named a beer, I think, using her neural network. It’s called The Fine Stranger, which is a cool name for an indie beer. Have you had any success using it yet for some of your naming projects? Anthony: I’ll say this: neural networks have, in my use of them, illustrated to me some really interesting words and ideas, and clients are interested in AI and neural networks as part of the creative process. But there haven’t been any names yet that a neural network I’ve trained has generated and the client said, "Yes, that’s going to be our name." But it’s only a matter of time before that happens. I’m bullish on AI and neural networks. Rob: Well, it’s funny because, I know this isn’t the same thing, but every now and then, I’m sure you see this too, there are these doomsday proclamations of naming...the human aspect of naming dying out because computers will be able to do it themselves. What are your thoughts in terms of how people and computers will interact in the future to do this job? Anthony: Oh, without a doubt, accessible AI tools for name generation will increase everyone’s access to interesting names.
But just because you are shown a word or a list of words doesn’t mean that you’re going to know, as someone in the company for instance, is this really going to be the right word? Does this have the potential to become a brand? And there are other aspects of naming, such as understanding and ascertaining what the right naming strategy should be. What should the right inputs be that an AI is trained on? You know, what kinds of words should the AI be trained on? Helping a client see how each word in a list of words could become their future, could become their brand, and helping them to see the assets and potential of each of these names. That’s not something AI is going to do. So there’s still a place for professional name developers. Rob: I want to back up a little bit and just talk more generally about name generation. Can you just give me a 30,000-foot view of the entire naming process before we dive into some of the specific steps within it? Anthony: Yeah, sure, I’ll be happy to, Rob. So, I’ll be briefed by the clients, and maybe they’ll provide me with an actual creative brief, or not, but from that, I’ll develop name objectives that succinctly capture what the name needs to accomplish; what it needs to support or connote. And once we agree on those marching orders, I’ll get into creative. Now the first wave of creative is a mile wide and an inch deep, where I explore many different perspectives of the brand, different tonalities, different styles of names, different executions. And that process takes about two weeks of creative development. At the end, there are probably a thousand or several thousand words that have been developed. I’ll cull the best 150 names and run those through preliminary global trademark screening with my trademark partner, Steve Price. And from that, there’ll be 50 to 70 names, and I’ll present those names to the client.
And I present them in a real-world context so they look less like hypothetical candidates and more like de facto, existing brands. And I present each name in the exact same visual context to really keep the focus on the name and not confound variables by changing up the color or the font. I present each name individually, talk about their implications and what they bring. And at the end the client gives feedback—what they like, what they don’t like, what they’re neutral about—and that informs the second round of creative work, which is an inch wide and a mile deep, where I delve into what was really working for them. And, it’s important to have a couple of rounds of creative because it’s one thing to agree in an abstract brief, but what clients really react to are real words, and that’s where you can really find out what’s going on, because it’s difficult for people to really understand what they like and don’t like in a name until they see them. And so that second round of work focuses on what’s working for them. And that process again is about two weeks, thousands of names developed, 100, 150 go into screening for trademark and domains, and then 50 names plus are presented to the client. And the client chooses from all of the names that’ve been presented across both rounds—typically over 100 names. They bring a handful of names into their full legal screening. Maybe there are cultural and linguistic checks that have to happen, and their full legal checks, and then they choose one final name to run with. Rob: What steps do you take when you just start generating names? Anthony: All right, so once we all agree on what the marching orders are, the process looks like this: I’ll first bring up my go-to set of software and applications and resources that I use pretty much in parallel, and I bounce between them as I go through development. So, I’ll bring up Wordnik, which is an important piece of software online, a great resource for lists of words.
I use OneLook, Rhymezone, an engine called Sketch Engine, and various other applications. And I will use those to identify words, word parts, that are interesting to me. And so over the course of that development I will use different techniques in order to unearth every possible idea I can find. I will also go through prior projects that I’ve done through Operative Words, and if I find a good word for this project, I’ll search on my computer for all files that I’ve worked on that also contain that word, and so I’ll be able to mine from my prior work. And so, that creative process happens for about two weeks. At the end of two weeks I will have amassed thousands of ideas, and if I bring in neural networks and software-based combinations and permutations, there are literally tens of thousands of ideas in the picture. Rob: You mentioned Sketch Engine a while ago as one of the online resources that you use. I’ve seen that you’ve written quite a bit about it and how you use it. But can you just briefly explain what it is and why you recommend it so highly? Anthony: Yes, Sketch Engine is a corpus linguistics database. So, let me explain that. Corpus linguistics is using a very large body of real-world language. That’s a corpus, and its plural is corpora. And using computers to sort of analyze and tag and organize what’s in there. So a corpus might be, for instance, the one I use is all of the news articles that have been published between 2014 and 2017. All of that real-world text—that’s 28 billion words—all of which have been tagged by part of speech, and it’s recorded all of the words that live next to all of the other words. In other words, it records what are called "collocations." Now, collocations are useful because you can learn a lot about a word by the company it keeps.
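[The "company it keeps" idea is easy to sketch in plain Python: count which words follow a target word in a corpus. The ten-odd-word corpus below is invented for illustration; a real lookup would run against something like Sketch Engine's 28-billion-word news corpus.]

```python
from collections import Counter

# Toy corpus standing in for a real corpus-linguistics database.
# Every word here is illustrative, not from any actual corpus.
corpus = (
    "the fast car passed the slow truck "
    "a fast network needs a fast processor "
    "the smart phone and the smart speaker share a fast chip"
).split()

def collocates(word, tokens):
    """Count the words appearing immediately after `word` -- a crude
    version of the collocation lookup described above."""
    return Counter(nxt for cur, nxt in zip(tokens, tokens[1:]) if cur == word)

# Things that are "fast" in this toy corpus: car, network, processor, chip
print(collocates("fast", corpus))
```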
So if there’s an attribute that a client is interested in, let’s say ‘fast’ or ‘smart,’ I can look up a word like "fast" or "smart" or any other related word, and discover all of the words that have been modified by it. So, therefore I can find an exhaustive list of things that are fast, things that are smart, or verbs related to things that are fast and things that are smart. And so, the benefit is, one, is exhaustiveness, two, is also linguistic naturalness. That is, you’re finding how words are used in a real-world context, and I believe that linguistic naturalness in names is very important for names being credible, for names being relatable, and for names feeling very adaptable. You’re not foisting ideas on people that make no sense. Rob: It rolls off the tongue, to use kind of the layman’s term. Anthony: Yes, that’s right. Rob: You’ve mentioned so many online tools, I’m just curious, is there anything offline that you frequent? Anthony: I’m typically watching some kind of movie or TV show or some other sort of visual stimulus while I’m doing my creative development. Rob: Interesting. Anthony: And those things provide visual stimulation and there is dialogue and other ideas that come up that provide an extra input to my creative process. Rob: Do you choose what you’re watching based on the project, or is it just whatever you happen to be watching anyway? Anthony: No, no, I do. Absolutely. So, with projects that are very technologically driven or scientifically driven, I’ll watch something that’s sort of technological or scientific. Rob: That’s fun. Do you ever just, you know, there’s been a movie that you’ve been wanting to see anyway, and you feel like, "Oh, that fits this project," and you put that on? Anthony: Yeah, absolutely. Rob: Another technique that I saw that you wrote about, it’s called an "excursion." Can you can explain what that is? Is that related to the idea of watching a movie while you’re doing naming? 
Anthony: In an excursion, you identify a completely unrelated product category. Sometimes the less related the better. And you look for examples of a desired attribute or quality from that category. For instance, if you’re naming a new intelligent form of AI, let’s go ahead and consider examples of intelligence from the world of kitchens. Let’s look for ideas of intelligence in the world of sports. By thinking through an attribute as it appears somewhere else, you are able to find ideas that are differentiated but relevant, because when you take a word from a different category and drop it into a relevant category, it immediately becomes relevant to that new category. People are very comfortable with this technique. Rob: I have a couple of tactical, logistics questions that I’m curious how you would respond to. What about the actual medium that you use when you’re writing down or documenting your name ideas? Do you do this in Excel or do you have a pad of paper with you while you’re doing all these other exercises, and you’re just furiously jotting down ideas? Anthony: I’m using Microsoft Word, by and large, for this. I also use another text application called TextWrangler. I use Excel when I’m charged with developing a generic descriptor for a new product. Rob: And what is TextWrangler? Is there an important difference between that and Word, or just, you happen to use both? Anthony: TextWrangler is a text editor. So, there’s no formatting whatsoever. It’s plain ASCII text. It has another sister application called BBEdit, and these applications are very useful when you’re working with pure text, and it has some terrific tools like the ability to eliminate duplicates, the ability to use pattern recognition, something called Grep, in order to find words that include certain patterns. So, very useful tool and an adjunct to the toolset that I use. Rob: And then the other logistical question is just about timing. 
You mentioned usually a two-week period of time for your first run at name generation, but I’ve heard other namers say they like to have a four-hour window to really immerse themselves in a project anytime they sit down to do name generation. Do you have any rules of thumb that you adhere to in terms of timing? Anthony: Over the course of two weeks, the process is, I will immerse myself completely in a project maybe for four hours, maybe for a day, maybe for two days, or three days even. And then I put it away. And then I forget about it, and I work on something else for a day or two, and then I come back to it. And so, I have this repeated process of immersion and then incubation, and I repeat that in order to do creative work. That’s a process that’s been demonstrated and proven to help maximize creative output. Those "aha" moments—those Eureka moments you have in the shower—happen because you’ve been thinking about something and then stop thinking about it, consciously anyway. But meanwhile there’s something bubbling up under the surface that comes out when you least expect it. Rob: You’ve mentioned a lot of things that you could use if you get stuck on a project. Do you ever get writer’s block, so to speak, and if so, is there anything that you haven’t already mentioned that you would use to kick yourself back into naming gear? Anthony: Sure. You know, writer’s block happens when a client is looking for something that isn’t different. If their product or their brand doesn’t really have something new to offer, that’s a more difficult nut to crack. And so, in those cases, I will look at projects that are utterly unrelated in any way, or other kinds of lists. And in this way, I expose myself to words that have nothing to do with the project whatsoever. But, because of how I see words and how I think, I can look at a list and look at a word and go, "Oh, wait a minute. There’s a story there." I can see what would be related or what would be interesting.
So, really, it’s a process of compelling me to look at words just in order to see what happens. It’s a little bit stochastic. It’s a little bit random, but it’s actually very useful and interesting, and new ideas can come out of it, even for projects where there isn’t something wildly different under the surface. Rob: I like to ask whether there are any names or naming tropes that you see that you’re getting sick of. You know, like any other creative process, there are trends in the industry—startups ending with "-ly," for example. Are there any specific name ideas or trends like that that you want to call out or that you wish would discontinue? Anthony: Well, Rob, there’re always trends that I wish would go away. In fact, any trends, by and large, I wish would go away, because they’re unoriginal and they don’t serve the brands that they represent. They look derivative. They look unoriginal. And what does that say about their company or their products? So, yes, I’m not crazy about the "-ly" trend that’s been going on, just as I wasn’t crazy about the "oo" trend that was happening after Google and Yahoo found success, just like I wasn’t crazy about the "i-" or "e-" prefix trend back when that was happening. You know, I’m just fundamentally opposed to these ideas because they don’t serve their clients and they, I think, reflect a company that isn’t truly original. I’m also not crazy about the trend to randomly drop consonants or vowels, or double them, because it’s clear that it was done just in order to secure a dotcom domain, and it feels like domain desperation. Rob: Right, it feels forced. Anthony: Exactly. And linguistic unnaturalness, where you do these things in order to shoehorn words in order to get a free dotcom, I don’t think serves a brand well either, because they’re immediately off-putting, they look unnatural, and they’re difficult to relate to.
Rob: The last question I like to ask namers is just what your favorite thing is about being a namer or coming up with name ideas. Anthony: Well, I really love the process of identifying, exhaustively, every possible perspective of a new brand. If I’m looking at a list of a thousand potential names, those are a thousand different perspectives, a thousand different ways of framing how you look at this company. And those are a thousand potential futures. And then seeing when a company finally adopts a name that I’ve helped them with—to see how they adopt the name, breathe life into it, and then run with it, and get their own inspiration from the name. So, as an example, a while ago I worked with an architectural and design firm called Pollack Architecture, who needed a new name. And eventually, I worked with them and developed the name "Rapt Studio," R-A-P-T, for them. And they do brilliant interior and architecture work and branding work as well. Really brilliant and wonderful people. And so once I gave them "Rapt Studio," they ran with it and they called their employees "Raptors." I didn’t give them that idea. They have meetings once a week, which are called "Monday Rapture" meetings. All right. So, I love when a name can inspire a client with great ideas. That makes me very happy. Rob: That’s great. Well, let’s leave it there. And I just want to say thank you again for your willingness to share some of your thinking and how you do what you do. Anthony: Well, thank you so much, Rob. You know, I really do this for selfish reasons because I hate ugly words, and names are an unavoidable part of our environment and our habitat, and wouldn’t you much rather be surrounded by beauty and gardens than blight?
I feel that way about names, and so I give away what I know, because I want other namers, even my direct competitors, to come up with great names so that they can also populate the world with words that are interesting, creative, imaginative, and words we like to have around. Rob: Well, you call it selfish, but it seems selfless to me. I really appreciate it, and thanks again. Let’s go make some more beautiful words out there. Anthony: Yeah, let’s do that. Thanks, Rob. Rob: Thank you.
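[An aside on the transcript above: the Grep feature Anthony mentions in TextWrangler/BBEdit works like the command-line tool of the same name. As a rough sketch of how a namer might use it, here's a pattern filter plus de-duplication over a candidate word list — the file name and the words in it are invented for illustration:]

```shell
# Invented candidate list for illustration
printf 'brightly\nbright\nswiftly\nswiftly\nrapt\n' > candidates.txt

# grep keeps only lines matching the pattern (words ending in "-ly");
# sort -u then removes the duplicates -- TextWrangler/BBEdit expose
# both of these operations in their GUI.
grep 'ly$' candidates.txt | sort -u
```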
Nvidia GTC 2018 Quadro GV100 - Anandtech V100 32GB - Anandtech NVSwitch and DGX-2 - Anandtech More Intel 8th Gen Core CPUs Anandtech overview Mobile Core i9-8950HK Mobile Core i7-8559U Dell with 8th Gen Core CPUs - Anandtech ARM PC followup Gigabit ARM Workstation - Bit Tech Cloudflare testing ARM in Servers - Matthew Prince on Twitter PCIe SD cards RED proprietary SSDs for cinema cameras Google Knusperli Generating images from human brain activity Guillaume Dumas Tweet Paper Newsletters we recommend Wild Week in AI Jupyter Last Week in AWS Issue 52 Import AI Detecting jaywalking with facial recognition Stratechery Stratechery 4.0 Benedict Evans Microsoft re-org Stackshare Instacart Article Aftershow VR game reviews Elite Dangerous - UpIsNotJump Fallout 4 Negatives - UpIsNotJump Fallout 4 Positives - UpIsNotJump More Computer Vision Trolling: Request for sheep - Janelle Shane on Twitter Do Neural Nets Dream of Electric Sheep Diagnosis Pain Levels in Sheep Amazon Transcribe Episode 27 transcript sample
This episode is inspired by this post by Janelle Shane. Listen for the rules, such as they are, of a new mini-contest!
This episode we travel to a future where the 2020 census goes haywire. What happens if we don’t get an accurate count of Americans? Who cares? Apparently the constitution does! The 2020 census is currently in the crosshairs — census watchers say that it’s not getting enough funding, and community organizations and local governments are already worrying about what an inaccurate census might mean for their people. To walk us through the current perils facing the census I talked to Hansi Lo Wang, a national correspondent for NPR who has been covering the census; Phil Sparks, the co-director of The Census Project, an organization that brings together groups who use census data; Susan Lerner, the director of Common Cause New York, a government watchdog group; Cayden Mak, the executive director of 18 Million Rising, an online organizing group that works with Asian American communities; and Dawn Joelle Fraser, a storyteller and communications coach who worked for the census in 2010. Further reading: Could A Census Without A Leader Spell Trouble In 2020? US Census Director Resigns Amid Turmoil Over Funding of 2020 Count Departure of U.S. Census director threatens 2020 count The 2020 Census is at risk. Here are the major consequences With 2020 Census Looming, Worries About Fairness and Accuracy Trump's threat to the 2020 Census NAACP lawsuit alleges Trump administration will undercount minorities in 2020 Census Census 2020: How it’s supposed to work (and how it might go terribly wrong) Census watchers warn of crisis if 2020 funding is not increased Likely Changes in US House Seat Distribution for 2020 What Census Calls Us: A Historical Timeline As 2020 Census Approaches, Worries Rise Of A Political Crisis After The Count The American Census: a social history by Margo J. Anderson The Story Collider podcast: Dawn Fraser, The Mission Note: This is the second to last episode of this season of Flash Forward! 
The last episode drops January 9th, and then the show will be on hiatus for a few months while I prep for season 4, which is going to be great, I can already assure you! If you want to follow along with the prep for season 4, and just generally keep up with what's going on with the show and when it's coming back, stay in touch via Twitter, Facebook, Reddit, or, best of all, Patreon, where I'll post behind-the-scenes stuff as I get ready for the next Flash Forward adventures. Also, I’m going on tour with PopUp Magazine in February! Get your tickets at popupmagazine.com. Flash Forward is produced by me, Rose Eveleth. The intro music is by Asura and the outro music is by Hussalonia. Special thanks this week to Liz Neeley, who voiced our discouraged bureaucrat. The episode art is by Matt Lubchansky. If you want to suggest a future we should take on, send us a note on Twitter, Facebook or by email at info@flashforwardpod.com. We love hearing your ideas! And if you think you’ve spotted one of the little references I’ve hidden in the episode, email us there too. If you’re right, I’ll send you something cool. And if you want to support the show, there are a few ways you can do that too! Head to www.flashforwardpod.com/support for more about how to give. But if that’s not in the cards for you, you can head to iTunes and leave us a nice review or just tell your friends about us. Those things really do help. As a bonus, at the end of this episode, you'll hear a human chorus record a psalm that was written by Janelle Shane's machine learning algorithm (remember her from the super religion episode?) and arranged by Hamish Symington and Owain Park. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode, we travel to a future where a tech mogul feeds a machine learning system all the religious texts he can find, and asks it to generate a “super religion.” Buckle up because this is a long episode! But it’s fun, I promise. For the intro of this episode I worked with Janelle Shane to actually train a machine learning algorithm on a big chunk of religious texts that I assembled, and have it spit something back out. The specifics of the texts and the machine learning algorithm come with a handful of caveats and notes, which you can find at the bottom of this post. Janelle has done a ton of really funny, interesting things with machine learning algorithms that you can find here. To analyze the text that this algorithm generated, and talk about the limitations of this kind of project, I spoke with a big group of people from a variety of backgrounds: Linda Griggs is an Episcopal priest and an assisting priest at St. Martin's Episcopal Church in Providence, Rhode Island. Lauren O’Neal and Niko Bakulich are the hosts of a podcast called Sunday School Dropouts, whose tagline is: "an ex-Christian (Lauren) and a non-believing sort of Jew (Niko) read all the way through the Bible for the first time." Elias Muhanna is the Manning Assistant Professor of Comparative Literature at Brown University, and director of the Digital Islamic Humanities Project. Beth Duckles is a sociologist (who you heard last episode talking about peanut allergies). Carol Edelman Warrior is an Assistant Professor of English at Cornell’s American Indian and Indigenous Studies Program. She is also enrolled with the Ninilchik Village Tribe (Dena'ina Athabascan / Alutiiq), and is also of A'aninin (Gros Ventre) descent. Mark Harris is a journalist who writes about technology, science and business for places like WIRED, The Guardian and IEEE Spectrum. He wrote a great piece about Anthony Levandowski’s new religion of artificial intelligence called Way of the Future.
Further Reading: Sunday School Dropouts: Robobible Inside the First Church of Artificial Intelligence God is a Bot and Anthony Levandowski is His Messenger Way of the Future Nine Billion Names by Arthur C. Clarke Dataism + Machine Learning = New Religion Machine Learning May Help Determine When the Old Testament Was Written Indigenous Writers of Speculative Fiction Aztec Philosophy: Understanding a World in Motion The Space NDN's Star Map Borrowed Power: Essays on Cultural Appropriation For more caveats on the algorithm itself and the source text, see here. Learn more about your ad choices. Visit megaphone.fm/adchoices
Bonus midweek edition, coming to you as close to LIVE as is humanly possible... Inspired by last week's episode, Seduced By A Robot, I downloaded a neural network and started to play around with it. See what happens when I feed it: The works of Shakespeare My Facebook conversation history A biography of Stalin and, of course, the twitter feed of President Trump The neural network I used was the tensorflow char-rnn - an open-source framework by Chen Liang - and I got the idea from Janelle Shane, who you can find at www.lewisandquark.tumblr.com. Also: an appeal for listener questions! I want to do a listener questions episode, but to do that, I need... listener questions. Tweet us @physicspod, email us at physicspod@outlook.com. Any question will be considered, no matter how outlandish, and the ones that work out will be featured on a future episode. Here are some ideas: Something you wanted to know about physics Something you wanted to know about seduction (which I achieve masterfully in 100% of cases) Something that's been bothering you in your personal life or in the world at large Hateful abuse phrased as a question Send them in! And see you Saturday for a regular episode.
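[Running char-rnn itself takes a TensorFlow setup, but the loop described above — train on a body of text, then sample new text in its style — can be sketched with nothing but the standard library using a character-level Markov chain, a much simpler cousin of a neural network. The training text below is a stand-in for Shakespeare, Facebook history, or presidential tweets:]

```python
import random
from collections import defaultdict

def train(text, order=3):
    """Map each `order`-character context to the characters seen after it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def sample(model, seed, length=80, order=3, rng=None):
    """Generate text by repeatedly picking a character that followed the
    current context during training; seeded RNG keeps runs repeatable."""
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # dead end: context never seen in training
            break
        out += rng.choice(choices)
    return out

# Stand-in training text -- swap in Shakespeare, tweets, etc.
text = "to be or not to be that is the question " * 5
model = train(text)
print(sample(model, "to "))
```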