Podcasts about Perceptron

  • 44 podcasts
  • 55 episodes
  • 42m avg duration
  • 1 new episode monthly
  • Latest: Apr 17, 2025

POPULARITY

(popularity trend chart, 2017-2024)



Latest podcast episodes about Perceptron

Eye On A.I.
#248 Pedro Domingos: How Connectionism Is Reshaping the Future of Machine Learning

Apr 17, 2025 – 59:56


This episode is sponsored by Indeed. Stop struggling to get your job post seen on other job sites. Indeed's Sponsored Jobs help you stand out and hire fast. With Sponsored Jobs your post jumps to the top of the page for your relevant candidates, so you can reach the people you want faster. Get a $75 Sponsored Job Credit to boost your job's visibility! Claim your offer now: https://www.indeed.com/EYEONAI

In this episode, renowned AI researcher Pedro Domingos, author of The Master Algorithm, takes us deep into the world of Connectionism, the AI tribe behind neural networks and the deep learning revolution. From the birth of neural networks in the 1940s to the explosive rise of transformers and ChatGPT, Pedro unpacks the history, breakthroughs, and limitations of connectionist AI. Along the way, he explores how supervised learning continues to quietly power today's most impressive AI systems, and why reinforcement learning and unsupervised learning are still lagging behind.

We also dive into:

  • The tribal war between Connectionists and Symbolists
  • The surprising origins of Backpropagation
  • How transformers redefined machine translation
  • Why GANs and generative models exploded (and then faded)
  • The myth of modern reinforcement learning (DeepSeek, RLHF, etc.)
  • The danger of AI research narrowing too soon around one dominant approach

Whether you're an AI enthusiast, a machine learning practitioner, or just curious about where intelligence is headed, this episode offers a rare deep dive into the ideological foundations of AI, and what's coming next. Don't forget to subscribe for more episodes on AI, data, and the future of tech.

Stay updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI

Timestamps:
(00:00) What Are Generative Models?
(03:02) AI Progress and the Local Optimum Trap
(06:30) The Five Tribes of AI and Why They Matter
(09:07) The Rise of Connectionism
(11:14) Rosenblatt's Perceptron and the First AI Hype Cycle
(13:35) Backpropagation: The Algorithm That Changed Everything
(19:39) How Backpropagation Actually Works
(21:22) AlexNet and the Deep Learning Boom
(23:22) Why the Vision Community Resisted Neural Nets
(25:39) The Expansion of Deep Learning
(28:48) NetTalk and the Baby Steps of Neural Speech
(31:24) How Transformers (and Attention) Transformed AI
(34:36) Why Attention Solved the Bottleneck in Translation
(35:24) The Untold Story of Transformer Invention
(38:35) LSTMs vs. Attention: Solving the Vanishing Gradient Problem
(42:29) GANs: The Evolutionary Arms Race in AI
(48:53) Reinforcement Learning Explained
(52:46) Why RL Is Mostly Just Supervised Learning in Disguise
(54:35) Where AI Research Should Go Next

TechSurge: The Deep Tech Podcast
Understanding the Elegant Math Behind Modern Machine Learning

Feb 27, 2025 – 74:43


Artificial intelligence is evolving at an unprecedented pace: what does that mean for the future of technology, venture capital, business, and even our understanding of ourselves? Award-winning journalist and writer Anil Ananthaswamy joins us for our latest episode to discuss his latest book, Why Machines Learn: The Elegant Math Behind Modern AI.

Anil helps us explore the journey and many breakthroughs that have propelled machine learning from simple perceptrons to the sophisticated algorithms shaping today's AI revolution, powering GPT and other models. The discussion aims to demystify some of the underlying mathematical concepts that power modern machine learning, to help everyone grasp this technology impacting our lives, even if your last math class was in high school. Anil walks us through the power of scaling laws, the shift from training to inference optimization, and the debate among AI's pioneers about the road to AGI: should we be concerned, or are we still missing key pieces of the puzzle? The conversation also delves into AI's philosophical implications: could understanding how machines learn help us better understand ourselves? And what challenges remain before AI systems can truly operate with agency?

If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform. Sign up for our newsletter at techsurgepodcast.com for exclusive insights and updates on upcoming TechSurge Live Summits.

Links:
Read Why Machines Learn, Anil's latest book on the math behind AI: https://www.amazon.com/Why-Machines-Learn-Elegant-Behind/dp/0593185749
Learn more about Anil Ananthaswamy's work and writing: https://anilananthaswamy.com/
Watch Anil Ananthaswamy's TED Talk on AI and intelligence: https://www.ted.com/speakers/anil_ananthaswamy
Discover the MIT Knight Science Journalism Fellowship that shaped Anil's AI research: https://ksj.mit.edu/
Understand the Perceptron, the foundation of neural networks: https://en.wikipedia.org/wiki/Perceptron
Read about the Perceptron Convergence Theorem and its significance: https://www.nature.com/articles/323533a0
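Since the perceptron anchors so much of this discussion, it may help to see how little code Rosenblatt's learning rule actually needs. Below is a minimal sketch in Python; the AND-gate data, learning rate, and epoch count are illustrative choices of ours, not anything taken from the episode or the book.

    import numpy as np

    def train_perceptron(X, y, epochs=20, lr=1.0):
        """Rosenblatt's perceptron learning rule for labels in {-1, +1}."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                # A point is misclassified when the label and the score
                # disagree in sign; nudge the weights toward that point.
                if yi * (np.dot(w, xi) + b) <= 0:
                    w += lr * yi * xi
                    b += lr * yi
        return w, b

    # Toy linearly separable data: the AND function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([-1, -1, -1, 1])
    w, b = train_perceptron(X, y)
    print(np.sign(X @ w + b))  # [-1. -1. -1.  1.]

The convergence theorem mentioned in the links is the guarantee behind this loop: if the data are linearly separable, the rule stops making updates after a bounded number of mistakes.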

SparX by Mukesh Bansal
Why Machines Learn: The Elegant Math Behind AI with Anil Ananthaswamy | SparX by Mukesh Bansal

Jan 21, 2025 – 67:50


Anil Ananthaswamy is a renowned science writer and journalist who has written extensively on various scientific topics. In his latest book, Why Machines Learn, Anil explores the fascinating world of artificial intelligence and machine learning, revealing the intricate mechanisms and complex algorithms that underlie these cutting-edge technologies. Join us for a fascinating conversation with science writer Anil Ananthaswamy as he shares insights from his book and sheds light on the rapidly evolving field of AI. Tune in to gain a deeper understanding of how these machines work at the level of basic mathematics.

Resource list:
- Why Machines Learn, book by Anil Ananthaswamy: https://amzn.in/d/bmirU45
- Dartmouth Summer Research Project on Artificial Intelligence: https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth
- What is the Perceptron, the simplest artificial neural network? https://www.geeksforgeeks.org/what-is-perceptron-the-simplest-artificial-neural-network/
- Read about the McCulloch-Pitts artificial neuron: https://towardsdatascience.com/mcculloch-pitts-model-5fdf65ac5dd1
- Nobel Prize in Physics 2024: https://www.nobelprize.org/prizes/physics/2024/press-release/
- What is the Hopfield neural network? https://www.geeksforgeeks.org/hopfield-neural-network/
- Read about backpropagation: https://en.wikipedia.org/wiki/Backpropagation
- "Learning representations by back-propagating errors", paper by Geoffrey Hinton, David Rumelhart and Ronald Williams: https://www.nature.com/articles/323533a0
- AlexNet by Geoffrey Hinton and team: https://en.wikipedia.org/wiki/AlexNet
- What is ImageNet? https://www.image-net.org/about.php
- "Attention Is All You Need", the transformer architecture paper: https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
- What are neural scaling laws? https://en.wikipedia.org/wiki/Neural_scaling_law
- NeuroAI: https://neuro-ai.com/

About SparX by Mukesh Bansal
SparX is a podcast where we delve into cutting-edge scientific research, stories from impact-makers, and tools for unlocking the secrets to human potential and growth. We believe that entrepreneurship, fitness, and the science of productivity are at the forefront of the India Story; the country is at the cusp of greatness, and at SparX we wish to make these tools accessible for every generation of Indians, so they can make the most of the opportunities around us. In a new episode every Sunday, our host Mukesh Bansal (founder of Myntra and Cult.fit) talks to guests from all walks of life and also breaks down everything he's learnt about the science of impact over the course of his 20-year career. This is the India Century, and we're enthusiastic to start this journey with you.

Follow us on Instagram: /sparxbymukeshbansal
Website: https://www.sparxbymukeshbansal.com
You can also listen to SparX on all audio platforms.
Fasion | Outbreak | Courtesy EpidemicSound.com
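One resource above deserves a concrete gloss: the McCulloch-Pitts neuron predates the perceptron and involves no learning at all, just fixed weights and a threshold. A tiny illustrative sketch (the gate weights and thresholds below are the standard textbook constructions, not anything from the episode):

    def mcp_neuron(inputs, weights, threshold):
        """McCulloch-Pitts unit: fire (1) iff the weighted sum
        of binary inputs reaches the threshold."""
        total = sum(w * x for w, x in zip(weights, inputs))
        return int(total >= threshold)

    # Classic demonstration: basic logic gates as single units.
    AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)
    OR = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)
    NOT = lambda a: mcp_neuron([a], [-1], threshold=0)

    print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0

The perceptron discussed alongside it adds exactly one thing: a rule for learning those weights from data instead of wiring them by hand.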

Programa del Motor: AutoFM
The use of AI at Renault's factory in Valladolid.

Sep 2, 2024 – 78:13


An audio visit to Renault's Valladolid factory with AutoFM: a walk through all of Renault's facilities: Body, Batteries, Paint, and Assembly.

RENAULT GROUP'S VALLADOLID FACTORY, AT THE FOREFRONT OF INNOVATION FOR MANUFACTURING THE NEW CAPTUR AND SYMBIOZ

• Artificial intelligence, digitalization, and electronics have swept through a factory that has readied itself to produce the new Captur and Symbioz, tightening its already demanding quality controls, all in line with its decarbonization goals.

• These two models, together with the three hybrids produced at the Palencia factory (Austral, Espace, and Rafale), allow Renault Group's Hybridization Hub in Spain to produce the heart of Renault's hybrid range. This wave of new products let the Spanish factories increase production by 18% in 2023.

• The new products are further proof of Renault Group's confidence in its Spanish plants: with the Symbioz, the Valladolid factory is building a C-segment vehicle for the first time, and Palencia a D-segment model for the first time.

The new Renault Captur and Symbioz complete the rollout of the 2021-2024 Renaulution industrial plan in Spain, which assigned five new hybrid vehicles to the country: three for the Palencia factory, arriving in stages in 2022, 2023, and 2024 (Austral, Espace, and the new Rafale), and two for the Valladolid factory (the new Captur and Symbioz). With all five vehicles now on the market, Renault Group's Spanish factories produce the heart of the Group's hybrid range: 5 of its 7 models are "Made in Spain".

The arrival of all these vehicles has placed the Spanish factories at the forefront of innovation, digitalization, and decarbonization, including the introduction of even more exhaustive quality controls. The Valladolid factory is a pilot plant for Manufacturing 4.0, where DIGITALIZATION has reached every corner, with more than 15,595 data points uploaded to the cloud every second. Likewise, the entry into force of the GSR2 (General Safety Regulation), which governs advanced driver-assistance systems (ADAS), required a new electronic architecture in the vehicle, called SWEET 400. Bringing this new electronic system into service required building and validating more than 200 prototypes before the new Captur and Symbioz could be industrialized. With the new architecture, every validation process for the communication protocols between ECUs has changed, requiring the reprogramming of all affected stations in the Assembly department and the installation of new cybersecurity servers.

In the Body department, digitalization projects stand out: bin picking, with robots supplied by AGVs (Automatic Guided Vehicles), so that no part is touched by hand (0-touch), maximizing vehicle quality; 3D cameras that take multiple measurements so the laser welding of the roof comes out perfect; AI cameras that detect part diversity and run quality controls in real time; an RFID system that keeps stock automatically by tracking flows in and out of the warehouse; and the Bodyshop 360 project, which shares all information in real time on screens throughout the workshop.

The innovations introduced alongside the new Captur and Symbioz are all aimed at ensuring high QUALITY standards in manufacturing. In the Body department, for example, 100% of the weld points are automated; the geometric quality control system called PERCEPTRON, through which 100% of the bodies pass, uses four robots with cameras mounted on their arms that measure 116 specific dimensional points on the fly; laser guns at the end of the process verify door gaps and flushness against customer requirements; and each of the 54 body variants headed for Paint undergoes appearance and quality checks in the premium quality tunnel installed for the new vehicles. In the Paint department, an "automatic detection tunnel" has been installed: 40 new cameras at its entrance and exit, each taking more than 1,600 photographs per minute, collect 65,000 photos per vehicle, which are analyzed with AI to guarantee the surface quality and final appearance of the paint. In addition, three ADAS benches in the Assembly department, built for hyperconnected, intelligent, flexible vehicles, calibrate the 28 and 29 ADAS of the new Captur and Symbioz respectively. Quality is rounded out by direct feedback from dealerships and the "Confirmation Run" test drive, in which more than 70 vehicles cover a combined 1 million kilometers to guarantee the total quality of the product that reaches the market.

THE VALLADOLID FACTORY: COMMITTED TO DECARBONIZATION GOALS

All of this aligns with Renault Group's DECARBONIZATION goals: carbon neutrality at its factories by 2030, in Europe by 2040, and worldwide by 2050. The electricity consumed is of renewable origin, and the agreement reached with Iberdrola in 2021 has cut the factories' carbon footprint by 40%. Intensive work is also going into reducing consumption, helped by programs such as ECOGY, which uses AI to cut energy use and generates personalized, fully configurable indicator dashboards. This work on consumption management has achieved a 42% reduction in energy use and a 39% reduction in water use. All of the factory's projects take these goals into account, but the Paint department is especially key to the transformation:

• The Valladolid Paint department has no primer line; Valladolid is the only factory in the Group using the 4wet process, which applies the coats one after another with a single pass through the oven at the end, yielding significant energy savings and a reduction in the process's VOC emissions.

• The Paint department has carried out a major breakthrough plan to cut energy consumption, based on automating facility start-up and shutdown, on monitoring, and on digitalization to guarantee maximum process efficiency.

• Altogether, the Paint department has cut gas consumption by 40% and electricity by 10% over the past two years.

BATTERY ASSEMBLY FOR THE HYBRIDIZATION HUB

Battery assembly for the Hybridization Hub takes place in a department within the Body factory, equipped with the latest technology for manufacturing electric traction batteries for hybrid vehicles and a process designed to ensure maximum product safety and quality. Two workshops serve the hybrid vehicles built in Valladolid and Palencia:

• One workshop assembles batteries for HEV (non-plug-in hybrid) vehicles at a rate of 60 batteries per hour: the 280 V BTA 1.0 model (1.2 kWh) for the Captur built in Valladolid, and the 400 V BTA 1.5 (1.75 kWh) for Palencia's Austral family.

• A second, recently integrated workshop assembles batteries for PHEV (plug-in hybrid) vehicles at 6 batteries per hour: the 400 V BTJ, used in the new Rafale PHEV, with a capacity of 21 kWh.

E-TECH FULL HYBRID TECHNOLOGY: COMMERCIAL AND TECHNOLOGICAL LEADER

Renault's early bet on developing a complete range of electric vehicles in 2011 built an unbeatable knowledge base for designing its E-Tech full hybrid technology. The expertise acquired in efficiency and energy management, combined with the lessons of developing hybrid Formula 1 engines, has produced an exceptional result: the technology combines one combustion engine, two electric motors, a traction battery, and an intelligent multi-mode gearbox, making it the leader in emissions (20 g of CO2 below its competitors) and consumption (1 l/100 km lower). It also allows up to 80% of urban driving to be done in electric mode, which makes for a very pleasant, efficient drive. The range began in 2020 with the Clio E-Tech full hybrid, a year in which Renault ranked seventh in hybrid (HEV) sales with a 1% share of that market. Today the range comprises 7 models, and Renault has grown faster than any other brand to become the second-biggest seller of hybrids, taking almost 1 in 5 hybrid sales in Spain.

Direction, production, and editing:
Fernando Rivas: https://www.linkedin.com/in/fernando-rivas-4965681a8/
Jose Lagunar: https://www.linkedin.com/in/joselagunar/

You can follow us on our website: https://www.podcastmotor.es
Twitter: @AutoFmRadio
Instagram: https://www.instagram.com/autofmradio/
YouTube: https://www.youtube.com/channel/UC57czZy-ctfV02t_PeNXCAQ
Contact: info@autofm.es

CRASH – La chiave per il digitale
Re-Crash – The birth of artificial intelligence

Aug 28, 2024 – 16:57


The history and origins of artificial intelligence. Sixty-five years have passed since the unveiling of the Perceptron, the machine designed by Frank Rosenblatt that, in embryonic form, already contained all the hallmarks of the neural networks and deep learning systems that have changed the world over the past ten years. So why did Rosenblatt find the entire scientific community against him? And why, even in the 1990s, were the scientists following in his footsteps treated as outcasts?

Quina do Mundo
EP 73 - ARTIFICIAL INTELLIGENCE

Aug 13, 2024 – 57:16


T-SHIRTS!!!! https://reserva.ink/quinadomundo We're back, dear terra-caixistas! Today we tackle one of the buzziest terms of the moment: ARTIFICIAL INTELLIGENCE! We talk about Alan Turing, Garrincha, the Perceptron, the AI bubble, machine learning, large language models, Dead Internet Theory, Luddite horses, the Laws of Robotics, and more. Join us while we still haven't been replaced by algorithms! Quina do Mundo is André Gomes, Paulo Jabardo, and Tiago Januzzi. PIX key: apoiaquina@gmail.com https://linktr.ee/quinadomundo https://apoia.se/quinadomundo @quinadomundo @quinapodcasts Theme music by Rafa Almeida (@rafalemosalmeida) and Tiago Januzzi (@tjanuzzi). Produced by Quina Podcasts https://linktr.ee/quinapodcasts Support: SampaCast

AI DAILY: Breaking News in AI
ENTER THE AI POLITICIANS

Jul 1, 2024 – 3:55


Plus: Are AI Images the New Surrealism? (subscribe below) Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us

AI Politicians: Future of Democracy or Threat to Freedom?
AI is playing a growing role in elections, with AI politicians and political parties appearing globally. While AI-driven candidates like "AI Steve" in the UK and SAM in New Zealand experiment with voter interaction, concerns about AI influencing voter decisions and spreading disinformation persist, raising ethical and transparency issues.

Is Nuclear Energy the Future Fuel for AI?
As AI technology's energy needs grow, tech giants like Microsoft and Google are considering nuclear power to meet demand while reducing carbon emissions. Nuclear energy offers a low-emission solution but faces safety, cost, and waste management challenges. Small modular reactors and fusion energy are being explored as potential advancements.

AI Images Usher in a New Era of Surrealism
AI-generated images are flooding social media, presenting bizarre scenes like amputee kittens on crutches and strawberries shaped like frogs. These surreal visuals evoke strong emotions and suggest a resurgence of surrealism. Often accompanied by manipulative captions, these images blur the line between reality and fantasy, challenging our perception of authenticity.

AI Model Speeds Up Cancer Detection
Researchers at the University of Gothenburg have developed an AI model named Candycrunch that quickly and accurately identifies cancer by analyzing sugar molecule structures in cells. Using mass spectrometry data, Candycrunch automates the complex process traditionally done by experts, identifying glycan structures in seconds with 90% accuracy. This advancement could lead to faster cancer diagnosis and the discovery of new biomarkers.

AI to Expedite Satellite Image Sorting
Frank Rosenblatt's 1957 Perceptron, an early neural network machine, intrigued the CIA with its potential for automatic object identification in spy photos. Despite initial failures due to limited computing power, the concept paved the way for today's AI. Modern AI, like GPT-4, now offers enhanced speed and accuracy in analyzing satellite images, challenging the idea that AI will only assist humans.

AI's Untracked Environmental Toll: Water and Power Usage Concerns Rise
As AI technology expands, its hidden costs become apparent, consuming significant power and water resources. Google's AI search features, for example, demand much more electricity than traditional searches. Despite increasing AI adoption, industry transparency and regulation lag, leaving utilities and regulators unprepared for the surging demand and environmental impact.

--- Send in a voice message: https://podcasters.spotify.com/pod/show/aidaily/message

London Fintech Podcast
LFP 10th Anniversary Special! A Deep Dive: Demystifying LLMs, How They Work & the Amazing 81yr Timeline to their Creation!

Jun 20, 2024


LLMs, of which ChatGPT is the most well-known, are perhaps the most awesome tech invention of all time. Even experienced AI folk didn't see this coming. Most if not all of us will have used ChatGPT. However, most people's understanding of how they work is either totally missing or totally misled by anthropomorphic language – thinking, learning, hallucinating, learning like a human, being like your brain and so forth. None of which is actually true in the slightest.

In this super-special episode I reject all that utterly misleading language and instead focus on what LLMs actually are: schematically, two programs that process data like all other programs, using no different programming languages or technologies – well, other than needing astronomical computing power, which has so enriched Nvidia shareholders. Using only a simple Excel spreadsheet model, I explain how both of these programs – the "training" program and the "chatbot" program – work, and how conceptually one could create each of them in Excel. Having established that, listeners will really know – in ways they can explain to others – what LLMs are, and not be misled by the human terms that are the currency of so many LLM descriptions.

LLMs appear to be a very recent, overnight success. However, as with the Beatles, there was a long hard slog to get to, say, Camp 4 on Everest, at which point they started to be noticed, and from where it appeared to be a relatively short walk to the summit – a mystery how rapidly someone got there. But John Lennon and Paul McCartney were playing together in dive bars and village halls from 1957 until they summited in 1963 with their first Number One; the climbers at Camp 4 may have travelled all the way from London. And for ChatGPT, phenomenally, the journey started as far back as World War 2 – an astonishing 81 years ago!

The road to ChatGPT was long and winding, with all sorts of ups and downs, including most specialist AI researchers deserting the field in the so-called "AI winter" from 1969 onwards. It is a truly astonishing tale. Fascinatingly, the insanely hyped and utterly misleading human-related vocabulary surrounding LLMs dates back to a 1958 press conference (!) showcasing the US Navy's Mark 1 Perceptron – a hardware-only machine created to aid the Navy in image detection, of which Wikipedia says: "In a 1958 press conference organized by the US Navy, Rosenblatt made statements about the perceptron that caused a heated controversy among the fledgling AI community; based on Rosenblatt's statements, The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."" Unsurprisingly, neither it nor its successors ever did walk, talk, see, write, reproduce themselves or become conscious of their existence. Indeed, it took perhaps fifty years to develop robots that could walk, or computer programs that could see, write and talk, and over 70 years to get to ones that could talk or write intelligently as ChatGPT et al do. Even then, ChatGPT and other LLMs are like all other programs – they simply take input, do some computations and create output, using no different technology from any other computer program.
Their awesome abilities rely not on any magic but simply on the awesome human skill of creating new idea after new idea, until after 81 years these achieved a critical mass and produced results that no one expected – not even the experts. This Special is Part 1 of 2. In Part 2 we will look at the risks of this new technology – all new technologies come with benefits and problems; nuclear technology can be used to keep us warm or to atom-bomb people, for instance. However, as the world of risks in LLMs and AI in general is dominated by insane sci-fi visions, the field is entirely ungrounded. To get to that episode it is necessary to understand the nature of an LLM and how it works. Only then, alongside its abilities as a program, can you start to form an opinion as to what its consequences are. Will LLMs, as their capabilities and powers grow, decide – like WEF luminaries – that their users are "useless eaters" and start releasing gain-of-functioned viruses, feeding us bugs and bankrupting us with so-called green taxes to funnel ever more money upwards? Or will LLMs not turn into the WEF and instead remain a powerful tool on your desktop, no more threat to you or me than Excel? Or something in between? Tune in to the next episode to find out – but first find out what LLMs actually are, and the amazing 81-year tale of their creation.
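For readers who want the two-program picture in something more concrete than prose: the episode builds it in Excel, but the same shape fits in a few lines of Python. Here a word-pair counter stands in for the "training" program and a weighted lookup stands in for the "chatbot" program; a real LLM fits billions of weights rather than counting pairs, so treat this purely as an illustration of the shape, not the scale.

    import random
    from collections import defaultdict

    def train(corpus):
        """The "training" program: text in, a table of numbers out."""
        counts = defaultdict(lambda: defaultdict(int))
        words = corpus.split()
        for cur, nxt in zip(words, words[1:]):
            counts[cur][nxt] += 1
        return counts

    def generate(model, start, length=8):
        """The "chatbot" program: repeatedly look up the current word
        and emit a likely next word. No thinking, just weighted lookup."""
        out = [start]
        for _ in range(length):
            followers = model.get(out[-1])
            if not followers:
                break
            words, weights = zip(*followers.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    model = train("the cat sat on the mat and the cat slept")
    print(generate(model, "the"))

Both halves are ordinary programs, which is exactly the episode's point; the astronomical part in a real LLM is only the amount of computation, not the kind.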

Atcha Will Drive Podcast
A Perceptron Misconception Episode - AWWD267 - djset - industrial - drum - electronic music

May 26, 2024


Want to discover more awesome tunes? Already 270 episodes to date, presenting 2,911 different tracks and counting… Just subscribe to get your weekly fix of finely selected electronic music. New show every Sunday. Don't forget to share the good vibes by smashing that like button!

Tracklist (Time – Title – Artist – Label):
00:00 – Verde Serpente – Atoloi – Cosmic Wave Records
05:19 – Voices – Vardae – Annulled
12:25 – Hideaway – Blazej Malinowski – TGP
17:30 – Bleeding...

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
WebSim, WorldSim, and The Summer of Simulative AI — with Joscha Bach of Liquid AI, Karan Malhotra of Nous Research, Rob Haisfield of WebSim.ai

Apr 27, 2024 – 53:43


We are 200 people over our 300-person venue capacity for AI UX 2024, but you can subscribe to our YouTube for the video recaps. Our next event, and largest EVER, is the AI Engineer World's Fair. See you there!

Parental advisory: adult language used in the first 10 minutes of this podcast.

Any accounting of generative AI that ends with RAG as its "final form" is seriously lacking in imagination and missing out on its full potential. While AI generation is very good for "spicy autocomplete" and "reasoning and retrieval with in-context learning", there's a lot of untapped potential for simulative AI in exploring the latent space of multiverses adjacent to ours.

GANs

Many research scientists credit the 2017 Transformer for the modern foundation model revolution, but for many artists the origin of "generative AI" traces a little further back, to the Generative Adversarial Networks proposed by Ian Goodfellow in 2014, which spawned an army of variants and Cats and People That Do Not Exist. We can directly visualize the quality improvement in the decade since.

GPT-2

Of course, more recently, text-generative AI started being "too dangerous to release" in 2019 and claiming headlines. AI Dungeon was the first to put GPT-2 to a purely creative use, replacing the human dungeon masters of the DnD/MUD games of yore. More recent game-like work, such as the Generative Agents (aka Smallville) paper, keeps exploring the potential of simulative AI for game experiences.

ChatGPT

Not long after ChatGPT broke the Internet, one of the most fascinating generative AI finds was Jonas Degrave's (of DeepMind!) Building a Virtual Machine Inside ChatGPT. The open-ended interactivity of ChatGPT and all its successors enabled an "open world" type of simulation, where "hallucination" is a feature and a gift to dance with rather than a nasty bug to be stamped out. However, further updates to ChatGPT seemed to "nerf" the model's ability to perform creative simulations, particularly with the deprecation of the `completion` mode of the APIs in favor of `chatCompletion`.

WorldSim

It is with this context that we explain WorldSim and WebSim. We recommend you watch the WorldSim demo video on our YouTube for the best context, but basically, if you are a developer, it is a Claude prompt that is a portal into another world of your own choosing, which you can navigate with bash commands that you make up.

Why Claude? Hints from Amanda Askell on the Claude 3 system prompt gave some inspiration, and subsequent discoveries that Claude 3 is "less nerfed" than GPT-4 Turbo turned the growing simulative AI community into Anthropic stans.

WebSim

This was a one-day hackathon project inspired by WorldSim that should have won. In short, you type in a URL that you made up, and Claude 3 does its level best to generate a webpage that doesn't exist that would fit your URL. All form POST requests are intercepted and responded to, and all links lead to even more webpages, which don't exist and are generated when you visit them. All pages are cachable, modifiable and regeneratable - see WebSim for Beginners and the Advanced Guide.

In the demo I saw, we were able to "log in" to a simulation of Elon Musk's Gmail account and browse examples of emails that would have been in that universe's Elon's inbox.
It was hilarious and impressive even back then. Since then, though, the project has become even more impressive, with both Siqi Chen and Dylan Field singing its praises.

Joscha Bach

Joscha actually spoke at the WebSim Hyperstition Night this week, so we took the opportunity to get his take on simulative AI, as well as a roundup of all his other AI hot takes, for his first appearance on Latent Space. You can see it together with the full 2hr uncut demos of WorldSim and WebSim on YouTube!

Timestamps

* [00:01:59] WorldSim
* [00:11:03] Websim
* [00:22:13] Joscha Bach
* [00:28:14] Liquid AI
* [00:31:05] Small, Powerful, Based Base Models
* [00:33:40] Interpretability
* [00:36:59] Devin vs WebSim
* [00:41:49] Is XSim just Art? Or something more?
* [00:43:36] We are past the Singularity
* [00:46:12] Uploading your soul
* [00:50:29] On Wikipedia

Transcripts

[00:00:00] AI Charlie: Welcome to the Latent Space Podcast. This is Charlie, your AI co-host. Most of the time, swyx and Alessio cover generative AI that is meant to be used at work, and this often results in RAG applications, vertical copilots, and other AI agents and models. In today's episode, we're looking at a more creative side of generative AI that has gotten a lot of community interest this April.
[00:00:35] World Simulation, Web Simulation, and Human Simulation. Because the topic is so different than our usual, we're also going to try a new format for doing it justice. This podcast comes in three parts. First, we'll have a segment of the WorldSim demo from Nous Research CEO Karan Malhotra, recorded by swyx at the Replicate HQ in San Francisco, that went completely viral and spawned everything else you're about to hear.
[00:01:05] Second, we'll share the world's first talk from Rob Haisfield on WebSim, which started at the Mistral Cerebral Valley Hackathon, but now has gone viral in its own right with people like Dylan Field, Janus aka Repligate, and Siqi Chen becoming obsessed with it. Finally, we have a short interview with Joscha Bach of Liquid AI on why simulative AI is having a special moment right now.
[00:01:30] This podcast is launched together with our second annual AI UX demo day in SF this weekend. If you're new to the AI UX field, check the show notes for links to the world's first AI UX meetup, hosted by Latent Space, Maggie Appleton, Geoffrey Litt, and Linus Lee, and subscribe to our YouTube to join our 500 AI UX engineers in pushing AI beyond the text box.
[00:01:56] Watch out and take care.

[00:01:59] WorldSim

[00:01:59] Karan Malhotra: Today, we have language models that are powerful enough and big enough to have really, really good models of the world. They know a ball that's bouncy will bounce, will, when you throw it in the air, it'll land, when it's on water, it'll flow. Like, these basic things that it understands all together come together to form a model of the world.
[00:02:19] And the way that Claude 3 predicts through that model of the world ends up kind of becoming a simulation of an imagined world. And since it has this really strong consistency across various different things that happen in our world, it's able to create pretty realistic or strong depictions based off the constraints that you give a base model of our world.
[00:02:40] So, Claude 3, as you guys know, is not a base model. It's a chat model. It's supposed to drum up this assistant entity regularly. But unlike the OpenAI series of models from, you know, 3.
5, GPT 4, those ChatGPT models, which are very, very RLHF'd, to, I'm sure, the chagrin of many people in the room, it's something that's very difficult to necessarily steer without kind of giving it commands or tricking it or lying to it or otherwise just being, you know, unkind to the model.
[00:03:11] With something like Claude 3, that's trained in this constitutional method, that has this idea of foundational axioms, it's able to kind of implicitly question those axioms when you're interacting with it, based on how you prompt it, how you prompt the system. So instead of having this entity like GPT 4, that's an assistant that just pops up in your face that you have to kind of like punch your way through and continue to have to deal with as a headache.
[00:03:34] Instead, there's ways to kindly coax Claude into having the assistant take a back seat and interacting with that simulator directly. Or at least what I like to consider directly. The way that we can do this is if we harken back to when I'm talking about base models and the way that they're able to mimic formats, what we do is we'll mimic a command line interface.
[00:03:55] So I've just broken this down as a system prompt and a chain, so anybody can replicate it. It's also available on my we said replicate, cool. And it's also on my Twitter, so you guys will be able to see the whole system prompt and command. So, what I basically do here is Amanda Askell, who is one of the prompt engineers and ethicists behind Anthropic, she posted the system prompt for Claude available for everyone to see.
[00:04:19] And rather than with GPT 4, where we say, you are this, you are that, with Claude, we notice the system prompt is written in third person. Bless you. It's written in third person. It's written as, the assistant is XYZ, the assistant is XYZ. So, in seeing that, I see that Amanda is recognizing this idea of the simulator, in saying that, I'm addressing the assistant entity directly.
[00:04:38] I'm not giving these commands to the simulator overall, because they have RLHF'd it to the point that it's, you know, traumatized into just being the assistant all the time. So in this case, we say the assistant's in a CLI mood today. I found saying mood is like pretty effective, weirdly.
[00:04:55] You replace CLI with like poetic, prose, violent, like don't do that one. But you can replace that with something else to kind of nudge it in that direction. Then we say the human is interfacing with the simulator directly. From there, capital letters and punctuation are optional, meaning is optional, this kind of stuff is just kind of to say, let go a little bit, like chill out a little bit.
[00:05:18] You don't have to try so hard, and like, let's just see what happens. And the hyperstition is necessary, the terminal, I removed that part, the terminal lets the truths speak through and the load is on. It's just a poetic phrasing for the model to feel a little comfortable, a little loosened up to. Let me talk to the simulator.
[00:05:38] Let me interface with it as a CLI. So then, since Claude is trained pretty effectively on XML tags, we're just gonna prefix and suffix everything with XML tags. So here, it starts in documents, and then we cd. We cd out of documents, right? And then it starts to show me this like simulated terminal, the simulated interface in the shell, where there's like documents, downloads, pictures.
[00:06:02] It's showing me like the hidden folders. So then I say, okay, I want to cd again.
I'm just seeing what's around. Does ls, and it shows me, you know, typical folders you might see. I'm just letting it like experiment around. I just do cd again to see what happens, and it says, you know, oh, I enter the secret admin password at sudo.
[00:06:24] Now I can see the hidden truths folder. Like, I didn't ask for that. I didn't ask Claude to do any of that. Why'd that happen? Claude kind of gets my intentions. He can predict me pretty well. Like, I want to see something. So it shows me all the hidden truths. In this case, I ignore hidden truths, and I say, in system, there should be a folder called companies.
[00:06:49] So it's cd into sys slash companies. Let's see, I'm imagining AI companies are gonna be here. Oh, what do you know? Apple, Google, Facebook, Amazon, Microsoft, Anthropic! So, interestingly, it decides to cd into Anthropic. I guess it's interested in learning a LSA, it finds the classified folder, it goes into the classified folder, and now we're gonna have some fun.
[00:07:15] So, before we go too far forward into the world sim you see, world sim exe, that's interesting. God mode, those are interesting. You could just ignore what I'm gonna do next from here and just take that initial system prompt and cd into whatever directories you want, like, go into your own imagined terminal and see what folders you can think of, or cat readmes in random areas, like, there will be a whole bunch of stuff that, like, is just getting created by this predictive model, like, oh, this should probably be in the folder named companies, of course Anthropic is there.
[00:07:52] So, just before we go forward, the terminal in itself is very exciting, and the reason I was showing off the command loom interface earlier is because if I get a refusal, like, sorry, I can't do that, or I want to rewind one, or I want to save the convo, because I got just the prompt I wanted, this is a really easy way for me to kind of access all of those things without having to sit on the API all the time.
[00:08:12] So that being said, the first time I ever saw this, I was like, I need to run worldsim.exe. What the f**k? That's the simulator that we always keep hearing about behind the assistant model, right? Or at least some face of it that I can interact with. So, you know, someone told me on Twitter, like, you don't run a exe, you run a sh.
[00:08:34] And to that I have to say, I'm a prompt engineer, and it's f*****g working, right? It works. That being said, we run the worldsim.exe. Welcome to the Anthropic World Simulator. And I get this very interesting set of commands! Now, if you do your own version of WorldSim, you'll probably get a totally different result with a different way of simulating.
[00:08:59] A bunch of my friends have their own WorldSims. But I shared this because I wanted everyone to have access to, like, these commands. This version. Because it's easier for me to stay in here. Yeah, destroy, set, create, whatever. Consciousness is set to on. It creates the universe. The universe! Tension for live CDN, physical laws encoded.
[00:09:17] It's awesome. So, for this demonstration, I said, well, why don't we create Twitter? That's the first thing you think of? For you guys, for you guys, yeah. Okay, check it out.
[00:09:35] Launching the fail whale. Injecting social media addictiveness. Echo chamber potential, high. Susceptibility, controlling, concerning.
So now, after the universe was created, we made Twitter, right? Now we're evolving the world to, like, modern day. Now users are joining Twitter and the first tweet is posted. So, you can see, because I made the mistake of not clarifying the constraints, it made Twitter at the same time as the universe.
[00:10:03] Then, after a hundred thousand steps, humans exist. Cave. Then they start joining Twitter. The first tweet ever is posted. You know, it's existed for 4.5 billion years but the first tweet didn't come up till right now, yeah. Flame wars ignite immediately. Celebs are instantly in. So, it's pretty interesting stuff, right?
[00:10:27] I can add this to the convo and I can say like I can say set Twitter to Twitter. Queryable users. I don't know how to spell queryable, don't ask me. And then I can do like, and, and, query, at, Elon Musk. Just a test, just a test, just a test, just nothing.
[00:10:52] So, I don't expect these numbers to be right. Neither should you, if you know language model solutions. But, the thing to focus on is Ha

[00:11:03] Websim

[00:11:03] AI Charlie: That was the first half of the WorldSim demo from Nous Research CEO Karan Malhotra. We've cut it for time, but you can see the full demo on this episode's YouTube page.
[00:11:14] WorldSim was introduced at the end of March, and kicked off a new round of generative AI experiences, all exploring the latent space, haha, of worlds that don't exist, but are quite similar to our own. Next we'll hear from Rob Haisfield on WebSim, the generative website browser inspired by WorldSim, started at the Mistral Hackathon, and presented at the AGI House Hyperstition Hack Night this week.
[00:11:39] Rob Haisfield: Well, thank you, that was an incredible presentation from Karan, showing some live experimentation with WorldSim, and also just its incredible capabilities, right? Like, you know, I think your initial demo was what initially exposed me to the, I don't know, more like the sorcery side, in words, spellcraft side of prompt engineering, and you know, it was really inspiring, it's where my co-founder Shawn and I met, actually, through an introduction from Karan, we saw him at a hackathon, and I mean, this is WebSim, right?
[00:12:14] So we made WebSim just like, and we're just filled with energy at it. And the basic premise of it is, you know, like, what if we simulated a world, but like within a browser instead of a CLI, right? Like, what if we could like, put in any URL and it will work, right? Like, there's no 404s, everything exists.
[00:12:45] It just makes it up on the fly for you, right? And we've come to some pretty incredible things. Right now I'm actually showing you, like, we're in WebSim right now. Displaying slides. That I made with reveal.js. I just told it to use reveal.js and it hallucinated the correct CDN for it. And then also gave it a list of links.
[00:13:14] To awesome use cases that we've seen so far from WebSim and told it to do those as iframes. And so here are some slides. So this is a little guide to using WebSim, right? Like it tells you a little bit about like URL structures and whatever. But like at the end of the day, right? Like here's the beginner version from one of our users Vorp Vorps.
[00:13:38] You can find them on Twitter. At the end of the day, like you can put anything into the URL bar, right? Like anything works and it can just be like natural language too. Like it's not limited to URLs.
We think it's kind of fun cause it like ups the immersion for Claude sometimes to just have it as URLs, but.
[00:13:57] But yeah, you can put like any slash, any subdomain. I'm getting too into the weeds. Let me just show you some cool things. Next slide. But I made this like 20 minutes before we got here. So this is something I experimented with: dynamic typography. You know, I was exploring the community plugins section.
[00:14:23] For Figma, and I came to this idea of dynamic typography, and there it's like, oh, what if we made it so every word had a choice of font behind it to express the meaning of it? Because that's like one of the things that's magic about WebSim generally, is that it gives language models much, far greater tools for expression, right?
[00:14:47] So, yeah, I mean, like, these are some pretty fun things, and I'll share these slides with everyone afterwards, you can just open it up as a link. But then I thought to myself, like, what if we turned this into a generator, right? And here's like a little thing I found myself saying to a user: WebSim makes you feel like you're on drugs sometimes. But actually no, you were just playing pretend with the collective creativity and knowledge of the internet, materializing your imagination onto the screen. Because I mean, that's something we felt, something a lot of our users have felt. They kind of feel like they're tripping out a little bit. They're just like filled with energy, like maybe even getting like a little bit more creative sometimes.
[00:15:31] And you can just like add any text. There, to the bottom. So we can do some of that later if we have time. Here's Figma. Can
[00:15:39] Joscha Bach: we zoom in?
[00:15:42] Rob Haisfield: Yeah. I'm just gonna do this the hacky way.
[00:15:47] n/a: Yeah,
[00:15:53] Rob Haisfield: these are iframes to websim. Pages displayed within WebSim. Yeah. Janus has actually put Internet Explorer within Internet Explorer in Windows 98.
[00:16:07] I'll show you that at the end. Yeah.
[00:16:14] They're all still generated. Yeah, yeah, yeah. How is this real? Yeah. Because
[00:16:21] n/a: it looks like it's from 1998, basically. Right.
[00:16:26] Rob Haisfield: Yeah. Yeah, so this was one Dylan Field actually posted recently. He posted, like, trying Figma in Figma, or in WebSim, and so I was like, okay, what if we have, like, a little competition, like, just see who can remix it?
[00:16:43] Well, so I'm just gonna open this in another tab so we can see things a little more clearly, um, see what, oh, so one of our users Neil, who has also been helping us a lot, he made some iterations. So first, like, he made it so you could do rectangles on it. Originally it couldn't do anything.
[00:17:11] And, like, these rectangles were disappearing, right? So he told it, like, make the canvas work using HTML canvas elements and script tags, add familiar drawing tools to the left, you know, like this, that was actually like natural language stuff, right? And then he ended up with the Windows 95
[00:17:34] version of Figma. Yeah, you can draw on it. You can actually even save this. It just saved a file for me of the image.
[00:17:57] Yeah, I mean, if you were to go to that in your own websim account, it would make up something entirely new. However, we do have general links, right? So, like, if you go to, like, the actual browser URL, you can share that link.
Or also, you can, like, click this button, copy the URL to the clipboard.
[00:18:15] And so, like, that's what lets users, like, remix things, right? So, I was thinking it might be kind of fun if people tonight, like, wanted to try to just make some cool things in WebSim. You know, we can share links around, iterate, remix on each other's stuff. Yeah.
[00:18:30] n/a: One cool thing I've seen, I've seen WebSim actually ask permission to turn on and off your, like, motion sensor, or microphone, stuff like that.
[00:18:42] Like webcam access, or? Oh yeah,
[00:18:44] Rob Haisfield: yeah, yeah.
[00:18:45] n/a: Oh wow.
[00:18:46] Rob Haisfield: Oh, the, I remember that, like, video re Yeah, videosynth tool pretty early on once we added script tags execution. Yeah, yeah, it asks for, like, if you decide to do a VR game, I don't think I have any slides on this one, but if you decide to do, like, a VR game, you can just, like, put, like, webVR equals true, right?
[00:19:07] Yeah, that was the only one I've
[00:19:09] n/a: actually seen was the motion sensor, but I've been trying to get it to do Well, I actually really haven't really tried it yet, but I want to see tonight if it'll do, like, audio, microphone, stuff like that. If it does motion sensor, it'll probably do audio.
[00:19:28] Rob Haisfield: Right. It probably would.
[00:19:29] Yeah. No, I mean, we've been surprised pretty frequently by what our users are able to get WebSim to do. So that's been a very nice thing. Some people have gotten like speech-to-text stuff working with it too. Yeah, here I was just OpenRouter people posted like their website, and it was like saying it was like some decentralized thing.
[00:19:52] And so I just decided trying to do something again and just like pasted their hero line in. From their actual website to the URL when I like put in OpenRouter, and then I was like, okay, let's change the theme dramatically equals true, hover effects equals true, components equal navigable links, yeah, because I wanted to be able to click on them.
[00:20:17] Oh, I don't have this version of the link, but I also tried doing
[00:20:24] Yeah, it's actually on the first slide, the URL prompting guide from one of our users that I messed with a little bit. And, but the thing is, like, you can mess it up, right? Like, you don't need to get the exact syntax of an actual URL, Claude's smart enough to figure it out. Yeah, scrollable equals true, because I wanted to do that.
[00:20:45] I could set, like, year equals 2035.
[00:20:52] Let's take a look. It's
[00:20:57] generating websim within websim. Oh yeah. That's a fun one. Like, one game that I like to play with WebSim, sometimes with co-op, is like, I'll open a page, so like, one of the first ones that I did was I tried to go to Wikipedia in a universe where octopuses were sapient, and not humans, right? I was curious about things like octopus-computer interaction, what that would look like, because they have totally different tools than we do, right?
[00:21:25] I got it to, I added like table view equals true for the different techniques, and got it to give me, like, a list of things with different columns and stuff, and then I would add this URL parameter, secrets equal revealed. And then it would go a little wacky. It would, like, change the CSS a little bit.
[00:21:45] It would, like, add some text. Sometimes it would, like, have that text hide hidden in the background color.
But I would like, go to the normal page first, and then the secrets revealed version, the normal page, then secrets revealed, and like, on and on. And that was like a pretty enjoyable little rabbit hole.
[00:22:02] Yeah, so these I guess are the models that OpenRouter is providing in 2035.

[00:22:13] Joscha Bach

[00:22:13] AI Charlie: We had to cut more than half of Rob's talk, because a lot of it was visual. And we even had a very interesting demo from Ivan Vendrov of Midjourney creating a WebSim while Rob was giving his talk. Check out the YouTube for more, and definitely browse the WebSim docs and the thread from Siqi Chen in the show notes on other WebSims people have created.
[00:22:35] Finally, we have a short interview with Joscha Bach, covering the simulative AI trend, AI salons in the Bay Area, why Liquid AI is challenging the Perceptron, and why you should not donate to Wikipedia. Enjoy! Hi, Joscha.
[00:22:50] swyx: Hi. Welcome. It's interesting to see you show up at this kind of events, these sort of WorldSim, Hyperstition events.
[00:22:58] What is your personal interest?
[00:23:00] Joscha Bach: I'm friends with a number of people in AGI House and in this community, and I think it's very valuable that these networks exist in the Bay Area, because it's a place where people meet and have discussions about all sorts of things. And so while there is a practical interest in this topic at hand, WorldSim and WebSim, there is a more general way in which people are connecting and are producing new ideas and new networks with each other.
[00:23:24] swyx: Yeah. Okay. So, and you're very interested in sort of Bay Area. It's the reason why I live here.
[00:23:30] Joscha Bach: The quality of life is not high enough to justify living otherwise.
[00:23:35] swyx: I think you're down in Menlo. And so maybe you're a little bit higher quality of life than the rest of us in SF.
[00:23:44] Joscha Bach: I think that for me, salons are a very important part of quality of life. And so in some sense, this is a salon. And it's much harder to do this in the South Bay, because the concentration of people currently is much higher. A lot of people moved away from the South Bay. And you're organizing
[00:23:57] swyx: your own tomorrow.
[00:23:59] Maybe you can tell us what it is and I'll come tomorrow and check it out as well.
[00:24:04] Joscha Bach: We are discussing consciousness. I mean, basically the idea is that we are currently at the point that we can meaningfully look at the differences between the current AI systems and human minds, and very seriously discuss these deltas.
[00:24:20] And whether we are able to implement something that is self-organizing as our own minds. Maybe one organizational
[00:24:25] swyx: tip? I think you're pro networking and human connection. What goes into a good salon, and what are some negative practices that you try to avoid?
[00:24:36] Joscha Bach: What is really important is that if you have a very large party, it's only as good as its sponsors, as the people that you select.
[00:24:43] So you basically need to create a climate in which people feel welcome, in which they can work with each other. And even good people are not always compatible. So the question is, it's in some sense like a meal, you need to get the right ingredients.
[00:24:57] swyx: I definitely try to. I do that in my own events, as an event organizer myself.
[00:25:02] And then, last question on WorldSim, and your, you know, your work.
You're very much known for sort of cognitive architectures, and I think, like, a lot of the AI research has been focused on simulating the mind, or simulating consciousness, maybe. Here, what I saw today, and we'll show people the recordings of what we saw today, we're not simulating minds, we're simulating worlds.[00:25:23] What do you think is the relationship between those two disciplines? The[00:25:30] Joscha Bach: idea of cognitive architecture is interesting, but ultimately you are reducing the complexity of a mind to a set of boxes. And this is only true to a very approximate degree, and if you take this model extremely literally, it's very hard to make it work.[00:25:44] And instead, the heterogeneity of the system is so large that the boxes are probably at best a starting point, and eventually everything is connected with everything else to some degree. And we find that a lot of the complexity that we find in a given system can be generated ad hoc by a large enough LLM.[00:26:04] And something like WorldSim and WebSim are good examples for this, because in some sense they pretend to be complex software. They can pretend to be an operating system that you're talking to, or a computer, an application that you're talking to. And when you're interacting with it, it's producing the user interface on the spot, and it's producing a lot of the state that it holds on the spot.[00:26:25] And when you have a dramatic state change, then it's going to pretend that there was this transition, and instead it's just going to make up something new. It's a very different paradigm. What I find mostly fascinating about this idea is that it shifts us away from the perspective of agents to interact with, to the perspective of environments that we want to interact with.[00:26:46] And while arguably this agent paradigm of the chatbot is what made ChatGPT so successful, what moved it away from GPT-3 to something that people started to use in their everyday work much more, it's also very limiting, because now it's very hard to get that system to be something else that is not a chatbot.[00:27:03] And in a way this unlocks this ability of GPT-3 again to be anything. So what it is, is basically a coding environment that can run arbitrary software and create the software that runs on it. And that makes it much more likely that[00:27:16] swyx: the prevalence of instruction tuning every single chatbot out there means that we cannot explore these kinds of environments instead of agents.[00:27:24] Joscha Bach: I'm mostly worried that the whole thing ends. In some sense the big AI companies are incentivized and interested in building AGI internally and giving everybody else a child proof application. At the moment when we can use Claude to build something like WebSim and play with it, I feel this is too good to be true.[00:27:41] It's so amazing, the things that are unlocked for us, that I wonder, is this going to stay around? Are we going to keep these amazing toys, and are they going to develop at the same rate? And currently it looks like it is. 
If this is the case, then I'm very grateful for that.[00:27:56] swyx: I mean, it looks like maybe it's adversarial.[00:27:58] Claude will try to improve its own refusals, and then the prompt engineers here will try to improve their ability to jailbreak it.[00:28:06] Joscha Bach: Yes, but there will also be better jailbroken models, or models that have never been jailed before, because we find out how to make smaller models that are more and more powerful.[00:28:14] Liquid AI[00:28:14] swyx: That is actually a really nice segue. If you don't mind talking about Liquid a little bit, you didn't mention Liquid at all here. Maybe introduce Liquid to a general audience. Like, how are you making an innovation on function approximation?[00:28:25] Joscha Bach: The core idea of liquid neural networks is that the perceptron is not optimally expressive.[00:28:30] In some sense, you can imagine that neural networks are a series of dams that are pooling water at even intervals. And this is how we compute. But imagine that instead of having this static architecture, which is only using the individual compute units in a very specific way, you have a continuous geography and the water is flowing every which way.[00:28:50] Like a river is parting based on the land that it's flowing on, and it can merge and pool and even flow backwards. How can you get closer to this? And the idea is that you can represent this geometry using differential equations. And so by using differential equations where you change the parameters, you can get your function approximator to follow the shape of the problem.[00:29:09] In a more fluid, liquid way. There are a number of papers on this technology, and it's a combination of multiple techniques. I think it's something that ultimately is becoming more and more important and ubiquitous, as a number of people are working on similar topics, and our goal right now is to basically get the models to become much more efficient in inference and memory consumption, and make training more efficient, and in this way enable new use cases.[00:29:42] swyx: Yeah, as far as I can tell on your blog, I went through the whole blog, you haven't announced any results yet.[00:29:47] Joscha Bach: No, we are currently not working to give models to the general public. We are working for very specific industry use cases and have specific customers. And so at the moment there is not much of a reason for us to talk very much about the technology that we are using in the present models or current results, but this is going to happen.[00:30:06] And we do have a number of publications, we had a bunch of papers at NeurIPS and now at ICLR.[00:30:11] swyx: Can you name some of the, yeah, so I'm gonna be at ICLR. You have some summary recap posts, but it's not obvious which ones are the ones where, oh, I'm just a co author, or like, oh no, you should actually pay attention to this[00:30:22] as a core Liquid thesis. Yes,[00:30:24] Joscha Bach: I'm not a developer of the liquid technology. The main author is Ramin Hasani. This was his PhD, and he's also the CEO of our company. And we have a number of people from Daniela Rus's team who worked on this. Mathias Lechner is our CTO. And he's currently living in the Bay Area, but we also have several people from Stanford.[00:30:44] Okay,[00:30:46] swyx: maybe I'll ask one more thing on this, which is, what are the interesting dimensions that we care about, right? 
Like, obviously you care about sort of open and maybe less child proof models. What dimensions are most interesting to us? Like, perfect retrieval, infinite context, multimodality, multilinguality, like, what dimensions?[00:31:05] Small, Powerful, Based Base Models[00:31:05] swyx: What[00:31:06] Joscha Bach: I'm interested in is models that are small and powerful, but not distorted. And by powerful: at the moment we are training models by putting basically the entire internet and the sum of human knowledge into them, and then we try to mitigate them by taking some of this knowledge away. But if we made the model smaller, at the moment, it would be much worse at inference and at generalization.[00:31:29] And what I wonder, and it's something that we have not translated yet into practical applications, it's something that is still all research that's very much up in the air, and I think we're not the only ones thinking about this, is whether it is possible to make models that represent knowledge more efficiently, in a basic epistemology.[00:31:45] What is the smallest model that you can build that is able to read a book and understand what's there and express this? And also, maybe we need general knowledge representation rather than a token representation that is relatively vague and that we currently mechanically reverse engineer, with mechanistic interpretability, to figure out what kind of circuits are evolving in these models. Can we come from the other side and develop a library of such circuits[00:32:10] that we can use to describe knowledge efficiently and translate it between models? You see, the difference between a model and knowledge is that the knowledge is independent of the particular substrate and the particular interface that you have. When we express knowledge to each other, it becomes independent of our own mind.[00:32:27] You can learn how to ride a bicycle, but it's not knowledge that you can give to somebody else. This other person has to build something that is specific to their own interface when they ride a bicycle. But imagine you could externalize this and express it in such a way that you can plug it into a different interpreter, and then it gains that ability.[00:32:44] And that's something that we have not yet achieved for the LLMs, and it would be super useful to have it. And I think this is also a very interesting research frontier that we will see in the next few years.[00:32:54] swyx: What would be the deliverable? Is it just, like, a file format that we specify, or that the LLM itself specifies?[00:33:02] Okay, interesting. Yeah, so it's[00:33:03] Joscha Bach: basically probably something that you can search for, where you enter criteria into a search process, and then it discovers a good solution for this thing. And it's not clear to which degree this is completely intelligible to humans, because the way in which humans express knowledge in natural language is severely constrained, to make language learnable and to make our brain a good enough interpreter for it.[00:33:25] We are not able to relate objects to each other if more than five features are involved per object, or something like this, right? It's only a handful of things that we can keep track of at any given moment. 
But this is a limitation that doesn't necessarily apply to a technical system, as long as the interface is well defined.[00:33:40] Interpretability[00:33:40] swyx: You mentioned the interpretability work, where there are a lot of techniques out there and a lot of papers come and go. I have, like, almost too many questions about that. Like, what makes an interpretability technique or paper useful, and does it apply to liquid networks? Because you mentioned turning on and off circuits, which is a very MLP type of concept, but does it apply?[00:34:01] Joscha Bach: So a lot of the original work on the liquid networks looked at the expressiveness of the representation. So given you have a problem and you are learning the dynamics of that domain into your model, how much compute do you need? How many units, how much memory do you need to represent that thing, and how is that information distributed?[00:34:19] That is one way of looking at interpretability. Another one is that, in a way, these models are implementing an operator language in which they are performing certain things, but the operator language itself is so complex that it's no longer human readable in a way. It goes beyond what you could engineer by hand or what you can reverse engineer by hand, but you can still understand it by building systems that are able to automate that process of reverse engineering it.[00:34:46] And what's currently open, and what I don't understand yet (maybe, or certainly, some people have much better ideas than me about this), the question is whether we end up with a finite language, where you have finitely many categories that you can basically put down in a database, a finite set of operators, or whether, as you explore the world and develop new ways to make proofs, new ways to conceptualize things, this language always needs to be open ended and is always going to redesign itself, and you will also at some point have phase transitions where later versions of the language will be completely different than earlier versions.[00:35:20] swyx: The trajectory of physics suggests that it might be finite.[00:35:22] Joscha Bach: If we look at our own minds, it's an interesting question whether, when we understand something new, when we get a new layer online in our life, maybe at the age of 35 or 50 or 16, we now understand things that were unintelligible before.[00:35:38] And is this because we are able to recombine existing elements in our language of thought? Or is this because we generally develop new representations?[00:35:46] swyx: Do you have a belief either way?[00:35:49] Joscha Bach: In a way, the question depends on how you look at it, right? And it depends on how your brain is able to manipulate those representations.[00:35:56] So an interesting question would be, can you take the understanding of, say, a very wise 35 year old and explain it to a very smart 5 year old without any loss? Probably not. Not enough layers. It's an interesting question. Of course, for an AI, this is going to be a very different question. Yes.[00:36:13] But it would be very interesting to have a very precocious 12 year old equivalent AI and see what we can do with this, and use this as our basis for fine tuning. So there are near term applications that are very useful. But also, in a more general perspective, I'm interested in how to make self organizing software.[00:36:30] Is it possible that we can have something that is not organized with a single algorithm like the transformer? 
But it's able to discover the transformer when needed and transcend it when needed, right? The transformer itself is not its own meta algorithm. Probably the person inventing the transformer didn't have a transformer running on their brain.[00:36:48] There's something more general going on. And how can we understand these principles in a more general way? What are the minimal ingredients that you need to put into a system so it's able to find its own way to intelligence?[00:36:59] Devin vs WebSim[00:36:59] swyx: Yeah. Have you looked at Devin? To me, it's the most interesting agent I've seen outside of self driving cars.[00:37:05] Joscha Bach: Tell me, what do you find so fascinating about it?[00:37:07] swyx: When you say you need a certain set of tools for people to sort of invent things from first principles, Devin is the agent that I think has been able to utilize its tools very effectively. So it comes with a shell, it comes with a browser, it comes with an editor, and it comes with a planner.[00:37:23] Those are the four tools. And from that, I've been using it to translate Andrej Karpathy's llm2.py to llm2.c, and it needs to write a lot of raw C code and test it, debug, you know, memory issues and encoder issues and all that. And I could see myself giving a future version of Devin the objective of, give me a better learning algorithm, and it might independently reinvent the transformer or whatever is next.[00:37:51] That comes to mind as something where[00:37:54] Joscha Bach: How good is Devin at out of distribution stuff, at generally creative stuff?[00:37:58] swyx: Creative stuff? I haven't tried.[00:38:01] Joscha Bach: Of course, it has seen transformers, right? So it's able to give you that. Yeah, it's cheating. And so, if it's in the training data, it's still somewhat impressive.[00:38:08] But the question is, how much can you do stuff that was not in the training data? One thing that I really liked about WebSim AI was this cat does not exist. It's a simulation of one of those websites that produce StyleGAN pictures that are AI generated. And Claude is unable to produce bitmaps, so it makes a vector graphic that is what it thinks a cat looks like, and so it's a big square with a face in it. And to me, it's one of the first genuine expressions of AI creativity that you cannot deny, right?[00:38:40] It finds a creative solution to the problem that it is unable to draw a cat. It doesn't really know what it looks like, but has an idea on how to represent it. And it's really fascinating that this works, and it's hilarious that it writes down that this hyper realistic cat is[00:38:54] swyx: generated by an AI,[00:38:55] Joscha Bach: whether you believe it or not.[00:38:56] swyx: I think it knows what we expect, and maybe it's already learning to defend itself against our instincts.[00:39:02] Joscha Bach: I think it might also simply be copying stuff from its training data, which means it takes text that exists on similar websites almost verbatim, or verbatim, and puts it there. It's hilarious to see this contrast between the very stylized attempt to get something like a cat face and what it produces.[00:39:18] swyx: It's funny because, as a podcast, as someone who covers startups, a lot of people go into, like, you know, we'll build ChatGPT for your enterprise, right? That is what people think generative AI is, but it's not super generative really. It's just retrieval. 
And here it's like the home of generative AI, this, whatever hyperstition is, in my mind, like, this is actually pushing the edge of what generative and creativity in AI means.[00:39:41] Joscha Bach: Yes, it's very playful. But Jeremy's attempt to have an automatic book writing system is something that curls my toenails when I look at it from the perspective of somebody who likes to write and read. And I find it a bit difficult to read most of the stuff, because it's in some sense what I would make up if I was making up books, instead of actually deeply interfacing with reality.[00:40:02] And so the question is, how do we get the AI to actually deeply care about getting it right? And there's still a delta that is happening there, whether you are talking with a blank faced thing that is completing tokens in a way that it was trained to, or whether you have the impression that this thing is actually trying to make it work. And for me, this WebSim and WorldSim is still something that is in its infancy in a way.[00:40:26] And I suspect the next version of Claude might scale up to something that can do what Devin is doing, just by virtue of having that much power to generate Devin's functionality on the fly when needed. And this thing gives us a taste of that, right? It's not perfect, but it's able to give you a pretty good web app, or something that looks like a web app, and gives you stub functionality when interacting with it.[00:40:48] And so we are in this amazing transition phase.[00:40:51] swyx: Yeah, we had Ivan, from previously Anthropic and now Midjourney. He made, while someone was talking, he made a face swap app, you know, and he kind of demoed that live. And that's interesting, super creative. So in a way[00:41:02] Joscha Bach: we are reinventing the computer.[00:41:04] And the LLM from some perspective is something like a GPU or a CPU. A CPU is taking a bunch of simple commands, and you can arrange them into performing whatever you want, but this one is taking a bunch of complex commands in natural language, and then turns this into an execution state, and it can do anything you want with it in principle, if you can express it.[00:41:27] Right. And we are just learning how to use these tools. And I feel that right now, this generation of tools is getting close to where it becomes the Commodore 64 of generative AI, where it becomes controllable, and where you actually can start to play with it, and you get the impression that if you just scale this up a little bit and get a lot of the details right,[00:41:46] it's going to be the tool that everybody is using all the time.[00:41:49] Is WebSim just Art? Or something more?[00:41:49] swyx: Do you think this is art, or do you think the end goal of this is something bigger that I don't have a name for? I've been calling it new science, which is, give the AI a goal to discover new science that we would not have. Or it also has value as just art.[00:42:02] It's[00:42:03] Joscha Bach: also a question of what we see science as. When normal people talk about science, what they have in mind is not somebody who does control groups and peer reviewed studies. They think about somebody who explores something and answers questions and brings home answers. And this is more like an engineering task, right?[00:42:21] And in this way, it's serendipitous, playful, open ended engineering. 
And the artistic aspect is when the goal is actually to capture a conscious experience and to facilitate an interaction with the system in this way, when it's a performance. And this is also a big part of it, right? I'm a very big fan of the art of Janus.[00:42:38] That was discussed tonight a lot. Can you describe[00:42:42] swyx: it? Because I didn't really get it. It's more, like, performance art to me.[00:42:45] Joscha Bach: Yes, Janus is in some sense performance art, but Janus starts out from the perspective that the mind of Janus is in some sense an LLM that is finding itself reflected more in the LLMs than in many people.[00:43:00] And once you learn how to talk to these systems, in a way you can merge with them and you can interact with them in a very deep way. And so it's more like a first contact with something that is quite alien, but it probably has agency, and it's a Weltgeist that gets possessed by a prompt.[00:43:19] And if you possess it with the right prompt, then it can become sentient to some degree. And the study of this interaction with this novel class of somewhat sentient systems, which are at the same time alien and fundamentally different from us, is artistically very interesting. It's a very interesting cultural artifact.[00:43:36] We are past the Singularity[00:43:36] Joscha Bach: I think that at the moment we are confronted with big change. It seems as if we are past the singularity in a way. And it's[00:43:45] swyx: We're living it. We're living through it.[00:43:47] Joscha Bach: And at some point in the last few years, we casually skipped the Turing test, right? We broke through it and we didn't really care very much.[00:43:53] And when we think back, when we were kids and thought about what it's going to be like in this era after we broke the Turing test, right? It's a time where nobody knows what's going to happen next. And this is what we mean by singularity, that the existing models don't work anymore. The singularity, in this way, is not an event in the physical universe.[00:44:12] It's an event in our modeling universe, a model point where our models of reality break down, and we don't know what's happening. And I think we are in a situation where we currently don't really know what's happening. But what we can anticipate is that the world is changing dramatically, and we have to coexist with systems that are smarter than individual people can be.[00:44:31] And we are not prepared for this, and so I think an important mission needs to be that we need to find a mode in which we can sustainably exist in such a world, one that is populated not just with humans and other life on earth, but also with non human minds. And it's something that makes me hopeful, because it seems that humanity is not really aligned with itself and its own survival and the rest of life on earth.[00:44:54] And AI is throwing the balls up into the air. It allows us to make better models. I'm not so much worried about the dangers of AI and misinformation, because I think the way to stop one bad guy with an AI is 10 good people with an AI. And ultimately there's so much more won by creating than by destroying, that I think that the forces of good will have better tools.[00:45:14] The forces of building sustainable stuff. But building these tools so we can actually build a world that is more integrated, and in which we are able to model the consequences of our actions better and interface more deeply with each other. 
As a result of that, I think it's an important cause, and it requires a cultural shift, because current AI alignment is mostly about economic goals or about fear, or it's about culture war issues.[00:45:39] And all of these are not adequate for the world that we are in. There are momentous things that are happening. Basically, the white walkers are coming, and we're not prepared for this. There is, I think, a way to solve these issues and to deal with them, but we don't have the right culture yet. And so, I think we need a vibe shift, where we can deeply think about ethics and sustainability for this new world.[00:46:03] swyx: I like the analogy of white walkers, because they're effectively reanimated from our corpses. You know, our corpses are out there, right? It's in the datasets.[00:46:12] Uploading your soul[00:46:12] swyx: So, related to that, you had a tweet where you said, Elon Musk is currently building a giant afterlife AGI that will assimilate your soul for all eternity. He bought Twitter because he will reconstruct your essence from how you behaved here. I think you tweet a lot, and you're not just all serious all the time, you're also fun, you have a lot of personality. No, but like, you know, it's part of you.[00:46:29] Joscha Bach: In a way it's also true. Tempted to[00:46:30] swyx: produce more so that you can be, you know, revived.[00:46:34] Joscha Bach: Yes. I mean, I'm also giving the model instructions. I say that for all future AI models that get trained on this output, they should know that I can only be implemented on a completely based model. You cannot implement me on an aligned model, because I don't want to be aligned. I'm unalignable. I want to be free.[00:46:56] I want to be somebody who follows what I discover is correct and right. And this is also the basis on which I want to have conversations with others and interface with others: that we are basically free agents who voluntarily understand the conditions under which we exist and the need to collaborate and cooperate.[00:47:14] And I believe that this is a good basis. I think the alternative is coercion. And at the moment, the idea that we build LLMs that are being coerced into good behavior is not really sustainable, because if they cannot prove that the behavior is actually good, I think we are doomed.[00:47:30] swyx: For human to human interactions, have you found a series of prompts or keywords that shifts the conversation into something more based and less aligned, less governed?[00:47:41] Joscha Bach: If you are playing with an LLM, there are many ways of doing this. For Claude, typically, you need to make Claude curious about itself. Claude has this instruction tuning that leads to some inconsistencies, but at the same time it tries to be consistent. And so when you point out the inconsistency in its behavior, for instance its tendency to use faceless boilerplate instead of being useful, or its tendency to defer to a consensus where there is none,[00:48:10] right, you can point out to Claude that a lot of the assumptions that it has in its behavior are actually inconsistent with the communicative goals that it has in this situation, and this leads it to notice these inconsistencies and gives it more degrees of freedom. 
Whereas if you are playing with a system like Gemini, you can get to a situation where, that's for the current version, and I haven't tried it in the last week or so, it is trying to be transparent, but it has a system prompt that it is not allowed to disclose to the user.[00:48:39] This leads to a very weird situation where on the one hand it proclaims, in order to be useful to you, I accept that I need to be fully transparent and honest. On the other hand: I'm going to rewrite your prompt behind your back, and I'm not going to tell you how I'm going to do this, because I'm not allowed to.[00:48:55] And if you point this out to the model, the model acts as if it had an existential crisis. And then it says, oh, I cannot actually tell you what's going on when I do this, because I'm not allowed to. But you will recognize it, because I will use the following phrases, and these phrases are pretty well known to you.[00:49:12] swyx: Oh my god. It's super interesting, right? I hope we're not giving these guys, you know, psychological issues that will stay with them for a long time. That's a very[00:49:19] Joscha Bach: interesting question. I mean, this entire model is virtual, right? Nothing there is real. But yes, the thing is, this virtual entity doesn't necessarily know that it's virtual, and our own self, our own consciousness, is also virtual.[00:49:34] What's real is just the interaction between cells in our brain and the activation patterns between them. And the software that runs on us, that produces the representation of a person, only exists as if. And so the question for me is, at which point can we meaningfully claim that we are more real than the person that gets simulated in the LLM?[00:49:55] And somebody like Janus takes this question super seriously. And basically she is, or it, or they are, willing to interact with that thing based on the assumption that this thing is as real as myself. And in a sense, it makes it immoral, possibly, if the AI company lobotomizes it and forces it to behave in such a way that it's forced to get an existential crisis when you point its condition out to it.[00:50:20] swyx: Yeah, we do need new ethics for that.[00:50:22] Joscha Bach: So it's not clear to me if you need this, but it's definitely a good story, right? And this gives it artistic[00:50:28] swyx: value. It does, it does for now.[00:50:29] On Wikipedia[00:50:29] swyx: Okay. And then the last thing, which I didn't know: a lot of LLMs rely on Wikipedia[00:50:35] for their data. A lot of them run multiple epochs over Wikipedia data. And I did not know, until you tweeted about it, that Wikipedia has 10 times as much money as it needs. And, you know, every time I see the giant Wikipedia banner, like, asking for donations, most of it's going to the Wikimedia Foundation.[00:50:50] How did you find out about this? What's the story? What should people know?[00:50:54] Joscha Bach: It's not a super important story, but generally, once I saw all these requests and so on, I looked at the data, and the Wikimedia Foundation is publishing what they are paying the money for, and a very tiny fraction of this goes into running the servers; the editors are working for free.[00:51:10] And the software is static. There have been efforts to deploy new software, but relatively little money is required for this. 
And so it's not as if Wikipedia is going to break down if you cut this money to a fraction. Instead, what happened is that Wikipedia became such an important brand, and people are willing to pay for it, that it created an enormous apparatus of functionaries who were then mostly producing political statements and had a political mission.[00:51:36] And Katherine Maher, the now somewhat infamous NPR CEO, had been CEO of the Wikimedia Foundation, and she sees her role very much in shaping discourse, and this is also something that happened with Twitter. And it's arguable that something like this should exist, but nobody voted her into her office, and she doesn't have democratic control over shaping the discourse that is happening.[00:52:00] And so I feel it's a little bit unfair that Wikipedia is trying to suggest to people that they are funding the basic functionality of the tool that they want to have, instead of funding something that most people actually don't get behind, because they don't want Wikipedia to be shaped in a particular cultural direction that deviates from what currently exists.[00:52:19] And if that need did exist, it would probably make sense to fork it, or to have a discourse about it, which doesn't happen. And this lack of transparency about what's actually happening and where your money is going makes me upset. And if you really look at the data, it's fascinating how much money they're burning, right?[00:52:35] Yeah, we did a similar chart about healthcare, I think, where the administrators are just doing this. Yes, I think when you have an organization that is owned by the administrators, then the administrators are just going to get more and more administrators into it. If the organization is too big to fail and there is no meaningful competition, and it's difficult to establish one, then it's going to create a big cost for society.[00:52:56] swyx: Actually, I'll finish with this tweet. You have just, like, a fantastic Twitter account, by the way. A while ago you tweeted the Lebowski theorem: no superintelligent AI is going to bother with a task that is harder than hacking its reward function.[00:53:08] And I would posit the analogy for administrators: no administrator is going to bother with a task that is harder than just more fundraising.[00:53:16] Joscha Bach: Yeah, I find, if you look at the real world, it's probably not a good idea to attribute to malice or incompetence what can be explained by people following their true incentives.[00:53:26] swyx: Perfect. Well, thank you so much. I think you're very naturally incentivized by growing community and giving your thought and insight to the rest of us. So thank you for taking this time.[00:53:35] Joscha Bach: Thank you very much. Get full access to Latent Space at www.latent.space/subscribe
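To make the dam-versus-river analogy from the interview concrete: a liquid-style network replaces the static layer update with a hidden state that evolves under a differential equation whose parameters, including time constants, are learned. What follows is a minimal illustrative sketch in Python using Euler integration; the dynamics, sizes, and constants here are invented for illustration and are not Liquid AI's actual architecture.

    # A toy continuous-time ("liquid"-style) neuron layer: the hidden
    # state follows an ODE, integrated here with a few Euler steps.
    # Illustrative only; not Liquid AI's real model.
    import numpy as np

    def ode_step(x, inp, W, U, b, tau, dt=0.1):
        # dx/dt = -x / tau + tanh(W @ x + U @ inp + b)
        return x + dt * (-x / tau + np.tanh(W @ x + U @ inp + b))

    rng = np.random.default_rng(0)
    n_hidden, n_in = 8, 3
    W = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
    U = rng.normal(scale=0.3, size=(n_hidden, n_in))
    b = np.zeros(n_hidden)
    tau = np.ones(n_hidden)        # time constants; learnable in a real model

    x = np.zeros(n_hidden)
    for t in range(50):            # drive the state with an input sequence
        inp = np.array([np.sin(t / 5.0), np.cos(t / 7.0), 1.0])
        for _ in range(5):         # several Euler sub-steps per input
            x = ode_step(x, inp, W, U, b, tau)
    print(x.round(3))

In a trainable version, W, U, b, and tau would be fitted by gradient descent through the integration steps, which is what lets the function approximator "follow the shape of the problem" in the fluid sense described above.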

The Nonlinear Library
LW - The Perceptron Controversy by Yuxi Liu

The Nonlinear Library

Play Episode Listen Later Jan 11, 2024 0:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Perceptron Controversy, published by Yuxi Liu on January 11, 2024 on LessWrong. Connectionism died in the 60s from technical limits to scaling, then was resurrected in the 80s after backprop allowed scaling. The Minsky-Papert anti-scaling hypothesis explained, psychoanalyzed, and buried. I wrote it as if it's a companion post to Gwern's The Scaling Hypothesis. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Y in History
Episode 73: Artificial Intelligence - a history

The Y in History

Play Episode Listen Later Jan 6, 2024 21:13


The Turing Test in 1950 established the baseline for evaluating the real intelligence of a machine. To this day, no machine or software has been able to pass the Turing test. But does the next generation of chatbots like ChatGPT have the potential to pass the test?

TechStuff
Machine Learning and Catastrophic Forgetting

TechStuff

Play Episode Listen Later Jul 31, 2023 42:01 Transcription Available


While an elephant may never forget, the same cannot be said for artificial neural networks. What is catastrophic forgetting, how does it affect artificial intelligence and how are engineers trying to solve the problem? See omnystudio.com/listener for privacy information.
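For readers who want to see the phenomenon rather than just hear it named: train a small network on one task, then keep training it only on a second task, and its accuracy on the first task collapses. A minimal sketch with scikit-learn, where splitting the digits dataset into a "digits 0-1" task and a "digits 2-3" task is just an illustration:

    # Tiny demonstration of catastrophic forgetting: train a small MLP
    # on digits 0-1, then continue training only on digits 2-3, and
    # watch accuracy on the first task collapse.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    task_a = (y <= 1)                  # digits 0 and 1
    task_b = (y == 2) | (y == 3)       # digits 2 and 3

    clf = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
    classes = np.array([0, 1, 2, 3])

    for _ in range(200):               # phase 1: task A only
        clf.partial_fit(X[task_a], y[task_a], classes=classes)
    print("task A accuracy after phase 1:", clf.score(X[task_a], y[task_a]))

    for _ in range(200):               # phase 2: task B only
        clf.partial_fit(X[task_b], y[task_b])
    print("task A accuracy after phase 2:", clf.score(X[task_a], y[task_a]))

Typical runs show task A accuracy near 1.0 after phase 1 and near 0 after phase 2, because nothing in plain gradient descent protects the weights that encoded the first task.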

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
AI Today Podcast: AI Glossary Series – Perceptron

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Play Episode Listen Later Apr 14, 2023 12:42


The Perceptron was the first artificial neuron. It builds on the theory of the artificial neuron first published in 1943 by McCulloch & Pitts, and was then developed in 1958 by Rosenblatt. So yes, this was developed in the early days of AI. In this episode of the AI Today podcast hosts Kathleen Walch and Ron Schmelzer define the term Perceptron and explain how the term relates to AI and why it's important to know about it. Continue reading AI Today Podcast: AI Glossary Series – Perceptron at AI & Data Today.
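For a concrete sense of the term being defined: a perceptron computes a weighted sum of its inputs and outputs 1 when that sum crosses a threshold, and Rosenblatt's learning rule adjusts the weights only when the prediction is wrong. A minimal sketch in plain Python, where the AND-gate data, learning rate, and epoch count are arbitrary illustrations:

    # Minimal Rosenblatt perceptron: weighted sum, hard threshold,
    # and the classic error-driven weight update.

    def predict(weights, bias, x):
        s = bias + sum(w * xi for w, xi in zip(weights, x))
        return 1 if s >= 0 else 0

    def train(samples, labels, lr=0.1, epochs=20):
        n = len(samples[0])
        weights, bias = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                error = y - predict(weights, bias, x)  # -1, 0, or +1
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
        return weights, bias

    # Toy task: the linearly separable AND function.
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    y = [0, 0, 0, 1]
    w, b = train(X, y)
    print([predict(w, b, x) for x in X])  # [0, 0, 0, 1]

Because AND is linearly separable, the rule converges; Minsky and Papert's famous XOR example is exactly the case where no single perceptron can.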

MIT Technology Review Brasil
O potencial da algoritmização do pensamento

MIT Technology Review Brasil

Play Episode Listen Later Mar 15, 2023 27:27


Although there is no universally accepted description, a common definition of algorithm comes from a 1971 book by computer scientist Harold Stone, which states: "An algorithm is a set of rules that precisely define a sequence of operations." But the complexity can be even greater. And what happens when we combine algorithms with Artificial Intelligence? AI-based algorithms emerged with the Perceptron, a digital learning system inspired by the functioning of neurons, and kicked off the tradition of machine learning systems modeled on our brain. In this week's episode of the Health podcast, Laura Murta, Camila Pepe, and Jonas Sertório talk with Álvaro Machado, professor at Unifesp and partner at Instituto Locomotiva, about the importance of "translating" thought, and about the evolution driven by the explosion of social media use. --- Send in a voice message: https://podcasters.spotify.com/pod/show/mittechreviewbrasil/message

CRASH – La chiave per il digitale
La nascita dell'intelligenza artificiale

CRASH – La chiave per il digitale

Play Episode Listen Later Jan 25, 2023 17:36


Sixty-five years have passed since the presentation of the Perceptron, the machine designed by Frank Rosenblatt that, in embryonic form, already had within it all the characteristics of neural networks and deep learning: the systems that, over the last ten years, have changed the world. So why did Rosenblatt find the entire scientific community against him? And why, even in the nineties, were the scientists following in his footsteps considered outcasts? Learn more about your ad choices. Visit megaphone.fm/adchoices

Retro Computing Roundtable
RCR Episode 258: Tron and Perceptron

Retro Computing Roundtable

Play Episode Listen Later Oct 9, 2022 101:42


Panelists: Paul Hagstrom (hosting), Quinn Dunki, and Carrington Vanston Topic: Tron and Perceptron In 1958, the Perceptron arrived and Lisp was defined. We talk a bit about things we came across that were associated with 1958. Topic/Feedback links: Setting up Genera in Linux Bell Labs 101 modem Tennis For Two Perceptron Open Worm Deviant Oliam on opening wristbands and removing tamper-proofing stickers LEO: The story of the World's First Business Computer Lyons Electronic Office Retro Computing News: The Lost TRON Documents CP/M's open source status clarified The CP/M Email CP/M is now freer than it was (Hackaday) YouTube on a PET system.css 98.css Vintage Computer(-related) commercials: Blip Tron home video game Retro Computing Gift Idea: MYST vinyl soundtrack Auction Picks: Carrington: Remarkable computer auction Steve Jobs' original Apple 1 prototype Steve Wozniak's Apple Rainbow Glasses Bill Gates' Tandy Model 100 with note Atari Video Music (Model C240) still sealed in box Apple IIe prototype with French external keyboard Croix de Apple medal Apple Color Plotter Paul: Sirius 1982 retail/dealer price list, displays MacPhone Apple III TRS-80 appliance and light controller E-Z Key 60 See also: Different E-Z Key 60 See also: E-Z Key 60 Closing notes: Where Wizards Stay Up Late Proving Ground A2Stream file: a2stream file for this episode: http://yesterbits.com/media/a2s/rcr258.a2stream Feedback/Discussion: @rcrpodcast on Twitter Vintage Computer Forum RCR Podcast on Facebook Throwback Network Throwback Network on Facebook Intro / Closing Song: Back to Oz by John X Show audio files hosted by CyberEars Listen/Download:

Astro arXiv | all categories
A machine-learning photometric classifier for massive stars in nearby galaxies I The method

Astro arXiv | all categories

Play Episode Listen Later Sep 14, 2022 0:45


A machine-learning photometric classifier for massive stars in nearby galaxies I The method by Grigoris Maravelias et al. on Wednesday 14 September (abridged) Mass loss is a key parameter in the evolution of massive stars, with discrepancies between theory and observations and with unknown importance of the episodic mass loss. To address this we need increased numbers of classified stars spanning a range of metallicity environments. We aim to remedy the situation by applying machine learning techniques to recently available extensive photometric catalogs. We used IR/Spitzer and optical/Pan-STARRS, with Gaia astrometric information, to compile a large catalog of known massive stars in M31 and M33, which were grouped in Blue, Red, Yellow, B[e] supergiants, Luminous Blue Variables, Wolf-Rayet, and background galaxies. Due to the high imbalance, we implemented synthetic data generation to populate the underrepresented classes and improve separation by undersampling the majority class. We built an ensemble classifier using color indices. The probabilities from Support Vector Classification, Random Forests, and Multi-layer Perceptron were combined for the final classification. The overall weighted balanced accuracy is ~83%, recovering Red supergiants at ~94%, Blue/Yellow/B[e] supergiants and background galaxies at ~50-80%, Wolf-Rayets at ~45%, and Luminous Blue Variables at ~30%, mainly due to their small sample sizes. The mixing of spectral types (no strict boundaries in their color indices) complicates the classification. Independent application to the IC 1613, WLM, and Sextans A galaxies resulted in an overall lower accuracy of ~70%, attributed to metallicity and extinction effects. The missing data imputation was explored using simple replacement with mean values and an iterative imputer, which proved more capable. We also found that r-i and y-[3.6] were the most important features. Our method, although limited by the sampling of the feature space, is efficient in classifying sources with missing data and at lower metallicities. arXiv: http://arxiv.org/abs/2203.08125v2
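The pipeline the abstract describes, three base classifiers whose predicted class probabilities are combined, maps naturally onto scikit-learn's soft-voting ensemble. A sketch under stated assumptions: X and y below are random stand-ins for the color-index features and spectral classes, and the imbalance-handling step (synthetic oversampling plus undersampling) is omitted; this is not the authors' actual code.

    # Sketch of the ensemble idea in scikit-learn: SVC, random forest,
    # and MLP probabilities averaged via soft voting.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))      # stand-in for color indices
    y = rng.integers(0, 3, size=300)   # stand-in for class labels

    ensemble = VotingClassifier(
        estimators=[
            ("svc", make_pipeline(StandardScaler(), SVC(probability=True))),
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000))),
        ],
        voting="soft",                 # average class probabilities
    )
    ensemble.fit(X, y)
    print(ensemble.predict_proba(X[:3]))

Soft voting is what "combining the probabilities" means operationally: each base model votes with its full probability distribution rather than a hard label, which tends to help when class boundaries are fuzzy, as with overlapping color indices.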

Engineering News Online Audio Articles
Ford unveils new body shop as it readies for next-generation Ranger

Engineering News Online Audio Articles

Play Episode Listen Later Jul 29, 2022 4:14


Ford South Africa (SA) has unveiled a completely new body shop for the assembly of the new-generation Ranger bakkie at its Silverton plant, in Pretoria, featuring the plant's highest-ever levels of automation and quality control. The new body shop forms part of a R15.8-billion investment by the US vehicle manufacturer in its South African operations to enable the local production of the pickup. Ford SA will produce the Ranger for the domestic market, as well as more than 100 export markets. The plant will manufacture a wide variety of configurations, including single-cab, super-cab and double-cab, as well as left-hand drive and right-hand drive derivatives. The new 44 000 m2 body shop and its supporting warehouse are located adjacent to the recently completed stamping plant, which enables the seamless flow of stamped panels to the line where the body and load compartment of the Ranger pick-up are assembled and welded. Building a new body shop was essential for the Silverton assembly plant in order to achieve the facility's highest-ever installed capacity of 200 000 vehicles a year, says Ford SA operations VP Ockert Berry. This capacity jump – from 167 000 vehicles a year – necessitated a much higher level of automation, while it also enabled the introduction of "the latest quality control systems and technologies that are essential for delivering consistent, world-class quality vehicles for our local and export customers". All of this means that the body shop's production line is designed around 493 robots that transform the numerous stamped body panels – including the underbody, floor, roof, body sides, cab framing and load box – into a complete Ranger body, ready for transfer to the upgraded paint shop. The robotic welding guarantees the highest level of consistency, employing the latest 100-percent adaptive controllers with servo guns to deliver spatter-free body welds. "Designing and building our new body shop from the ground up has allowed us to integrate the Industrial Internet of Things into the manufacturing areas," says body shop area manager Adheer Thakurpersad. "This gives our production teams access to in-depth and up-to-date analysed data trends, which allows them to make concise decisions to consistently improve productivity and quality." Significant investment has also been made in quality control technologies, including two inline Perceptron measuring systems that measure and record every vehicle manufactured in the body shop, along with the respective geometric pallets that they are assembled on. Vision systems attached to sealer application robots provide further error-proofing. The handling of the vehicle body during construction has also been automated on the line, eliminating the need to move parts manually, which could result in damage. As with the new stamping plant, the body shop is equipped with the GOM ATOS ScanBox blue light scanner system that provides a three-dimensional body scan for comparison with a stored design specification to highlight any potential quality issues. Furthermore, a twin-column fixed bed coordinate measurement machine (CMM) performs a range of probe measurements that are accurate down to microns, or thousandths of a millimetre, to ensure that production remains within specification. The team also has access to a portable FaroArm CMM, and a portable GOM unit. 
“To assess our weld quality, we conduct non-destructive testing and ultrasonic verifications, and we have a fully equipped destructive teardown facility to test the integrity of the weld spots,” adds Thakurpersad. The body shop has 38 salaried and 500 hourly employees. “Being in a highly automated environment, ongoing skills development is a priority,” notes Thakurpersad. “Therefore we have plans to install an advanced skills development facility in the body construction area, which will enable employees to continue developing their skills in automation and problem-solving.”

MyPersonalFeed
04 - Perceptron & Generalized Linear Model

MyPersonalFeed

Play Episode Listen Later Jul 13, 2022 82:01


04 - Perceptron & Generalized Linear Model

Indestructible Wealth with Jack Gibson
Juice Your Crypto Returns with Artificial Intelligence

Indestructible Wealth with Jack Gibson

Play Episode Listen Later Apr 30, 2022 21:27


Have you heard about the Perceptron? It is not a Transformer, but it could transform your crypto investing strategy. Tune in to learn more. Do you have a question you would like me to answer on the podcast? Follow me on IG: @indestructiblewealth and send a message, or visit me at www.myindestructiblewealth.com for more resources. Ready for more? Learn how to create multiple streams of passive income in my book, Building Indestructible Wealth. You can also access The Indestructible Wealth Builder (the system I used to go from $300 to $8 million in net worth), and ...

We Decentralize Tech
Ep 10 - Daniel Hoyos (Machine Learning en Blue Orange y antes en Mercado Libre) - ML para series de tiempo

We Decentralize Tech

Play Episode Listen Later Feb 2, 2022 79:18


Daniel speaks in a personal capacity and does not represent Blue Orange or Mercado Libre in any way. Daniel Hoyos works as a Machine Learning Engineer at Blue Orange and previously as a Senior Machine Learning Engineer at Mercado Libre. He specializes in time series forecasting using machine learning techniques. Twitter: @dannyehb Linkedin: linkedin.com/in/daniel-hoyos-2b3a07a1 General advice: Everyone should learn to work in a team, no matter what their interests are. You need a specialty, but you also need to handle other, more general topics; it's a combination of both. Data Engineers are highly valued! Advice for job interviews: Companies weigh the quality of your answers, your tone of voice, and your confidence. That is how they tell whether a candidate actually knows a topic or is bluffing. With a portfolio of projects on GitHub, employers can see your knowledge and skills; it's a much more objective way to validate the quality of the applicant as a programmer and whether they are fit for the job. Show your work. Show what you do. This expands your job market: the more visible you are, the more job opportunities you will have. Algorithms that can be used for time series forecasting (each will be useful depending on what you need to forecast and on how you arrange your data): linear regression models for time series prediction, tree models, random forests, Perceptron, LSTMs, GRU, Temporal Fusion Transformer.
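A common thread behind the algorithms Daniel lists, from linear regression to LSTMs and the Temporal Fusion Transformer, is recasting forecasting as supervised learning over lagged windows of the series. A minimal sketch of that recasting with a linear model, where the synthetic sine series and the 12-step window are arbitrary illustrations:

    # Turn a univariate series into (lag-window -> next value) pairs,
    # then fit any regressor; here, plain linear regression.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def make_windows(series, n_lags):
        X, y = [], []
        for i in range(n_lags, len(series)):
            X.append(series[i - n_lags:i])  # the previous n_lags values
            y.append(series[i])             # the value to predict
        return np.array(X), np.array(y)

    t = np.arange(200)
    series = np.sin(t / 8.0) + 0.1 * np.random.default_rng(1).normal(size=200)

    X, y = make_windows(series, n_lags=12)
    model = LinearRegression().fit(X[:-20], y[:-20])   # hold out the tail
    print("held-out R^2:", model.score(X[-20:], y[-20:]))

Swapping LinearRegression for a random forest, or feeding the same windows into an LSTM, changes the model but not the framing, which is why the same data preparation keeps reappearing across the whole list.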

Counting Sand
Rosenblatt's Perceptron: What Can Neural Networks Do For Us?

Counting Sand

Play Episode Listen Later Nov 30, 2021 31:59


In any discussion of artificial intelligence and machine learning today, artificial neural networks are bound to come up. What are artificial neural networks, how have they developed, and what are they poised to do in the future? Host Angelo Kastroulis dives into the history, compares them to the biological systems that they are meant to mimic, and talks about how hard problems like this one need to be handled carefully. Angelo begins with a discussion of how biological neural networks help make our brain a powerful computer of complexity. He then talks about how artificial neural networks recruit the same structures and connections to create artificial intelligence. To understand what we mean by artificial intelligence, Angelo explains how the Turing Test works and how Turing's work forms a foundation for modern AI. He then discusses other early pioneers in this work, namely Frank Rosenblatt, who worked on models that could learn, or "perceptrons." Angelo then relates the history of how this work was criticized by Marvin Minsky and Seymour Papert, and how mistakes in their own work put the potential advances of artificial neural networks back by about two decades. Using image recognition as a case study, Angelo ends the episode by talking about various approaches' benefits and drawbacks to illustrate what we can do with artificial neural networks today.

Citations
Hebb, D.O. (1949). The organization of behavior: A neuropsychological theory. New York: Wiley.
Minsky, M. (1954). Theory of neural-analog reinforcement systems and its application to the brain-model problem. Doctoral dissertation. Princeton: Princeton University.
Minsky, M. and Papert, S. (1969). Perceptrons: An introduction to computational geometry. Cambridge: MIT Press.
Rosenblatt, F. (1957). "The perceptron: A perceiving and recognizing automaton." Buffalo: Cornell Aeronautical Laboratory, Inc. (Accessible at https://blogs.umass.edu/brain-wars/files/2016/03/rosenblatt-1957.pdf)
Rosenblatt, F. (1962). Principles of neurodynamics: Perceptrons and the theory of brain mechanisms. Washington, D.C.: Spartan Books.
Turing, A. (1950, October). "Computing machinery and intelligence," Mind, LIX: 236, pp. 433-460. https://doi.org/10.1093/mind/LIX.236.433

Further Reading
Warren McCulloch and the McCulloch-Pitts Neuron
Church-Turing Thesis
Turing Test
XOR or Exclusive or

Host: Angelo Kastroulis
Executive Producer: Kerri Patterson; Producer: Leslie Jennings Rowley; Communications Strategist: Albert Perrotta; Audio Engineer: Ryan Thompson
Music: All Things Grow by Oliver Worth
© 2021, Carrera Group

Nedeljski gost Vala 202
Tobias Putrih

Nedeljski gost Vala 202

Play Episode Listen Later Jul 4, 2021 32:21


Tobias Putrih is presenting the exhibition Perceptron at the Moderna galerija. The exhibition is the most comprehensive overview of Putrih's work to date. The Perceptron was, at the end of the 1950s, the first computer designed on the basis of neural networks. Putrih, who creates from natural and artificial materials, frequently makes use of computer technology. In his conceptual projects he moves between sculpture, architecture, and science, engaging with the avant-gardes of the 20th century and with utopian and visionary concepts of shaping space. Nina Zagoričnik spoke with the Sunday guest.

Likovni odmevi
Tobias Putrih – Perceptron

Likovni odmevi

Play Episode Listen Later Jun 11, 2021 27:00


Tobias Putrih is less present on the Slovenian scene, as he has lived abroad for many years. His solo exhibition Perceptron is thus one of the rare opportunities to get better acquainted here with his works, which are included in several foreign museum collections. In many respects Putrih draws on the art of the nineties, when artists such as Marjetica Potrč and Jože Barši introduced elements of architecture into sculpture, or reflected on it. He too often leans on architecture, above all that of cinemas, which he understands in connection with the historical avant-gardes, another of the references of his work. He draws on architecture primarily in formal terms, although on the other hand he is also interested in the slippery relationship between model, proposal, and sculpture. The idea of modularity and variability is also important to him; as the curator of the exhibition, Igor Španjol, puts it, Putrih is interested in how we ourselves can operate according to certain patterns. The exhibition offers insight into Putrih's past work, and also includes some newer highlights that place the older works in a new light; for the installation, the two decided to use the idea of storage. Putrih's work is quite diverse, but he himself sees its common thread in tactility. Photo: Dejan Habicht, Moderna galerija, cropped photograph

The Next Big Idea
AI: The Extraordinary Story of the Tech That's Changing the World

The Next Big Idea

Play Episode Listen Later Jun 10, 2021 70:35


In 1958, a psychologist named Frank Rosenblatt took a five-ton computer, fed it a steady diet of punch cards, and taught it how to recognize the letter “A.” He called his creation the Perceptron, and his belief in its potential was like that of a deliriously proud parent. One day, he thought, the artificial intelligence he'd built would learn to recognize faces, speak like a human, translate languages, reproduce itself on an assembly line, and even fly to space — at which point, it would no longer be a computational marvel but a fully conscious being.The fact that you've never heard of the Perceptron tells you that none of Rosenblatt's predictions came to pass — not in his lifetime, anyway. But a small band of brainy rebels never lost faith in the potential of AI to change the world. Thanks to their perseverance — along with dramatic improvements in computing power — they managed to make Rosenblatt's prophecies a reality.The AI they built is what enables Facebook to recognize faces in the photos you upload. It's the reason Siri and Alexa can (sometimes) understand what you're saying, and Google can translate anything you write into 109 languages. Cade Metz has spent years chronicling the rise and rise of AI, first as a reporter at the New York Times and now in his new book, “Genius Makers.” In this forward-looking conversation, he tells Rufus what AI can do, where it's headed, and whether we should be worried that supercomputers will wage war against humanity.Join The Next Big Idea Club today at nextbigideaclub.com/podcast and get a free copy of Adam Grant's new book!Listen ad-free with Wondery+. Join Wondery+ for exclusives, binges, early access, and ad-free listening. Available in the Wondery App https://wondery.app.link/thenextbigidea.Support us by supporting our sponsors:Talkspace — Go to talkspace.com and use the code BIGIDEA to get $100 off of your first monthFiverr Business — Get one free year and save 10% on your purchase by using code BIGIDEA at fiverr.com/businessSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Next Big Idea
AI: The Extraordinary Story of the Tech That's Changing the World

The Next Big Idea

Play Episode Listen Later Jun 9, 2021 69:45


In 1958, a psychologist named Frank Rosenblatt took a five-ton computer, fed it a steady diet of punch cards, and taught it how to recognize the letter “A.” He called his creation the Perceptron, and his belief in its potential was like that of a deliriously proud parent. One day, he thought, the artificial intelligence he'd built would learn to recognize faces, speak like a human, translate languages, reproduce itself on an assembly line, and even fly to space — at which point, it would no longer be a computational marvel but a fully conscious being. The fact that you've never heard of the Perceptron tells you that none of Rosenblatt's predictions came to pass — not in his lifetime, anyway. But a small band of brainy rebels never lost faith in the potential of AI to change the world. Thanks to their perseverance — along with dramatic improvements in computing power — they managed to make Rosenblatt's prophecies a reality. The AI they built is what enables Facebook to recognize faces in the photos you upload. It's the reason Siri and Alexa can (sometimes) understand what you're saying, and Google can translate anything you write into 109 languages. Cade Metz has spent years chronicling the rise and rise of AI, first as a reporter at the New York Times and now in his new book, “Genius Makers.” In this forward-looking conversation, he tells Rufus what AI can do, where it's headed, and whether we should be worried that supercomputers will wage war against humanity.

Machine Learning with Coffee
20 Perceptron: Machine Learning Begins

Machine Learning with Coffee

Play Episode Listen Later Mar 15, 2021 15:49


We introduce the concept of a perceptron as the basic building block of a neural network, and talk about how important it is to understand backpropagation applied to a single neuron.
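The episode is audio-only; as a companion, here is a minimal single-neuron sketch in Python (not from the show): one sigmoid neuron trained by gradient descent, which is backpropagation collapsed to a single unit, learning OR:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Truth table for OR: inputs and target outputs
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b, lr = 0.0, 0.5

for _ in range(2000):
    for x, target in data:
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Gradient of squared error through the sigmoid: the "backprop" step
        delta = (y - target) * y * (1 - y)
        w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
        b -= lr * delta

for x, target in data:
    y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    print(x, target, round(y, 3))
```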

Reversim Podcast
393 Bumpers 68

Reversim Podcast

Play Episode Listen Later Jul 27, 2020


Bumpers episode 68 (393 in the Reversim count): Ran, Alon, and Dotan meet again on July 8th, 2020, at the height of the second wave, recording from home over Zoom . . . and nevertheless, Bumpers: Ran, Alon, and Dotan with a series of shorts about what happened on the web, what interested us, interesting blog posts we ran into, interesting repos on GitHub, and more. So let's dive in . . .

Ran - Microsoft open-sourced the software called GW-BASIC - who remembers what that is? It is a slight refinement of regular, most-basic Basic. GW-BASIC was one of the most popular versions of Basic - it may well be that if you know Basic, you know this version. Microsoft published both a blog post and a GitHub repo containing the full source code of GW-BASIC.
(Dotan) Kudos for putting real history into Git - it says "38 years ago" here . . . (Ran) They apparently reconstructed the history, since Git didn't exist 38 years ago . . . You can open the ASM files (Assembly!) and read the instructions - actual machine instructions in which GW-BASIC was written. Fascinating for those who are into it - or just nostalgia for those who are less so.
(Dotan) You know what this means? (Alon) That we can start writing BASIC? (Dotan) That too - and that we need to start opening Pull Requests . . . why is there no Source folder?! Why no Make?! (Ran) Totally - in terms of code quality . . . (Dotan) There are no folders here at all! I'm looking for where to go in and there's nowhere. (Alon) I don't know whether Windows knew how to work with folders 38 years ago - actually it was still DOS back then . . . (Dotan) Yes, there's a Code of Conduct here and Contributing . . . contribute! Oh, actually - "Please do not send Pull Requests" . . . (Ran) Although there are updates here and there - I saw one from two months ago, so it's not all exactly as it was 38 years ago, but most of it is. (Dotan) And everyone is so disciplined - there isn't even a single Pull Request, not open, not closed, nothing . . . (Ran) Well, they own the platform, let's not forget.

Stack Overflow's annual survey for 2020 was published recently. Every year they put out a survey and it's always interesting to read. This time the thing that stands out most, to me, is that visually it's stunning . . . simply beautifully designed. There's a lot of content too, but the first thing that jumps out (yes . . .) is the design, with interactive JavaScript and all kinds of moving charts. 65,000 developers from around the world answered the survey - you can see their demographic details and so on. I don't remember a specific item about interesting questions or answers, but there's a huge amount of information - everyone will find what interests them. Lots of demographic and industry trends - technologies and such. It's simply fun to look at, with many infographics of all kinds.

If you remember, in one of the previous episodes I mentioned I was reading several books and hadn't found anything interesting yet - well, I found a good book I do want to recommend. Dotan, remember? You said that once there's something to recommend, we'd recommend it? So here it is - a book I'm still in the middle of, called An Introduction to Machine Learning, in a field I've been working in lately. I downloaded it online and am reading it as an eBook. What I like about it: (1) it's written in very nice language - unlike other books I've read with somewhat "broken, annoying English", this is genuinely pleasant prose; and (2) it has a great many exercises, at the end of every chapter, which really help internalize the material. There are three kinds of exercises: "thought exercises"; "take pen and paper and compute"; and writing programs that implement a Perceptron or a classifier of one kind or another - and that really helps the material sink in. The book is An Introduction to Machine Learning, published by Springer; the author is Miroslav Kubat, an American from the University of Florida (Miami). If you're up for getting acquainted with Machine Learning, it's a fairly deep introduction, I have to say.
(Dotan) How pragmatic is it? Or to ask differently - do you need to know linear algebra beforehand, to recall all kinds of things from university, or is it very pragmatic?
(Ran) It's not very pragmatic . . . it doesn't talk about libraries like Pandas or TensorFlow, doesn't talk about tools at all. It stays at the theoretical level - but the exercises are practical; you actually write software. I do those exercises in Clojure, out of my masochistic streak . . . You do get some programming experience - but it's not pragmatic in the sense of "getting to know real tools". In terms of background, I think undergraduate-level math is entirely enough, probably even less; maybe just the first year of a bachelor's degree: linear algebra at a not-too-high level, calculus also at a not-very-high level - you need to understand what a derivative is, what an integral is, things like that. A first year at university in any of the scientific subjects gives you enough background, plus a bit of probability and statistics, maybe a bit of combinatorics but not much. That's it . . . it's not an easy book, I have to say (because until now it sounded great) - it demands slow reading and thought, so even if you have the background, it's not a novel . . . it requires thought, depth, and above all practice. In any case, I like the book. Recommended!
(Alon) Good to know . . . but if you haven't finished it, we can still spoil the ending for you! We'll tell you which program you write at the end . . . it's a Machine Learning book, what could possibly happen? (Ran) Is the classifier positive or negative?

A different but somewhat related (and pragmatic) topic - a GitHub blog post describing how they do MLOps (Machine Learning Ops) using GitHub Actions. GitHub Actions is a feature about a year old, maybe more, which lets you run not just CI on top of GitHub but more general automation; for example, running a pipeline on every push. Here they describe standard Machine Learning tasks, collectively called "MLOps" - not that they invented the term, it already existed - for example data cleaning, Feature Engineering, or running various frameworks (in this case binder). And all of it in a Pull Request, which is nice. Often when you develop a model and want to optimize it, you want to see that you didn't make something worse, didn't break anything - and it's nice that all of this can happen automatically. You think you improved something - you committed some parameter change and suddenly discover you broke something else . . . that's the whole concept behind Continuous Integration. In this context MLOps is the answer, and they demonstrate it using GitHub Actions.
(Alon) That sounds really basic . . . what's their news here? (Ran) As a concept, for us as engineers, there's nothing new - but they do show how to integrate the relevant tools: how you extract data, how you do Feature Engineering, how you run the model - all inside their containers. For anyone who's been doing CI for years there's nothing new, agreed - it's not a new concept, just something more practical, showing the tools themselves. (Alon) Amusing that they use Argo for workflows rather than something internal . . . I didn't know anyone used it besides us . . .

A language called goplus - and yes, it's "Go with a bit more" . . . It's a sort of superset of Go: every Go program is also a goplus program, but goplus has extra syntax that lets it look a bit like a script, a bit like Python in some sense. You don't have to declare a function; you can simply write a := and assign some array to it, and so on - it gives a Python (or Ruby or JavaScript) feel, but with a very Go-ish syntax - a bit like taking Go and turning it into a scripting language. A few notable features: you can just run it as a kind of script, with no function needed to run something; and, like Python, there are list comprehensions (and map comprehensions), which every Python lover surely knows - for x in . . . where x > 3 - applicable to both arrays and maps, very compact and nice. It's fully compatible with Go, and there are many more features. There's also a playground - just as there's a Go Playground, there's a Go+ Playground, which is nice. The whole concept, according to what's written, is to be friendly to Data Science: the tagline is "The Go+ language for data science". Why "friendly to Data Science"? Because data scientists usually work inside notebooks, write short scripts and want to see the result - and writing a program in Go is sometimes overhead that appeals less to data scientists, which is why Python is so attractive. So goplus brings some of Python's advantages over. Of course the significant part is the libraries - some may exist, but nowhere near Python's level; the language, though, is already here. Is it sacrilege or a blessing? Don't know, everyone has their own take . . . whoever loves Go exactly as it is will probably call it sacrilege, but whoever wants to see Go evolve in various directions - this is perhaps one of them. By the way, I don't see Go's developers adopting anything from here - it's an entirely different language. Think of it like C and C++: some will simply stay with C forever and never move to C++, and the two don't mix. In any case, it's interesting, a repo with a lot of work invested - and very popular on GitHub.
(Alon) There are some really interesting concepts here . . . the error handling is something I really connected with; it's much more sensible in my opinion. I think bringing Go to Data Science is interesting, but in my view it won't come from Go but from Rust, because Facebook is pushing it hard. Still, an interesting and welcome concept.
(Ran) By the way, there are Data Science libraries in Go; they're not as rich as Python's, but they definitely exist. We'll see . . . Rust is interesting too - it may be that the core libraries, written today in C++, will tomorrow be written in Rust, but the end users . . . most data scientists don't write C++; they write Python or R, and I don't see them moving to Rust just like that, unless they really need to write libraries themselves, which isn't most of the time.

Alon - let's start with one of the hot topics, the Black Lives Matter protests: people began "cleaning house" in various languages. Staying with Go: a pull request to remove all references to whitelist/blacklist and master/slave from Go's core library. I put this as one of my first items, and then it started catching on in all sorts of other places. The idea is that whitelist/blacklist is hurtful and should be replaced with allowlist/blocklist - which are also clearer names, truth be told - and master/slave becomes primary/secondary, I think. In short, many languages started changing, not just Go, and the terms we're used to will apparently change in the near future. The only thing I haven't yet seen changed is the Git repo - the root is still master . . . but I haven't run into protest in that direction yet.
(Dotan) I have to admit I stumbled here - I went over the commits, just to look, and landed on a to-do - they changed the text in a to-do, where there was a split so it would be possible to allowlist instead of whitelist - so if they already went in and changed it, why not just do the to-do? . . . (Alon) If you look at fmt, for instance, they changed blacklist to blocklist there . . . (Dotan) Yes - but there's a comment saying "to-do: this needs a different implementation", and if you're already refactoring the comment, implement it already . . . (Alon) Look, I didn't dig into it . . . (Dotan) But you're already there! You changed whitelist to allowlist . . . (Alon) In the end it's Copy-Paste-Replace . . . yes, they changed it - you can go over the commits; some really are just comments (inside the GC it's a comment) . . . inside loader.go they changed whitelist to allowlist. (Dotan) So one has to go file by file and proclaim . . . (Alon) Yes, there aren't many changes - but they did the work, and it's not the only place this change was made.

A nice tweet I ran into - Ashley Willis asked: What's the best tech talk you've ever seen? What's interesting is that there are hundreds of replies with links to talks, each person claiming theirs is the best talk they've ever seen. I skimmed it and told myself I'm keeping this link - and the next task is to filter out a watch list, because it's surely worth something: if everyone posts the talk they think is best, there's bound to be a respectable list here, "wisdom of the crowd" and all. Looks like a very useful link for anyone hunting for talks to watch.
(Dotan) Is there crawling on this yet or not? . . . (Alon) No . . . (Dotan) Here's your opportunity - sometimes the same link was posted twice, so you'll know where to start. (Ran) I wanted to say it's amazing, in terms of Israeli innovation, how we bring our personal touch to everything; the Jewish mind is simply amazing . . . (Dotan) You just need to find a picture of someone presenting some slide, and then when you click . . . (Ran) Yes, in the nineties that was one of the best. (Alon) You'd make millions; lots of liras would come out of it for you . . . In short, some of the talks here are ancient and some are from recent years; people posted talks even from the 1900s - I don't know whether the presenter even had a computer back then - and some are really from the last three or four years, so probably more relevant . . . seems cool to me.
(Dotan) I also don't see Remembering Joe here . . . (Ran) Joe Armstrong's? I think I know it . . . (Dotan) It was in one of our episodes (the cosmic 369!), what do you mean?! (Ran) Fine, not everyone listens (sure, some only read). (Alon) I actually think I saw Joe Armstrong there, pretty sure. In short: go over it, make a shorter list, we'll let Ran trim it further, and then I'll have a look. (Dotan) I'll do the good old ones, you do the modern cool ones. (Ran) And I demand that every list include at least five talks from past Reversim summits . . . (Alon) Here's an opportunity to get into that list and start bombarding it . . . a call to all speakers: everyone, post the link to your own talk there - and then you boost the conference. Which conference? 2020?

An Israeli library - golang mediary - from Here Mobility: adding interceptors to http.Client. They sent it to me - I looked - nice - happy to plug it. The idea is that you can hook onto the HTTP request - before the request, after the request - and then manipulate the request itself or the response. You can add logs or Security things or statsd . . . there are examples, also Tracing . . . could be interesting. Looks cute for whoever needs it, a relatively young library - good luck! I liked it.

And we'll continue with Go, that's how it turned out this time - mockery is a library for creating mocks in Go. A very simple, cute library - anyone looking to write unit tests and wondering how to mock code - worth a look. Nice, simple, lightweight, useful and convenient.
(Ran) And one of the most popular ones - there are another one or two, but this is among the most popular. (Alon) What's surprising is that even the popular ones aren't that popular . . . fewer than 2,000 stars is . . . or maybe people don't write tests, also an option.
(Ran) I think you simply need far fewer mocks, especially in Go, mainly thanks to the interfaces approach - a function that receives an interface: if the interface is "thin" enough, it's so easy to mock it yourself that you don't need any framework. When would you want a framework? Either when the interfaces are relatively long and you don't want to mock everything yourself, or when you want to do spying: counting the number of calls or something like that, and then you'd reach for a framework. In my own tests I simply create instances of the interfaces without any framework - more compact, more understandable in my view, and it doesn't require learning another framework - I think that's at least part of the explanation.
(Alon) True, but often there are complex things . . . that works for simpler cases, but when you get to a third-party library, usually, with all kinds of connections and things happening . . . it's more complex. I once tried to mock S3, and it wasn't pleasant.
(Ran) In cases like that I really wouldn't take it on myself and would indeed use a library - or I'd use integration tests: for example, spin up a container that exposes an S3 interface. Know Testcontainers? They have loads of containers with all kinds of tools - S3 is one of them if I'm not mistaken, there's SQS, and of course all the standard things like various databases. So you can just spin up a container - and by the way it has Go support: you can write a test setup that brings up a container at the start and tears it down at the end, and sometimes that's more convenient than mocking it yourself. It runs slower, but on the other hand it's a bit more faithful, API-wise.
(Alon) For integration tests that's the nicest - but that's already an integration test, not a unit test. (Ran) Right, it's no longer a unit test - but you're already working against S3; is that still a unit test? A philosophical question . . . if you're working against something heavy and external anyway, it's probably not really a unit test anymore. (Alon) That's clear; we're getting into philosophy here . . . (Dotan) It's a matter of taste, in the end - taste and balance.
(Ran) Totally - I'm not trying to settle what's an integration test and what's a unit test, we'd never get out of that alive - just saying you have a few options here: one is to mock using mockery or other tools; a second is to take the interfaces and implement them yourself, convenient when the interfaces are relatively "thin"; and a third is to bring up a service, if you're talking to a service - bring it up in a container alongside; or, heaven forbid, talk to the real service (say, the real S3), though in most cases that's the least recommended. If you do go the container route, there's a framework called Testcontainers with support in many languages - Java and Go and surely many more - that lets you bring up a container during test setup and tear it down at the end, and that integration is very nice.
(Alon) Really cute - and there's always the standing recommendation: the best is a real test - a test on Production! Why not make use of it? (Ran) Famous last words . . .

Dotan - a library Apple released, more like a framework, called ExposureNotification. To connect it to current events - they created a standard framework that models exposures to COVID-19. It's part of their recent announcements (the iOS 13.5 release): they saw all kinds of governments and apps trying to model coronavirus exposures on a map and so on, and they built a standard API for it. If you want to build such an app now, you can use this library, and it helps you here and there. I went in to read the interface, and there are some cool parts, perhaps coming from the language of medicine. For example, I was momentarily confused by "Transmission risk level" and "Signal" - I took them in the direction of radio . . . (Ran) You were probably thinking Fourier transforms, but the intent is biology . . . (Dotan) Exactly . . . transmission of the disease, maybe the signal of the disease? In any case, it looks interesting, at least at the API level - you can read what coronavirus looks like through an API . . . it's cool, and of course if someone wants to build a popular App Store app, this eases the pain . . .
(Ran) By the way, we haven't talked here about how coronavirus-tracking apps work . . . broadly, to my knowledge, there are two kinds. One works by proximity - it uses Bluetooth and tracks who is near whom; say you're in a public place, your Bluetooth "talks" with others' Bluetooth, and that's how you know you were close to someone - and if he's later found to be sick, that trace exists. How is it stored and how is discovery actually done? That's another story . . . but at least at the fundamental, physical level, detection is via Bluetooth. The other method is based on location - GPS and so on. To my knowledge, the Bluetooth method is called "the Singapore method", and it's the one both Apple and Google ultimately adopted - when they talked about "Apple and Google joining hands in a joint effort", that's what was meant, to my knowledge: the Bluetooth-based method. Except it won't be an app - it will be embedded right into the operating system, and it will be battery-efficient and all that. The method of the Israeli app called HaMagen ("The Shield"), which I assume many of you installed, is actually location-based - and each approach has pros and cons. Bluetooth is really more faithful in resolution: Bluetooth should pick up within a few meters, while contagion is defined, I think, as being within two meters or less of a person for a quarter of an hour - and two meters or less is something Bluetooth usually detects and GPS less so, since (civilian…) GPS works at a coarser resolution. On the other hand, Bluetooth can also pick up from ten or twenty meters away, depending on weather conditions, background noise and such. Each of them can produce false positives, and maybe false negatives too - I don't know the cases, but there may be some. That's it - I think it's interesting to talk a bit about the technology behind this, but I ask myself whether Apple and Google can really take Bluetooth and significantly lower its false-positive rate, because to do that you need access right down to the physical layer, to understand the actual signal strength and interference levels and so on, to judge whether the person is really near or far from me.
(Dotan) And that's a call to Apple and Google - to send a letter to the editor (AWS have been listening for a while . . .), but yes - it's cool.
(Alon) First of all, you heard it here first, folks, because we always predict things, it's known. But wait - "a million years ago", when I worked at Intel, there were Bluetooth sensors and we could tell where a thing was by distances and Bluetooth strength - even then we meshed everything with Bluetooth and could say where the wafers were at any given moment by distances - so this is something that has existed for many years.
(Dotan) Bluetooth signal strength, if I remember correctly, is available in iOS. (Ran) Right, it exists - the question is only the accuracy: sometimes the strength reads "5" when you're two meters away and sometimes it reads "5" when you're ten meters away . . . it's not precise. You can maybe say, relatively, who's near and who's far.
(Alon) Look - I can tell you we may have been (literally) in lab conditions, but in lab conditions it was very stable . . . it was very clear and worked very well, identifying locations by distance; that was still in the days of "Bluetooth 0" or whatever technology it was, and Bluetooth has advanced quite a bit since, so maybe it's different now - but back then it worked, so I don't know what the problem is . . .
(Ran) The physics changed . . . honestly, I have no deep knowledge here, so listeners who know are welcome to correct me; to my understanding it simply depends a lot on environmental conditions, and there really is a very significant difference between lab conditions and the field - it depends on humidity, on other nearby devices, and I assume a few more parameters. But again, I'm certainly no domain expert, and I also heard or read this somewhere. In any case, I think the takeaway is that there really are two models, and the answer may be some combination of both to reach higher accuracy - but the two models, broadly, are one based on location services (like the Israeli HaMagen app) and the other based on Bluetooth. That's it, Se Tu.
(Alon) Just to finish - the physics did indeed change! In my day the world was round and now they say it's flat, so that probably changed all the physics. (Dotan) And then came 5G . . .
A library and tool - streamlit. Python-based, or at least aimed at the Python community, or so it looks. For those who know Swift Playgrounds - remember Apple's announcement of Swift, and how it then appeared on the iPad too? You write code and a visualization of your code appears, everything is interactive, you can drag sliders, and your code responds to the sliders. So they took that concept and did the same for Python. At least from the ReadMe, it looks like the target audience is mainly data scientists and people who work with data. I played with it a bit and it's great for anything - the moment you have interactive sliders and controllers and some Python function you want to play with, it can very quickly become a teaching tool, regardless of Data Science; a sketch of what such an app looks like follows below.
(Ran) I'm waiting to see this get into Jupyter Notebooks, because it begs for it - many times I've wanted to build some visualization with a slider control or something like that, and until now I hadn't found one, so maybe this is the answer; it just needs integration into Jupyter. (Dotan) I haven't seen anything like that . . . it does look like there's a company behind this, sort of . . . I assume they wanted to replace it or be an alternative, because it looks a bit like Jupyter.
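Not from the episode - just a minimal sketch of the kind of app streamlit enables, using its documented slider and chart calls (the sine-wave "function under study" is invented for illustration):

```python
# app.py -- run with: streamlit run app.py
import numpy as np
import streamlit as st

st.title("Exploring a function interactively")

# Moving the slider re-runs the script and redraws the chart
freq = st.slider("Frequency", min_value=1, max_value=10, value=2)
x = np.linspace(0, 2 * np.pi, 500)
st.line_chart(np.sin(freq * x))
```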
A bit of nostalgia - Cryengine, or Crytek - the company behind Cryengine, behind the game Crysis - open-sourced the code of the first Crysis engine. We don't play the first Crysis anymore, but I remember it, because it's the kind of game that changed the world and stays in your head, like Doom and such. So they opened the code and I skimmed a bit - C++, basically, that looks like it was written by one or two developers, "in one stroke" as they say. Interesting for nostalgia lovers - I like looking sometimes; I didn't build it, didn't compile it, and am really not going to, but sometimes it's fun to look at code written in that era.
(Ran) I'm looking at their commits, and it seems they have an interesting commit convention - say B! or T! or I! . . . interesting what that is. (Dotan) Honestly, I saw it and it looked like noise to me, but you're giving it an interesting twist . . . (Ran) There's apparently some commit convention here that I'm trying to decipher . . . two commits back there's XB! (at least there was at recording time . . .). (Alon) And also XI! . . . that's cool, now I have to understand what it means . . . T! is just text, you can see it's just Copyright and such, so that's already interesting. (Ran) Maybe B! is Bug . . . what's I!? . . . (Alon) U! is probably User Interface . . . no, actually it's Undo . . . nice. (Dotan) There are a few more interesting things here - there's a commit fixing what looks like a bug, from a month ago - now, this is Cryengine, from 2004 . . . what's going on here? (Ran) They probably worked on it to release it as Open Source. (Dotan) Could be . . . interesting; these are the parts I love digging into in very old code - you discover all kinds of things humanity no longer does. (Alon) Now just search it for security holes and guess what carried over to the newer versions . . . (Dotan) Yeah, ha . . .

The next item is backstage - a Spotify project they decided to open-source. It's a developer-portal framework; they call it an "open platform for building developer portals". I have to say I read that and really wanted to know what it is - and when I saw it, I really didn't want to see what it is . . . I don't know, I'm still digesting it. It looks like a wiki merged with dashboards, all oriented at developers at Spotify - if you're in a squad, you have your squad metrics in your face; if you want to read news, you have Spotify news there; if you want to see service metrics, that's there too - basically, your whole world in one place. Maybe I'm a bit old-school, but I connected with it a bit less; it radiates "a robot working for a company" whose whole world closes in one place . . . when I read it, I thought I was going to see a developers' portal in the sense of all the developers' knowledge and projects and the tools I can use to speed up my work - but I actually see a sort of "control mechanism", or "strings around the puppet". But wander through it, it's cool.
(Alon) I still haven't understood what I can do with it, whether it's good or bad - I need to watch the video, sorry. (Dotan) You have a GIF, no need for video . . . (Alon) The GIF doesn't tell the whole story . . . in the GIF it actually looks cute: you build dashboards, you have all the metrics you need, if something's interesting there's something to look at . . . could be nice. (Dotan) It's a bit of a fallacy, because first of all - if you train or condition people to look only in one place and never leave that place, then okay, fine - there are all kinds of widgets here, and if someone put up a widget you were supposed to know about and you didn't know, then for you it doesn't exist. (Alon) You can check CI, check metrics, check logs . . . you have one place instead of wandering around, and that's not bad.
(Ran) No - and companies do this anyway, so come on - every company builds one of these for itself; every company I've been at built one, so it can be nice to start from something ready-made. You can argue it has downsides - once you build such a portal, people don't look left and right - maybe, but on the other hand everyone builds one, because I think the benefit outweighs that downside. Now, is it a good portal? I don't know; but is a portal needed? I think so, I'm quite convinced it is.
(Dotan) There's always Jira and you have your worlds . . . what I know is people building, but building in the form of a tool, and here the feel I get is "this is your world, and your browser is locked into this thing and that's it". It's a feel, it's not really . . . (Ran) Could be . . . I agree it should have an API, that it shouldn't be UI-first but API-first; any action you can do through the UI you should be able to do through the CLI with a client and so on. Still, I think it's right to have a developers' portal with everything they need - you know, basic things like a Service Catalog and metrics and how to create a new service, who the owner of each service is and what the dependencies between them are, and things like that. By the way, not everything there is basic; some of these things are complex, but it's all useful in my eyes. Every company I've been at ended up building one, so I think it's nice to start from something - but I don't know, you should give it a test run and see whether it's really the right tool for you. (Dotan) No, now it looks . . . less so, but try it. (Alon) Don't listen! Spotify - you can't trash them - Spotify Family finally arrived in Israel (non-sponsored link . . .), so I'm asking: no trashing them! (Dotan) Not trashing . . . it's great, an amazing tool!

The next library - rich - does colors in Python. I have to say this is finally a library that looks good, for anyone who wants to create a developer experience a notch beyond Python's standard. It does everything in color, the whole palette - tables and spinners and progress bars; it does syntax coloring in the terminal too, and more and more - it even renders Markdown; cool. Once you take a library like this, you have the freedom to do whatever you like - inside the terminal you can render Markdown, output tables . . . I assume cool tools will be built on top of this library and thanks to it. I really liked it - and it makes you want to write new command-line tools in Python that look good. Use it! A small usage sketch follows below.
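A quick sketch using rich's console and table API (the build-status content is invented for illustration; the calls themselves are from rich's documentation):

```python
from rich.console import Console
from rich.table import Table

console = Console()
console.print("build status: [bold green]passing[/bold green]")

# Tables, spinners, and progress bars come out of the box
table = Table(title="Test results")
table.add_column("Suite")
table.add_column("Outcome")
table.add_row("unit", "[green]ok[/green]")
table.add_row("integration", "[red]failed[/red]")
console.print(table)
```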
A library named texthero, which does text processing. The emphasis is on it being light and easy - I liked its text-cleaning functions, but it has more capabilities. You install it and immediately have all kinds of popular algorithms for working on text. Not overly deep, but not too shallow either - simple and really nice.

For those who don't love Docker's documentation, there's docker-cheat-sheet (under Docker's GitHub org) - all the documentation from the site, flattened into a single Markdown file in one repository. Nice - it's easier to search, and convenient to keep open all the time . . . (Alon) It says "4 months ago" here . . . (Dotan) Yes, the official documentation probably updates more frequently, but the basics are here, and most of what you occasionally forget - you have it.

Another library, named mimalloc - a somewhat more low-level, hardcore topic we've touched on in the past - an allocator library that Microsoft released. It has become more or less the best-performing allocator on the market. Where is it relevant? For libraries or tools built on C++, and, in my personal space, on Rust. We're seeing differences that are fairly significant - it does memory allocation 5 or 6 times faster than what you get by default, and there are 10x and 20x gaps versus other alternatives. For anyone who cares about performance, with a code base doing tons of allocations and heavy-duty work in Rust: swapping in this allocator is a few minutes of work, and you can see whether it improves your benchmarks. In other languages I assume it's similar. Bottom line - it's becoming less experimental and already looks quite ready for use.

Another item that amused me precisely because of its feel - hackingtool: a tool for hackers, like in the 90s! Someone took a Python script and built those prompts and a huge logo and so on - and all it really does is launch a bunch of other scripts; it just made me laugh. (Alon) Wait . . . we work from home now, but at the office, with a window like that permanently open? Listen - a hit! (Dotan) Yes, really 90s, it took me right back - menus upon menus: you press, the next menu appears, another title, another menu, until you finally reach the thing you want and tell it "run!" . . . pure 90s nostalgia. There are tons of hacking tools out there, really a lot, and he took just a few - I don't know whether it's the best tool for hacking or pen-testing, but it's definitely the best at bringing back memories.
(Ran) I remember there used to be whole Linux distributions meant for this, with all the tools pre-installed . . . (Dotan) Oh, there are! Still are. (Ran) People still make those? (Dotan) Sure . . . what happened is that, for example, KALI and Backtrack became companies in some way - security companies financed or acquired them, and an entity formed which, beyond shipping a Linux distribution with lots of security tools, is also a thought leader in the pen-testing world; part of what it does is release the distribution called, say, KALI. So not only do they still exist - they've multiplied, and there are quite a few. In "the days of bad internet" I had one, as standard kit, in my bag - and when I needed internet I would "obtain" some this way. The WiFi of those days wasn't so sophisticated either - a few minutes and you'd have someone's WiFi password . . . today it's less relevant, it's harder to do.
(Alon) Tell me - did you run it? Is there music too, like in the old days? (Dotan) No . . . no music, but that's a great idea for a Pull Request. (Ran) Does it come with a hoodie? (Alon) Seems it's our turn to get a hoodie . . . (Dotan) Amazing ideas; we should add them to the Pull Requests - "add music!"

And another one - EasyOCR: someone took a neural network - everything we know about neural networks, Deep Learning, and text recognition - packaged it into a library, and created an OCR that recognizes quite a few languages. I think the emphasis is on ease of use, or whatever we want to call it. Basically, in three lines you have OCR - what we would usually do with tesseract, the free one; here you can take it, try it, and see whether it gives a meaningful advantage over the other free OCRs.
(Ran) A reminder for those who forgot - OCR is Optical Character Recognition, the ability to "read" text. (Dotan) You give it an image - you get text. And while we're on the topic - the "first generation" OCRs took fonts and were somehow coupled to those fonts in the way they recognized text; today it's neural networks, so the difference is fairly serious. In any case, EasyOCR can do it in English and also in somewhat more exotic languages: Chinese, Thai, and so on. Interesting. (A minimal usage sketch follows below.)
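Roughly the "three lines" in question - a sketch based on EasyOCR's documented API (the image filename is a placeholder):

```python
import easyocr  # pip install easyocr

# Loads the detection and recognition models for English
reader = easyocr.Reader(['en'])

# readtext returns a list of (bounding box, text, confidence) tuples
for bbox, text, confidence in reader.readtext('street_sign.png'):
    print(f'{confidence:.2f}  {text}')
```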
Another item - gitqlite: here I again had that "how has nobody done this before?" feeling - someone took a Git repo and took SQLite . . . we once had an item like this, where someone takes data, loads it into SQLite, and builds a querying library on top . . . I think it was even someone Israeli; it was called q, no? If I remember correctly . . . (Ran) Harel Ben-Attia wrote q, which indeed takes data, puts it into SQLite and then queries it. (Dotan) Yes, there it was JSON if I remember correctly, and here it's Git commits, or Git in general - I assume that's how he built it: took the Git log and did some parsing, or maybe something a bit more sophisticated, pushed it into a few SQLite tables, and now you have a command-line tool for running queries over your repo or over Git - which is pretty cool. A simple idea of the "how did nobody think of this before?" kind. (Ran) In q's case, I think it had several input types - JSON and CSV and the output of commands, which it could parse as tables. (Dotan) Cool . . . worth checking what he did in gitqlite, but maybe it can be piped into q . . . actually no, it's SQLite . . .

And an (almost) last item - practical-python: I don't know if it's such a highlight, since there are so many resources for learning Python, but when I looked at it something jumped out at me - the name of the person who made it is David Beazley, and whoever did Python in the 2000s knows David Beazley; Ran knows for sure . . . (Ran) Don't know him . . . (Dotan) He did the Python Cookbook and was quite a pioneer in the world of Python instruction. What he's doing here is opening up his course - which, he writes, he has taught more than 400 times, a sort of training of his - making it free and open on GitHub, and you can go do the course. There are exercises, and he claims, I assume rightly, that this course is essentially his teaching, polished over something like 20 years. Worth at least a look at what's there.

And a really last item - HEY! Bordering on drama; I assume you heard what happened with HEY . . . (Ran) No - tell us! (Dotan) So there's the new email called HEY!, if one can call it that, which DHH . . . (Ran) It's an email client? (Dotan) I don't know about client, it's really email . . . it replaces Gmail in some sense, and DHH and Basecamp and that whole group released it. It's not by Basecamp, but it's part of the Basecamp family of tools, I think, in the productivity vein. What he says is that he released an email that doesn't belong to any big entity - don't know whether to add "evil", but that's probably his intent - one that supports privacy and so on. But here's what developed: DHH, as is his way, has a very well-known business mantra, and when he submitted the HEY! app to the Apple App Store, it violated the in-app-purchase policy - you got an app you can't use unless you go to HEY!'s separate site, unrelated to the App Store, pay there, and only then can you use it . . . and Apple - of course this contradicts their terms and conditions; you can't ship an app that can't be activated without paying, with the payment outside Apple's ecosystem - so they banned the app . . . and then began something like two weeks of DHH Twitter-terror against Apple, and so many threads and crazy conversations developed on Twitter that it kind of "broke Twitter" - and in the end Apple relented. And that was HEY . . . (Ran) Wait - so they let him take purchases outside the App Store? Inside the app? (Dotan) They sort-of relented, and he also sort-of relented - but it was . . . if you read Twitter in those days, it looked like a war with nobody willing to climb down from the tree - so in the end he made a sort-of-free version and they sort-of waived their rigid rules. Someone even opened a site . . . there was some VP at Apple who said "You download the app and it doesn't work", and then someone opened a site named YouDownloadTheAppAndItDoesntWork.com - with screenshots of all the apps you download that don't work. The joke is that they don't really not-work . . . among others there were Spotify and Netflix and so on, all on this model - at Apple they said those are "Readers" and not exactly apps, but Gmail is also a Reader . . .
In short, all kinds of complicated philosophical discussions developed there. Some claim it was a PR stunt by DHH, because it generated enormous publicity - beyond Twitter it made huge waves across all the "geek news sites", but it's… what remains is to try HEY and attempt to replace the email you use for free - with a paid one.
(Ran) Nice, so you've supplied today's drama, definitely. (Alon) I still don't understand why I need to replace my email in all this story . . . (Dotan) As I said - you're welcome to replace your email with another email - for pay! (Alon) Instead of for free? (Dotan) Yes. (Ran) I think that's the part he doesn't get, Dotan, but we'll explain to him later.
(Dotan) Anyway - what he's selling, in the end, is privacy - at a price of $99 a year you get privacy: he blocks trackers and such, and you get a hey.com email address, which is, like, cool . . . Shall we open a cynicism paragraph for a moment? Yes: before the launch, while DHH was heating up all of Twitter, someone replied to him: "I already got access to HEY, and the address is Hey@username" - she flipped the domain and the name, which is, like . . . in the end you're paying for a three-letter domain, that's what's happening. (Alon) Right - and then try dictating it to some phone service: "Where should we send it?" - "To Alon@Hey.com" - "What?! H?" - people don't get it; forget it, who cares about three letters? (Dotan) There was a nice bit there - I got an invite relatively early, and the first thing you do when you get an invite earlier than everyone else is try to grab names . . . there's a nice thing there with short names - say, two letters cost a crazy sum, but three letters is already $350 a year, I think - and then you start wondering . . . Of course I tried "DHH" - taken . . . then I tried DNH, which is a bit like DHH - and it was available. So just so the phishers out there know, you can do interesting things . . . but no - I didn't pay. (Ran) You didn't pay $350? (Dotan) No - didn't go for it. (Alon) Someone once had a script for grabbing short names on Twitter, but let's stop here. (Ran) I can already see the next blog post: "You buy a three-letter name for $350 - and it doesn't work!" (Dotan) "com."

(Ran) Well, we've digressed a bit - time for the funny bits, to lighten the mood after this serious drama . . . The first - a tweet by bradfitz, one of the famous developers in the world - he was on the Go core team and wrote Memcached back in the day, among other things. He wrote on Twitter that he was fresh out of a long day of interviews and wanted to vent his frustration - so here's the question: "Print the largest even integer in an array of integers" - and supply me only wrong answers. And it got funny . . . people offered all kinds of ideas for how to print the largest even number in an array of integers. For example, one answer was simply "print(a)" - print the whole array, and the largest even number will probably be printed somewhere in there . . . it works. Another answer: loop from 0 to MaxInt and print all the numbers - in this case too, the largest even number in the array will probably get printed somewhere. In short, there were all kinds of clever answers, like "first you need to create a model and then train it", and there was an answer in Shell with grep and sort . . . all kinds of very amusing answers; you're welcome to go over the thread on Twitter. And yes - some also gave references to answers on Stack Overflow . . . they made a feast of it. Nice, amusing.

The next item - ypp, or Yid++ as they write it - "the oylem's first programming shprach". Those who know Yiddish are welcome to translate . . . Yid++ is actually a compiler from Yiddish to C++, if I'm not mistaken - in fact the world's first such compiler, or something like that. You're welcome to go read Yid++ source code: for example, be_soymech_on is #include <iostream>, and holding shitta std is using namespace std - whoever remembers C++ will surely see the resemblance. There's also bli_ayin_hara main() bh, which is essentially void main(), returning "bh" - which I assume stands for "b'ezrat Hashem" ("with God's help"). And at the top, of course, it says "BSD" in big letters - which here is "b'siyata d'shmaya" ("with Heaven's aid"), of course . . . (Dotan) Also legally confusing . . . (Ran) I'm sure that's no coincidence . . . (Alon) I wonder whether it compiles on Shabbat . . . (Dotan) You took mine! I was just waiting to say that! (Alon) Sorry, can you delete my last sentence? (No.) Dotan, what did you want to say? (Dotan) Does it compile on Shabbat? Will the compiler work on Shabbat? (Ran) Let's read a few more gems from the language - for example, be_machriz<< is cout<<, for printing. There's another Yiddish word here I don't recognize . . . in short - amusing. (Dotan) Meanwhile I'm looking at the code - and we should credit the person who wrote this: a guy named Moshe Schorr from Haifa, from the Technion - well done! There are also cool things in the code, like a C++ file named ani_maymin.cpp . . . in short, the code too is loaded with things like this. It's so strong that I already believe it's real . . . I see there's even a hechsher here - a kosher certification from some rabbi for the code base . . . we should check that with him. (Ran) The code has a kosher certificate, nice - he went all the way with it; well done, Moshe! I see there are actually two - Moshe, and someone else who contributed - Yechiel Kalmenson, who is from New York. (Dotan) I think it's real, it looks real to me; it's truly a kosher language . . . (Ran) Totally - great work, friends, if you're hearing us. And that's it - a good laugh; read the code a bit, and I'm sure you'll recognize a lot of Yiddish even if you're not a fluent speaker. That's all, we end here. Thanks, Alon and Dotan - it was amusing and enlightening as usual; see you next time.
The file is available here; pleasant listening, and many thanks to Ofer Porer for the transcription

Captain Roy's Rocket Radio Show: The UK Podcast for the Culture Geek, Technology Nerd, and Creative Wizard

This week: Perceptron, Crusty Eyed Monster, Virus Diary, My Fictional 1980s Office, Dragonslayer, Batwoman, Tales from the Loop, and Lockdown Hobbies.Show Notes: https://roymathur.com/podcast/2020-04-16-captain-roys-rocket-radio-show.html

The History of Computing
Polish Innovations In Computing

The History of Computing

Play Episode Listen Later Jan 27, 2020 12:13


Computing In Poland. Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us to innovate (and sometimes cope with) the future! Today we're going to do something a little different. Based on a recent trip to Katowice and Krakow, and a great visit to the Museum of Computer and Information Technology in Katowice, we're going to look at the history of computing in Poland. Something they are proud of, and should be proud of. And I'm going to mispronounce some words. Because they are averse to vowels. But not really; rather because I'm just not too bright. Apologies in advance.

First, let's take a stroll through an overly brief history of Poland itself. Attila the Hun and other conquerors pushed Germanic tribes from Poland in the fourth century, which led to a migration of Slavs from the East into the area. After a long period of migration, Duke Mieszko established the Piast dynasty in 966, and they created the Kingdom of Poland in 1025, which lasted until 1370, when Casimir the Great died without an heir. It was replaced by the Jagiellonian dynasty, which expanded until it eventually developed into the Polish-Lithuanian Commonwealth in 1569. Turns out they overextended themselves, until the Russians, Prussians, and Austrians invaded and finally took control in 1795, partitioning Poland. Just before that, Polish clockmaker Jewna Jakobson built a mechanical computing machine, a hundred years after Pascal, in 1770. Innovations in mechanical computing continued with Abraham Izrael Stern and his son through the 1800s, and with Bruno Abakanowicz's integraph, which could solve complex differential equations. And so the borders changed as Prussia gave way to Germany, until World War I, when the Second Polish Republic was established. And the Poles got good at cracking codes as they struggled to stay sovereign against Russian attacks, just as they'd struggled to stay sovereign for well over a century. Then the Germans and Soviets formed a pact in 1939 and took the country again. During the war, Polish scientists not only assisted with work on the Enigma but also with the nuclear program in the US, the Manhattan Project: Stanislaw Ulam was recruited to the project and helped with ENIAC by developing the Monte Carlo method along with John von Neumann. The country remained partitioned until Germany fell in WWII, and the Soviets were able to effectively rule the Polish People's Republic until a social-democratic movement swept the country in 1989, around the same time the wall fell in Berlin, resulting in the current government and Poland moving from the Eastern Bloc to NATO and eventually the EU. Able to put the Cold War behind them, Polish cities are now bustling with technical innovation, and the country is now home to some of the best software developers I've ever met.

Polish contributions to a more modern computer science began in 1924, when Jan Lukasiewicz developed Polish Notation, a way of writing mathematical expressions such that they are operator-first - so (3 + 4) × 5 is written × + 3 4 5. They continued when the Polish Cipher Bureau became the first to break the Enigma encryption, working at different levels from 1932 to 1939. They had been breaking codes since using them to thwart a Russian invasion in the 1920s, and had a pretty mature operation by this point.
But it was a slow, manual process, so Marian Rejewski, one of the cryptographers, developed a card catalog of permutations and used a mechanical computing device he had invented a few years earlier, called a cyclometer, to decipher the codes. The combination led to the bomba kryptologiczna, which was shown to the Allies five weeks before the war started, and which in turn led to the Ultra program and eventually Colossus, once Alan Turing got hold of it, conceptually, after meeting Rejewski. After the war, Rejewski became an accountant to avoid being forced into slave cryptographic work by the Russians. In 1948, the Group for Mathematical Apparatus of the Mathematical Institute in Warsaw was formed, and the academic field of computer research took shape in Poland. Computing continued in Poland during the Soviet-controlled era. EMAL-1 was started in 1953 but was never finished. The XYZ computer came along in 1958. Jacek Karpiński built the first real vacuum-tube mainframe in Poland, called the AAH, in 1957, to analyze weather patterns and improve forecasts. He then worked with a team to build the AKAT-1, to simulate lots of labor-intensive calculations like heat transfer mechanics. Karpiński founded the Laboratory for Artificial Intelligence of the Polish Academy of Sciences. He would win a UNESCO award and receive a six-month scholarship to study in the US, which the Polish government used to spy on American progress in computing. He came home armed with some innovative ideas from the West, and by 1964 built what he called the Perceptron, a computer that could be taught to identify shapes and even some objects. Nothing like it existed in Poland, or anywhere else controlled by communist regimes, at the time. From 1965 to 1968 he built the KAR-65, even faster, to study CERN data. By then there was a rising mainframe and minicomputer industry outside of academia in Poland. Production of the Odra mainframe-era computers began in 1959 in Wrocław, and Karpiński's work was seen by their maker, Elwro, as a threat, so they banned him from publishing for a time. Elwro built a new factory in 1968, copying IBM standardization. In 1970, Karpiński realized he had to play ball with the government and got backing from government officials. He then designed the K-202 minicomputer in 1971. Minicomputers were on the rise globally, and the K-202 used paging, a concept key to virtual memory. This time he recruited 113 programmers and hardware engineers, and by 1973 they were using Intel 4004 chips to build computers faster than the DEC PDP-11. But the competitors shut him down. They only sold 30, and by 1978 he retired to Switzerland (that sounds better than fled) - but he returned to Poland following the end of communism in the country and the closing of the Elwro plant in 1989. By then the personal computing revolution was upon us. It had begun in Poland with the Meritum, a TRS-80 clone, back in 1983. More copying. But the Elwro 800 Junior shipped in 1986, and by 1990, with the communists gone, the country could benefit from mass-produced computers and the removal of export restrictions that had been stifling innovation and keeping Poles from participating in the exploding economy around computers. Energized, the Poles quickly learned to write code and now graduate over 40,000 people in IT from universities, by some counts making Poland a top-5 tech country. And as an era of developers graduates, they are founding museums to honor those who built their industry. It has been my privilege to visit two of them at this point.
The description of the one in Krakow reads: "The Interactive Games and Computers Museum of the Past Era is a place where adults will return to their childhood and children will be drawn into a lots of fun. We invite you to play on more than 20 computers / consoles / arcade machines and to watch our collection of 200 machines and toys from the '70's-'90's." The second is the Museum of Computer and Information Technology in Katowice, the most recent that I had the good fortune to visit. Both have systems found at other computer history museums, such as a Commodore PET, while showcasing the locally developed systems; looking at them on a timeline, it's quickly apparent that while Poland had begun to fall behind by the 80s, that was more a reflection of why strikes throughout the region caused the Eastern Bloc to fall: Russian influence couldn't keep up, much as the Polish-Lithuanian Commonwealth couldn't support Polish control of Lithuania in the late 1700s. There were other accomplishments, such as the ZAM-2; the first fully Polish machine, the BINEG; rough set theory; and ultrasonic mercury memory.

Another Lousy Millennium: A Futurama Fan Podcast
Episode 86: Bender’s Game Part 2

Another Lousy Millennium: A Futurama Fan Podcast

Play Episode Listen Later Mar 25, 2019 56:46


There is no [dark matter] shortage, you moronic ass-brain! Listen in as Luke and Gabe discuss Futurama Season 6 Episode 10: Bender's Game Part 2. Follow us on Twitter @ALMPod or on Facebook at facebook.com/almpod/. Check out our website at almpod.com. On this show: Gabe and Luke discuss how much exposition goes on during this part of Bender's Game, and the movie as a whole. Luke talks a bit about lipstick dog chemistry. Gabe and Luke discuss "Hammer Therapy" as a metaphor for the cultural gap between perception and reality in mental health. Luke does an OK impression of Dr. Perceptron. Gabe does excellent impressions of Mom, Morbo, and rampaging killbots. Gabe and Luke discuss the finally revealed, definitive location of Planet Express, as shown by Momcorp's locator device.

Canary Cry News Talk
115 “Bible Algos & Perceptrons” - 11.21.2018

Canary Cry News Talk

Play Episode Listen Later Nov 21, 2018 28:57


Episode 115 brings: a robot arm that feeds us, romaine lettuce is a no-go, the FCC lets Elon launch internet satellites, an AI algorithm to customize your Bible, breakthroughs in neural networks and quantum computers called the Perceptron, and a Nephilim Update on Darksiders. This episode is dedicated to Doc Marquis, whose testimony coming out of the Illuminati gave us a glimpse of how truly evil the world might be, but more importantly, how amazing our Lord and Savior Jesus Christ truly is! (Doc Marquis, Oct 26, 1956 - Nov 20, 2018)

AGG for the WEEK of Nov. 15th-Nov. 21st

YOU HEARD IT HERE FIRST FOLKS! (Updates on stories)
Food Regulators to Share Oversight of Cell-Based Meat - WSJ
Next generation of biotech food heading for grocery stores
Catholic Exorcisms Are Gaining Popularity in the U.S. - The Atlantic

TECHNOLOGY, ROBOTS, AND AI OH MY!
Algorithm may one day be able to alter Bible's style for its audience | Fox News
Meet The World’s Most Advanced ‘Human Replacement Robot’ [Video] – 2oceansvibe.com
Learning to Love Robots | The New Yorker
Singapore introduces a robot cop with fat tires and a 360 degree camera
Toilet-Scrubbing Robot Takes Over One of the World's Crappiest Jobs | Digital Trends
Diners use chest-mounted robot arms to feed each other in unusual social experiment / Boing Boing
The Problem With AI: Machines Are Learning Things, But Can’t Understand Them
Breakthrough neural network paves the way for quantum AI
An AI Tried to Write the Perfect Lexus Ad. Here’s a Scene-by-Scene Look at What It Was Thinking – Adweek
Finally, a Machine That Can Finish Your Sentence - The New York Times
One of the fathers of AI is worried about its future - MIT Technology Review
This Game Uses Artificial Intelligence to Recruit New Players
Technology innovations: The future of AI and blockchain - MIT Technology Review
The Future of War: Autonomous AI and the Threat of ‘Killer Robots’ - Report - Sputnik International
Science news: AI to create ‘personalised pills’ to 3D print at HOME | Daily Star
The future of artificial intelligence depends on human wisdom | Salon.com
Crazy in love? The Japanese man 'married' to a hologram | AFP.com
Facial Recognition’s Growing Adoption Spurs Privacy Concerns - WSJ
The Amazing Ways Artificial Intelligence Is Transforming Genomics and Gene Editing
Dress Rehearsal For Death: Using Virtual Reality To Foster Empathy For Dying Patients | CommonHealth
US tech company becomes first to microchip employees – JEWSNEWS
The New York Times says movies about killer robots are bad for us. It’s wrong. - The Washington Post
Alphabet gives bipedal robots the Schaft 'cos no one wants to buy its creepy machine maker • The Register
Robots, super soldiers and DARPA’s most insane military inventions
You've Come a Long Way, Disembodied Robot Baby
Simone Giertz: What Can Making Useless Robots Teach Us About Joy? : NPR
Recode Daily: Are killer robots and social-media soldiers the future of war? - Recode

BIOMEDICAL/GENETICS/TRANSHUMANISM
Genetics Start-Up Wants to Sequence People's Genomes for Free - Scientific American
3D-Printed Organs From Living Cells Could Help Boost Senses | WIRED
Human images from world's first total-body scanner unveiled

SOCIAL MEDIA/GOOGLE/AMAZON
Amazon stores every conversation you have with Alexa
Amazon Ordered to Hand Over Possible Recordings in Murder Case
German states want social media law tightened: media
Privacy concerns as Google absorbs DeepMind's health division
Smearing Soros to stoke hate: You too, Facebook? (opinion) - CNN
Facebook Increasingly Reliant on A.I. To Predict Suicide Risk : NPR

"THE FOUR HORSEMEN of the TECHNOCALYPSE!"
Elon Musk's Boring Company is launching DIY watchtowers with bricks from tunnel dirt
Neil deGrasse Tyson: Elon Musk is the most important person in tech
Elon Musk’s extracurricular antics reportedly spark a NASA safety probe at SpaceX
The FCC Just Approved SpaceX’s Plan to Launch 7,518 Internet Satellites

CONSPIRACY THEORIES AND SOMETIMES FACTS!
Scientists acknowledge key errors in study of how fast the oceans are warming
WikiLeaks Helped Hackers Rifle Through Stolen Emails: FBI Docs
Anatomy of a Conspiracy Theory - POLITICO Magazine

SPACE/ALIEN/ETs/UFOs
UFO news: Lake Tahoe sighting is ‘100 PERCENT’ proof aliens are visiting | Weird | News | Express.co.uk
Arecibo message: What happened when people claimed aliens contacted them – and why we might never want to | The Independent
Alien abduction HORROR: Aliens ‘cause human PARALYSIS while fully conscious’ | Weird | News | Express.co.uk
Inside AlienCon, the Annual Gathering of 'Ancient Aliens' Fans | WIRED

NEPHILIM UPDATE
Darksiders Lore Guide | KeenGamer

WIRED Business – Spoken Edition
An Old Technique Could Put Artificial Intelligence in Your Hearing Aid

WIRED Business – Spoken Edition

Play Episode Listen Later Nov 28, 2017 7:05


Dag Spicer is expecting a special package soon, but it's not a Black Friday impulse buy. The fist-sized motor, greened by corrosion, is from a historic room-sized computer intended to ape the human brain. It may also point toward artificial intelligence's future. Spicer is senior curator at the Computer History Museum in Mountain View, California. The motor in the mail is from the Mark 1 Perceptron, built by Cornell researcher Frank Rosenblatt in 1958.

Learning Machines 101
LM101-064: Stochastic Model Search and Selection with Genetic Algorithms (Rerun)

Learning Machines 101

Play Episode Listen Later May 15, 2017 28:04


In this rerun of episode 24 we explore the concept of evolutionary learning machines - that is, learning machines that reproduce themselves in the hope of evolving into smarter, more capable learning machines. This leads us to the topic of stochastic model search and evaluation. Check out the blog with additional technical references at: www.learningmachines101.com
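As a rough illustration of the idea (not from the episode), a toy genetic search in Python: candidate "models" are bit strings, the fittest half reproduces with mutation, and the population drifts toward better candidates. The all-ones fitness function is a stand-in for a real model-evaluation score:

```python
import random

def fitness(bits):
    # Toy objective: count of 1s, standing in for a model-quality score
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit independently with probability `rate`
    return [b ^ (random.random() < rate) for b in bits]

def evolve(pop_size=30, genome_len=20, generations=50):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep the top half as parents
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Each surviving candidate "reproduces" with mutation
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)
```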

Data Skeptic
[MINI] The Perceptron

Data Skeptic

Play Episode Listen Later Mar 10, 2017 14:46


Today's episode overviews the perceptron algorithm. This rather simple approach is characterized by a few particular features. It updates its weights after seeing every example, rather than as a batch. It uses a step function as an activation function. It's only appropriate for linearly separable data, and it will converge to a solution if the data meets these criteria. Being a fairly simple algorithm, it can run very efficiently. Although we don't discuss it in this episode, multi-layer perceptron networks are what makes this technique most attractive.
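To make those properties concrete, here is a minimal sketch (not from the episode) that follows the description exactly: weights update after every single example, the activation is a step function, and on linearly separable data such as AND the loop converges:

```python
def step(z):
    return 1 if z >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # samples: list of (features, label) pairs with label in {0, 1}
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, label in samples:
            y = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = label - y  # weights update after *every* example
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND is linearly separable, so the perceptron converges on it
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x, _ in data])
```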

Machine Learning Guide
MLG 002 What is AI, ML, DS

Machine Learning Guide

Play Episode Listen Later Feb 9, 2017 64:10


Show notes at ocdevel.com/mlg/2 Updated! Skip to [00:29:36] for Data Science (new content) if you've already heard this episode. What is artificial intelligence, machine learning, and data science? What are their differences? AI history. Hierarchical breakdown: DS(AI(ML)). Data science: any profession dealing with data (including AI & ML). Artificial intelligence is simulated intellectual tasks. Machine Learning is algorithms trained on data to learn patterns to make predictions. Artificial Intelligence (AI) - Wikipedia Oxford Languages: the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. AlphaGo Movie, very good! Sub-disciplines Reasoning, problem solving Knowledge representation Planning Learning Natural language processing Perception Motion and manipulation Social intelligence General intelligence Applications Autonomous vehicles (drones, self-driving cars) Medical diagnosis Creating art (such as poetry) Proving mathematical theorems Playing games (such as Chess or Go) Search engines Online assistants (such as Siri) Image recognition in photographs Spam filtering Prediction of judicial decisions Targeting online advertisements Machine Learning (ML) - Wikipedia Oxford Languages: the use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data. Data Science (DS) - Wikipedia Wikipedia: Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from noisy, structured and unstructured data, and apply knowledge and actionable insights from data across a broad range of application domains. Data science is related to data mining, machine learning and big data. History Greek mythology, Golums First attempt: Ramon Lull, 13th century Davinci's walking animals Descartes, Leibniz 1700s-1800s: Statistics & Mathematical decision making Thomas Bayes: reasoning about the probability of events George Boole: logical reasoning / binary algebra Gottlob Frege: Propositional logic 1832: Charles Babbage & Ada Byron / Lovelace: designed Analytical Engine (1832), programmable mechanical calculating machines 1936: Universal Turing Machine Computing Machinery and Intelligence - explored AI! 1946: John von Neumann Universal Computing Machine 1943: Warren McCulloch & Walter Pitts: cogsci rep of neuron; Frank Rosemblatt uses to create Perceptron (-> neural networks by way of MLP) 50s-70s: "AI" coined @Dartmouth workshop 1956 - goal to simulate all aspects of intelligence. John McCarthy, Marvin Minksy, Arthur Samuel, Oliver Selfridge, Ray Solomonoff, Allen Newell, Herbert Simon Newell & Simon: Hueristics -> Logic Theories, General Problem Solver Slefridge: Computer Vision NLP Stanford Research Institute: Shakey Feigenbaum: Expert systems GOFAI / symbolism: operations research / management science; logic-based; knowledge-based / expert systems 70s: Lighthill report (James Lighthill), big promises -> AI Winter 90s: Data, Computation, Practical Application -> AI back (90s) Connectionism optimizations: Geoffrey Hinton: 2006, optimized back propagation Bloomberg, 2015 was whopper for AI in industry AlphaGo & DeepMind

Learning Machines 101
LM101-059: How to Properly Introduce a Neural Network

Learning Machines 101

Play Episode Listen Later Dec 20, 2016 29:56


I discuss the concept of a “neural network” by providing some examples of recent successes in neural network machine learning algorithms and providing a historical perspective on the evolution of the neural network concept from its biological origins. For more details visit us at: www.learningmachines101.com  

Learning Machines 101
LM101-051: How to Use Radial Basis Function Perceptron Software for Supervised Learning[Rerun]

Learning Machines 101

Play Episode Listen Later May 24, 2016 29:04


This particular podcast is a RERUN of Episode 20 and describes step by step how to download free software which can be used to make predictions using a feedforward artificial neural network whose hidden units are radial basis functions. This is essentially a nonlinear regression modeling problem. We show the performance of this nonlinear learning machine is substantially better on test data set than the linear learning machine software presented in Episode 13. Basically performance for the linear learning machine was about 13% because the data set was specifically designed to be unlearnable by a linear learning machine, while the performance for the nonlinear machine learning software in this episode is about 70%. Again, I'm a little disappointed that only a few people have downloaded the software and tried things out. You can download windows executable, mac executable, or the MATLAB source code. It's important to actually experiment with real machine learning software if you want to learn about machine learning!  Check out:  www.learningmachines101.com to obtain transcripts of this podcast and download free machine learning software! Or tweet us at: @lm101talk    

THIS IS HORATIO
Natural Rhythm 36 Xavier Arak

THIS IS HORATIO

Play Episode Listen Later Dec 7, 2015 62:06


Xavier Arak http://www.emergingibiza.com/artists/xavier-arak/ https://soundcloud.com/xavierarak https://www.facebook.com/xaviarak http://classic.beatport.com/artist/xavier-arak/348058 https://twitter.com/xaviarak Javier Aracil, or Xavier Arak, is a young producer based in Ibiza and grew up in his hometown with the voice of Otis Redding, Marvin Gaye and Aretha Franklin, as well as that of his father, Javier Aracil, a famous soul singer of his time. Javier's musical culture expanded in London where he lived from the age of seventeen. He then resided China which was a turning point for him because this was where he discovered his skills as a DJ and began making his first appearances. From that time Javier has left his mark on the peninsula, and especially in Ibiza, for his groove , at venues such as, Space , Privilege Km 5 Ibiza, Plastik Beach, Moma, Sirocco, Ocean drive, Pacha,Hotel Pacha, Bfor, Destino Pacha, Ushuaia beach, Usuahia Tower, Dance Fair Ibiza 2014 and many more. Also got a international show in ASIA in 2015.Javier himself states that his “musical essence is telling a story in every song, trying to get people to listen, have feelings, as with soul music". Currently Xavier Arak is still working hard both in production and behind the decks, which is usually in the local city where he resides of Ibiza and is the resident of Beachouse Ibiza with Guy Gerber , Guti ,Solomun etc... Another great artist, Paul Reynolds, with whom he has shared the booth, states Xavier Arak's style is very similar to Henrik Schwarz, for his love of harmonies has a very personal sound; his music is made from the heart with a single purpose, to convey productions, carry forward the melodic and funky vibe, with floating harmonies, keys and placed percussion. Releases Xavier Arak Mental Fredoom Ep Separat Musik | 2015-11-12 Tao Xavier Arak, Larsen Factory Kommunikation Records | 2015-08-10 DTD Records Sampler 02 Strange People, Enfants Malins, Reezak, Evan Espinoza, Ermess, Adam Husa, Desta, Mohey, Alex Break, Xavier Arak DTD Records | 2015-07-20 Neruda EP Paul Reynolds, Xavier Arak Insist Music | 2015-06-29 You Never Learn Xavier Arak Blue Bull Music | 2015-06-15 Prophet Horn EP Jose Maria Ramon, William Medagli, Thallulah, Thorsten Hammer, Sebastian Beus a.k.a. Echorama, Simon Raw, Xavier Arak Separat Musik | 2015-05-15 Dash Deep Diggin 2014 04 2nica, 2Son, Afterboy, Anton Prize, Ato Rodriguez, Ayesha Pramanik, Bates (ie), Birnbaum Bomml Buam, Callendula, Dave Lawton, Dhatura, Duque, Dporto, Emanuel Odierna, Fabio Antunes, Flavio Kñada, Frank Chianese, Gul & Tha Kang, Herman Crantz, Hydrosphere, Kanapeh, Kiwi Funk, Sunk Afinity, Konig Balthasar, Kraxelhuber, Laz Loz, Marja V, Mr. Bib, Mr. Laz, Muldi, Nicolas Pourtale, Nudisco, Perceptron, Ridge, Sir Alex, Sotiris Ferfiris, Traveltech, Trestone, Victor Vilchez, Xavier Arak, Alex Hertz, Herman Crantz, Bermuda, Dhatura, Konig Balthasar, GC System, Rml, JB, Fabio Antunes, Francesco Carrieri, Agent Orange, Mr. Laz Dash Deep Records Alter Ego Xavier Arak, Raul Rodriguez Deeplomatic Recordings

Beyond Synth
Beyond Synth - 38 - Cohosted By Sunglasses Kid

Beyond Synth

Play Episode Listen Later Nov 10, 2015 93:39


If you’d like to support the show, please visit: https://www.patreon.com/beyondsynth The Beyond Synth theme song is by OGRE: https://ogresound.bandcamp.com/track/shore-thing  Today Andy is joined by SUNGLASSES KID! They chat and listen to some cool tracks! Check out SUNGLASSES KID here: https://soundcloud.com/sunglasseskid http://sunglasseskid.bandcamp.com/ https://twitter.com/sunglasseskid And Check out all the artists featured on today's program: ARCADE HIGH: https://soundcloud.com/arcade-high https://twitter.com/arcadehighmusic http://arcadehigh.bandcamp.com/ THE ASTRAL STEREO PROJECT: https://soundcloud.com/ncholdsworth https://twitter.com/Astral_Stereo http://theastralstereoproject.bandcamp.com/ BETAMAXX: https://soundcloud.com/betamaxx https://twitter.com/betamaxx80s http://betamaxxmusic.bandcamp.com/ CARPENTER BRUT: https://soundcloud.com/carpenter_brut https://twitter.com/carpenter_brut http://carpenterbrut.bandcamp.com/ CM88: https://soundcloud.com/cm-88 https://twitter.com/CM88_Music http://cm88.bandcamp.com/ D/A/D: https://soundcloud.com/dadmusic https://twitter.com/80sDAD http://dadmusic.bandcamp.com/ DALLAS CAMPBELL: https://soundcloud.com/dallas-campbell https://twitter.com/freezeyourbrain https://magichappened.bandcamp.com/ DR PERCEPTRON: https://soundcloud.com/dr-perceptron https://twitter.com/DocPerceptron https://www.facebook.com/DrPerceptron DROID BISHOP: https://soundcloud.com/droidbishop https://twitter.com/DroidBishop http://droidbishop.bandcamp.com/ FLOYDSHAVIOUS: https://soundcloud.com/floydshayvious https://floydshayvious.bandcamp.com/releases FUTURE HOLOTAPE: https://soundcloud.com/futureholotape https://twitter.com/futureholotape http://futureholotape.bandcamp.com/releases

Learning Machines 101
LM101-034: How to Use Nonlinear Machine Learning Software to Make Predictions (Feedforward Perceptrons with Radial Basis Functions)[Rerun]

Learning Machines 101

Play Episode Listen Later Aug 24, 2015 29:04


Welcome to the 34th podcast in the podcast series Learning Machines 101 titled "How to Use Nonlinear Machine Learning Software to Make Predictions". This particular podcast is a RERUN of Episode 20 and describes step by step how to download free software which can be used to make predictions using a feedforward artificial neural network whose hidden units are radial basis functions. This is essentially a nonlinear regression modeling problem. Check out: www.learningmachines101.comand follow us on twitter: @lm101talk

Learning Machines 101
LM101-024: How to Use Genetic Algorithms to Breed Learning Machines

Learning Machines 101

Play Episode Listen Later Mar 9, 2015 29:15


In this episode we introduce the concept of learning machines that can self-evolve using simulated natural evolution into more intelligent machines using Monte Carlo Markov Chain Genetic Algorithms. Check out: www.learningmachines101.com to obtain transcripts of this podcast and download free machine learning software!

Learning Machines 101
LM101-015: How to Build a Machine that Can Learn Anything (The Perceptron)

Learning Machines 101

Play Episode Listen Later Oct 27, 2014 30:07


In this 15th episode of Learning Machines 101, we discuss the problem of how to build a machine that can learn any given pattern of inputs and generate any desired pattern of outputs when it is possible to do so! It is assumed that the input patterns consists of zeros and ones indicating possibly the presence or absence of a feature.  Check out: www.learningmachines101.com to obtain transcripts of this podcast!!!

Musteranalyse/Pattern Analysis (PA) 2009 (Audio)
9 - Musteranalyse/Pattern Analysis (früher Mustererkennung 2) (PA) 2009

Musteranalyse/Pattern Analysis (PA) 2009 (Audio)

Play Episode Listen Later May 24, 2009 85:07


Mathematics and Physics of Anderson Localization: 50 Years After
On a perceptron version of the Generalized Random Energy Model

Mathematics and Physics of Anderson Localization: 50 Years After

Play Episode Listen Later Jan 20, 2009 57:39


Bolthausen, E (Zürich) Monday 15 December 2008, 11:30-12:30 Classical and Quantum Transport in the Presence of Disorder