Podcasts about supervised learning

  • 69 podcasts
  • 97 episodes
  • 35m avg duration
  • 1 new episode monthly
  • Latest: Apr 28, 2025

POPULARITY (2017-2024)


Best podcasts about supervised learning

Latest podcast episodes about supervised learning

Voices from The Bench
360: Escape Some Workload with Dentscape AI - Ryan Axelson, Bill Chou, & Dr. Sylvie Liu

Feb 17, 2025 · 66:50


Latest episode of Elvis on the Dental Fuel Podcast. Apple Podcasts: https://podcasts.apple.com/us/podcast/elvis-dahl-team-mistake/id1683707577?i=1000692238626 Spotify: https://open.spotify.com/episode/0dy9BPPvTJAp3sP5xg2Wha?si=cVafy05lTqK3R-ybFHD3Rw

Make sure you visit Ivoclar (https://www.ivoclar.com/en_us) at LMT Lab Day Chicago 2025 (https://lmtmag.com/lmtlabday). Ivoclar will be in their usual spot at Grand Ballroom A&B in the East Tower, on the Gold Level, right across from the registration desk. Register today! (https://lmtmag.com/ivoclar) Make sure you come see VOICES FROM THE BENCH recording from the Ballroom all weekend!

Thanks to the AMAZING people at exocad (https://exocad.com/ids), Elvis and Barb will be recording for the first time at IDS in Cologne, Germany (https://www.english.ids-cologne.de/), March 25 - 28 in Hall 1, booth A040/C041. Come see us, be on the podcast, and see all the amazing things exocad is doing for your lab!

In the ever-evolving world of dentistry, the integration of artificial intelligence (AI) is paving the way for significant advancements in efficiency and patient care. In our latest podcast episode, we had the pleasure of speaking with the innovative minds behind Dentscape AI, a company that is transforming the dental lab landscape. Dentscape AI is not just another tech startup; it's a groundbreaking solution designed specifically for dental labs. With their AI agent, dental technicians can now generate up to 100 crowns in just 10 minutes! This remarkable feat is achieved by simplifying workflows and reducing the workload of dental professionals, allowing them to focus on more complex tasks that require human creativity and expertise.

The conversation began with the introduction of the Dentscape team, including co-founders Bill Chou and Dr. Sylvie Liu. Sylvie, a practicing dentist and lecturer in digital dentistry, shared her journey from traditional dentistry to entrepreneurship.
Her passion for the artistry of dentistry led her to seek innovative solutions that would not only enhance her practice but also address the pressing challenges faced by dental labs. Bill explained the three key pillars that support their AI model: high-quality data, advanced algorithms, and robust computing power. By partnering with large dental labs, Dentscape AI ensures that their models are trained on real-world data, allowing for accuracy and reliability in crown design. As Ryan Axelson, another key member of the Dentscape team, pointed out, their goal is to provide tools that empower technicians, allowing them to enhance their workflow and ultimately improve patient care.

You are invited to Ivoclar (https://www.ivoclar.com/en_us)'s IPS e.max Panel Discussion Friday, February 22nd starting at 3:00 at LMT Lab Day in Chicago. Our very own Barb Warner will be on stage with Jessica Birrell, Stephenie Goddard, Mike Roberts, Jed Archibald and Dr. Ken Malament as they dive into the world of e.max. After the panel discussion, Ivoclar will host a Happy Hour to commemorate this 20-year milestone. So, please join us by registering at Labday.com/Ivoclar.

Make sure you visit Aidite (https://www.aidite.com/) at Booth E-26 during your visit at LMT Lab Day Chicago (https://lmtmag.com/lmtlabday)! They will be there showcasing their exciting new products and cutting-edge solutions in digital dentistry. Attendees can explore hands-on demonstrations of Biomic stain & glaze (https://www.aidite.com/detail/materials/Biomic_Stain_Glaze_130_2.html) techniques and some of their other innovative technologies. Aidite will also host engaging lectures in Grand Suite 2, East Tower, covering topics such as EZneer (https://www.aidite.com/detail/materials/EZneer_113_2.html), 3D Pro-Zir (https://www.aidite.com/detail/materials/3D_Pro_Zir_111_2.html), Digital Dentures, and their Aidite Cloud design service (https://www.aiditecloud.com/).
Even before you go, you can stay updated by following @AiditeNorthAmerica (https://www.instagram.com/aiditenorthamerica/) on all social media platforms. Don't miss the opportunity to see how Aidite is shaping the future of dental labs!

Are you a dental lab in need of more talent to improve your bottom line and keep production on schedule? Are you a dental tech with great skills but feel you're being limited at your current lab? Well, the answer is here, and this is precisely why WIN WIN GO (https://www.winwingo.com/) was created. The dental lab and dental tech community needed a place where labs and technicians can meet, talk about their needs, and connect in ways that foster a win-win outcome. As a tech, if you're ready to make a change, thinking about moving in the next year, or just curious what's out there, sign up today. It's totally free. As a lab, you might be feeling the frustration of paying the big employment site so much and getting so few tech candidates. We understand they don't much care about our industry. WINWINGO.com is simply the best place for lab techs and lab owners to actively engage in creating their ideal future. WINWINGO.com, how dental techs find paradise.

Special Guests: Bill Chou, Dr. Sylvie Liu, and Ryan Axelson.

Relay FM Master Feed
Conduit 94: Supervised Learning

Feb 6, 2025 · 52:31


Thu, 06 Feb 2025 15:30:00 GMT http://relay.fm/conduit/94 Kathy Campbell and Jay Miller

Subtitle: Unlocking Skills with a Plane Ticket

Jay needed a hole in the ceiling fixed. So they flew in Britnie's dad to show them how to do electric and house things. Much was learned.

This episode of Conduit is sponsored by: Pika: Start your happy blog. Get 20% off Pika Pro with code CONDUIT20. Jelly: A better way to share an inbox. Get 15% off your first year. Links and Show Notes: Checked Connections - Jay

CSAIL Alliances Podcasts
MIT CSAIL Podcast: How AI will Change Your Job and The Potential of Self-Supervised Learning

Jan 30, 2025 · 44:17


How AI will Change Your Job with MIT Economics Professor David Autor & The Potential of Self-Supervised Learning with CSAIL PhD Student Sharut Gupta. Host: Kara Miller.

Part One: MIT Economics Professor David Autor says that AI is “not like a calculator where you just punch in the numbers and get the right answer. It's much harder to figure out how to be effective with it.” Offering unique insights into the future of work in an AI-powered world, Professor Autor explains his biggest worries, the greatest upside scenarios, and how he believes we should be approaching AI as a tool, and addresses how AI will impact jobs like nursing and skilled trades. Studies and papers referenced in conversation: AI and Product Innovation: https://aidantr.github.io/files/AI_innovation.p AI and the Gender Gap: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4759218 Robotics and Nursing Homes: https://www.nber.org/papers/w33116

Part Two: CSAIL PhD student Sharut Gupta describes how self-supervised learning might bring about truly adaptable models which can respond to fast-changing environments, like consumer preferences.

CSAIL Alliances connects business and industry to the people and research of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Learn more about CSAIL Alliances here. Each month, the CSAIL podcast features cutting-edge MIT and CSAIL experts discussing their current research, challenges and successes, as well as the potential impact of emerging tech. Learn more and listen to past episodes. Connect with CSAIL Alliances: On our site (https://cap.csail.mit.edu/about-us/meet-our-team) On X (@csail_alliances) On LinkedIn (/mit-CSAIL)

Vanishing Gradients
Episode 28: Beyond Supervised Learning: The Rise of In-Context Learning with LLMs

Jun 9, 2024 · 65:38


Hugo speaks with Alan Nichol, co-founder and CTO of Rasa, where they build software to enable developers to create enterprise-grade conversational AI and chatbot systems across industries like telcos, healthcare, fintech, and government. What's super cool is that Alan and the Rasa team have been doing this type of thing for over a decade, giving them a wealth of wisdom on how to effectively incorporate LLMs into chatbots - and how not to. For example, if you want a chatbot that takes specific and important actions like transferring money, do you want to fully entrust the conversation to one big LLM like ChatGPT, or secure what the LLMs can do inside key business logic? In this episode, they also dive into the history of conversational AI and explore how the advent of LLMs is reshaping the field. Alan shares his perspective on how supervised learning has failed us in some ways and discusses what he sees as the most overrated and underrated aspects of LLMs. Alan offers advice for those looking to work with LLMs and conversational AI, emphasizing the importance of not sleeping on proven techniques and looking beyond the latest hype. In a live demo, he showcases Rasa's CALM (Conversational AI with Language Models), which allows developers to define business logic declaratively and separate it from the LLM, enabling reliable execution of conversational flows.
LINKS The livestream on YouTube (https://www.youtube.com/live/kMFBYC2pB30?si=yV5sGq1iuC47LBSi) Alan's Rasa CALM Demo: Building Conversational AI with LLMs (https://youtu.be/4UnxaJ-GcT0?si=6uLY3GD5DkOmWiBW) Alan on twitter.com (https://x.com/alanmnichol) Rasa (https://rasa.com/) CALM, an LLM-native approach to building reliable conversational AI (https://rasa.com/docs/rasa-pro/calm/) Task-Oriented Dialogue with In-Context Learning (https://arxiv.org/abs/2402.12234) 'We don't know how to build conversational software yet' by Alan Nichol (https://medium.com/rasa-blog/we-don-t-know-how-to-build-conversational-software-yet-a18301db0e4b) Vanishing Gradients on Twitter (https://twitter.com/vanishingdata) Hugo on Twitter (https://twitter.com/hugobowne) Upcoming Livestreams Lessons from a Year of Building with LLMs (https://lu.ma/e8huz3s6?utm_source=vgan) VALIDATING THE VALIDATORS with Shreya Shankar (https://lu.ma/zz3qic45?utm_source=vgan)
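The design tension the episode describes, letting one big LLM drive the whole conversation versus keeping critical actions inside deterministic business logic, can be sketched roughly as below. This is a hypothetical illustration, not Rasa's actual API: all function names are invented, and the LLM call is faked with keyword matching so the sketch runs on its own. The point is that the language model only maps free text to a structured command, while the money-transfer logic itself stays ordinary, auditable code.

```python
# Rough sketch of the "business logic outside the LLM" pattern discussed
# in the episode. All names are hypothetical; this is not Rasa's API.

def llm_understand(user_message):
    """Stand-in for an LLM call that maps free text to a structured command.
    Faked here with keyword matching so the sketch is runnable."""
    if "transfer" in user_message.lower():
        return {"intent": "transfer_money", "amount": 50, "recipient": "alice"}
    return {"intent": "unknown"}

def transfer_money(amount, recipient, balance):
    """Deterministic business logic: validation happens here, not in the LLM.
    (A real system would also validate `recipient` against known payees.)"""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid amount")
    return balance - amount

def handle(user_message, balance):
    """The LLM classifies; the flow that moves money is plain code."""
    command = llm_understand(user_message)
    if command["intent"] == "transfer_money":
        return transfer_money(command["amount"], command["recipient"], balance)
    return balance

new_balance = handle("please transfer $50 to Alice", balance=200)
```

Under this split, a hallucinating model can at worst pick the wrong intent; it cannot invent an invalid transfer, because the validation lives in deterministic code.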

Intel on AI
Multimodal AI, Self-Supervised Learning, Counterfactual Reasoning, and AI Agents with Vasudev Lal

Jun 6, 2024 · 37:28


Discover the cutting-edge advancements in artificial intelligence with Vasudev Lal, Principal AI Research Scientist at Intel. This episode delves into the benefits of multimodal AI and the enhanced validity achieved through self-supervised learning. Vasudev also explores the applications of counterfactual reasoning in AI and the efficiency gains from using AI agents. Additionally, learn how leveraging multiple Gaudi 2 accelerators can significantly reduce LLM training times. Stay updated with the latest in AI technology and innovations by following #IntelAI and @IntelAI for more information. 

Vanishing Gradients
Episode 27: How to Build Terrible AI Systems

May 31, 2024 · 92:24


Hugo speaks with Jason Liu, an independent consultant who uses his expertise in recommendation systems to help fast-growing startups build out their RAG applications. He was previously at Meta and Stitch Fix, and is also the creator of Instructor and Flight, and an ML and data science educator. They talk about how Jason approaches consulting for companies across many industries, including construction and sales, in building production LLM apps, his playbook for getting ML and AI up and running to build and maintain such apps, and the future of tooling to do so. They take an inverted-thinking approach, envisaging all the failure modes that would result in building terrible AI systems, and then figuring out how to avoid such pitfalls.

LINKS The livestream on YouTube (https://youtube.com/live/USTG6sQlB6s?feature=share) Jason's website (https://jxnl.co/) Pydantic is all you need, Jason's Keynote at AI Engineer Summit, 2023 (https://youtu.be/yj-wSRJwrrc?si=JIGhN0mx0i50dUR9) How to build a terrible RAG system by Jason (https://jxnl.co/writing/2024/01/07/inverted-thinking-rag/) To express interest in Jason's Systematically Improving RAG Applications course (https://q7gjsgfstrp.typeform.com/ragcourse?typeform-source=vg) Vanishing Gradients on Twitter (https://twitter.com/vanishingdata) Hugo on Twitter (https://twitter.com/hugobowne) Upcoming Livestreams Good Riddance to Supervised Learning with Alan Nichol (CTO and co-founder, Rasa) (https://lu.ma/gphzzyyn?utm_source=vgj) Lessons from a Year of Building with LLMs (https://lu.ma/e8huz3s6?utm_source=vgj)

GPT Reviews
Hallucination is the Word of the Year

Dec 22, 2023 · 15:50


The Dictionary.com Word of the Year, ArXiv's move towards more accessible scientific research, and Bill Gates' thoughts on the potential impact of AI. The show also delves into cutting-edge AI research, including benchmarking and analyzing NLP paradigms for biomedical knowledge curation, a novel speech translation model, and the ability of LLMs to generate human-like opinions. Tune in to stay up-to-date on the latest developments in the world of artificial intelligence. Contact: sergi@earkind.com

Timestamps: 00:34 Introduction 02:00 The Dictionary.com Word of the Year is hallucinate. 03:28 ArXiv now offers papers in HTML format 05:22 Bill Gates on AI in 2024 06:46 Fake sponsor 08:48 Benchmarking and Analyzing In-context Learning, Fine-tuning and Supervised Learning for Biomedical Knowledge Curation: a focused study on chemical entities of biological interest 10:21 Speech Translation with Large Language Models: An Industrial Practice 12:17 ChatGPT as a commenter to the news: can LLMs generate human-like opinions? 14:20 Outro

Dances with Robots
Dances with Robots IRL: A Conversation with Catie Cuan

Nov 30, 2023 · 34:15


Sydney Skybetter sits down with choreorobotics innovator, Dr. Catie Cuan. They discuss her personal and professional trajectory, and try to answer the question: why dance with a robot? About Catie: An engineer, researcher, and artist, Dr. Catie Cuan is a pioneer in the nascent field of 'choreorobotics' and works at the intersection of artificial intelligence, human-robot interaction, and art. She is currently a Postdoc in Computer Science at Stanford University. Catie recently defended her PhD in robotics via the Mechanical Engineering department at Stanford, where she also completed a Master of Science in Mechanical Engineering. The title of her PhD thesis is “Compelling Robot Behaviors through Supervised Learning and Choreorobotics”, which was funded by the National Institutes of Health, Google, and Stanford University. During her PhD, she led the first multi-robot machine learning project at Everyday Robots (Google X) and Robotics at Google (now a part of Google DeepMind). She has held artistic residencies at the Smithsonian, Everyday Robots (Google X), TED, and ThoughtWorks Arts. Catie is a prolific robot choreographer, having created works with nearly a dozen different robots, from a massive ABB IRB 6700 industrial robot to a tabletop IDEO + Moooi robot. Catie is also a 2023 International Strategy Forum (ISF) fellow at Schmidt Futures and the former co-founder of caali, an embodied media company. Read the transcript, and find more resources in our archive: https://www.are.na/choreographicinterfaces/dwr-ep-6-irl-a-conversation-with-choreoroboticist-catie-cuan Like, subscribe, and review here: https://podcasts.apple.com/us/podcast/dances-with-robots/id1715669152 What We Discuss with Catie (Timestamps): 0:00:15: Introduction to Dr. Catie Cuan 0:02:23: Catie's PhD thesis on supervised learning for compelling robot behaviors. 0:03:19: How Catie balanced her dance career with her work in tech.
0:05:35: The skepticism and terror of bringing dance into a STEM environment. 0:06:20: Navigating elite STEM environments as a woman of color. 0:07:41: The history of dance and robotics at Stanford University 0:11:56: Contrasts between STEM and embodied practices. 0:12:44: Catie's relationship with the CRCI community. 0:13:30: The importance of artists in contemplating the meaning of new technologies. 0:14:31: Challenges of creating a complex dance performance with robots. 0:16:24: Lack of templates for realizing installation, performance, and robotics research. 0:19:58: Safety considerations and rules for performing with robots. 0:20:51: Why Boston Dynamics Spot robots and their expressive capabilities. 0:23:32: Contemplating the ethical implications of robot applications. 0:25:27: The future of Choreo Robotics and the importance of imagination. 0:26:00: Dance's role in depicting a universe of creativity and joy. 0:27:35: Choreographers are essential for successful deployment of robots. 0:28:26: Robot dances becoming more prevalent in various contexts. 0:30:04: Dance is essential to culture and human identity. 0:31:20: Dancing with robots is not a novel concept. 0:32:00: Show credits & thanks The Dances with Robots Team Host: Sydney Skybetter Co-Host & Executive Producer: Ariane Michaud Archivist and Web Designer: Kate Gow Podcasting Consultant: Megan Hall Accessibility Consultant: Laurel Lawson Music: Kamala Sankaram Audio Production Consultant: Jim Moses Assistant Editor: Andrew Zukoski Student Associate: Rishika Kartik About CRCI The Conference for Research on Choreographic Interfaces (CRCI) explores the braid of choreography, computation and surveillance through an interdisciplinary lens. 
Find out more at www.choreographicinterfaces.org Brown University's Department of Theatre Arts & Performance Studies' Conference for Research on Choreographic Interfaces thanks the Marshall Woods Lectureships Foundation of Fine Arts, the Brown Arts Institute, and the Alfred P. Sloan Foundation for their generous support of this project. The Brown Arts Institute and the Department of Theatre Arts and Performance Studies are part of the Perelman Arts District.  

All TWiT.tv Shows (MP3)
This Week in Enterprise Tech 558: You Got Your AI In My Enterprise

Aug 26, 2023 · 68:45


This week on This Week in Enterprise Tech, host Lou Maresca and co-hosts Curt Franklin and Brian Chee explore the key takeaways from the 2023 Black Hat and DEF CON cybersecurity conferences. They discuss the proliferation of AI, especially in relation to security. Guest Michael Amori, CEO of Virtualitics, talks about how AI is impacting data analytics and access. Curtis Franklin shares highlights from Black Hat and DEFCON 2023, noting generative AI was the dominant theme across both events. He breaks down differences between classic and generative AI models, quantifying risk, and other topics like IoT/OT security. IBM revealed Code Assistant for IBM Z, an AI code translation tool that can convert legacy COBOL code to Java. The hosts reflect on converting other legacy code, and the risks of AI-generated code. Michael Amori explains how Virtualitics is using AI and data visualization to help enterprises explore and understand their data, serving as an "AI assistant" for analysts. He discusses responsible and ethical AI, maintaining privacy, the need for explainability, and Virtualitics' tools like Network Extractor. Hosts: Louis Maresca, Brian Chee, and Curtis Franklin Guest: Michael Amori Download or subscribe to this show at https://twit.tv/shows/this-week-in-enterprise-tech. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: Miro.com/podcast kolide.com/twiet panoptica.app

Life with AI
#69 - Supervised and self-supervised learning, tokenization, ChatGPT and document intelligence.

Aug 24, 2023 · 20:44


Hey guys, in this episode I talk about how AI algorithms are trained using supervised and self-supervised learning, how text tokenization works, and how ChatGPT was trained, and I also talk about document intelligence. This was a heavy technical content episode and I hope you enjoy it! Instagram: https://www.instagram.com/podcast.lifewithai/ Linkedin: https://www.linkedin.com/company/life-with-ai
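As a companion to the distinction the episode draws, here is a minimal sketch (illustrative names and toy data only, no ML library involved) of where the training target comes from in each setup: supervised learning pairs each input with a human-provided label, while self-supervised learning derives the target from the data itself, for example by predicting the next token. The whitespace tokenizer below is a deliberate simplification; real systems use subword schemes such as BPE.

```python
# Illustrative sketch: supervised vs. self-supervised targets.
# All names and examples are hypothetical toy data.

def tokenize(text):
    """Naive whitespace tokenization (real systems use subword schemes like BPE)."""
    return text.lower().split()

# Supervised learning: targets are external labels supplied by annotators.
supervised_examples = [
    ("the movie was great", "positive"),
    ("the movie was awful", "negative"),
]

# Self-supervised learning: targets come from the data itself.
# Here, each token's target is simply the token that follows it.
def next_token_pairs(text):
    tokens = tokenize(text)
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

pairs = next_token_pairs("the movie was great")
# Each (context, target) pair is generated without any human labeling,
# which is why self-supervised models can train on raw text at scale.
```

The contrast is the whole point: the supervised examples cost annotator time per label, while the self-supervised pairs are free to generate from any text corpus.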

The Invested Dads Podcast
Should Artificial Intelligence Be Regulated?

Aug 3, 2023 · 20:38 · Transcription available


Have you been curious about the world of artificial intelligence and its potential for regulation? In this week's episode, the guys simplify the complexities surrounding AI's ethical use, safety and security measures, accountability and transparency mechanisms, fair competition considerations, and the vital aspect of public trust. Listen now to explore the pros and cons of AI regulation and where the future will take us. For the transcript, show notes, and resources, visit theinvesteddads.com/190. Sign up for our exclusive weekly newsletter here! The Invested Dads: Website | Instagram | Facebook | Spotify | Apple Podcasts

The Dan Nestle Show
110: What Translators can Teach Us about AI and Tech with Bill Lafferty

Jul 28, 2023 · 74:02


In this episode, Dan welcomes translation and localization expert Bill Lafferty, the founder of Loc Navigator, a resource-rich newsletter that helps readers understand translation and localization as drivers of business growth. An accomplished Japanese-to-English legal translator, enterprise localization project leader, and localization technology consultant, Bill has lived at the intersection of technology and translation for nearly three decades. And full disclosure: he's been one of Dan's best friends for over 25 years, so he knows…quite a lot about the host of the show.   He and Dan reminisce a bit and then dig into the evolution of translation and localization and the proliferation of technology in the field. Perhaps more than any other profession, translators have had to deal with ever-improving AI tools, and adaptation and skill enhancement have been par for the course. Now, with ChatGPT and LLMs performing better translations than any previous disruptive development, will translators survive?    Find out as Bill and Dan uncover some of the misconceptions about where AI is taking the translation profession - and by extension, other language-intensive jobs. In this episode:   Delve into the crucial role localization has in enhancing global business performance. Understand the influence human proficiency has on the nuance-rich translation industry. Scrutinize the landscape of the translation field in the new age of AI and machine translation. Ascertain the advantages and pitfalls of forging a career path in translation and localization. Embrace the importance of fostering continuous learning in the dynamic tech-driven world. 
Listen in and hear… All about the world of translation The concept of localization of languages Perceptions about Google Translate Human-oriented tasks in translation What makes Bill so interested in language translation The beauty of the Japanese language The intersection of software and local translation Legal translation and the room for growth Where machine translation fails ChatGPT and how it is impacting translation Neural machine translation, usages, and limitations On-the-fly speech translation and pixelation The difference between localization and translation Understanding taxonomy and how it plays within your business Blockchain and opportunities it brings How AI helps increase the capacity and quality of communications Notable Quotes: “A lot of language learning comes from an openness to rhythm and sound.” – (12:49), Bill “There's different levels to feeling accomplished when speaking a second language.” – (17:11), Bill “It was more important for me at the right age to make this really big career change into something different because I felt life would be a little less heavy.” – (20:07), Bill “Savvy, freelance translators are learning to use these technologies and harness them and use them as accelerators.” – (30:28), Bill “Low level jobs that would've been steps along the way are not exactly necessary anymore.” – (33:58), Dan “The amount of words being created everyday is just incredible.” – (35:58), Bill “New tools require new operators.” – (36:28), Dan “Localization is one of the key drivers for global growth for any business.” – (51:03), Bill “Translation is an industry that has always been shaped by infusions of technology.” – (57:23), Bill “Making something less complex is a huge value-add in this economy right now.” – (1:00:18), Bill “The better you can be at helping your customers or stakeholders to escape complexity, the more they're going to thank you.” – (1:00:32), Bill “In my opinion ChatGPT promotes better mental health.” –
(1:02:26), Bill “For freelance translators, don't let all this noise about technology be something that is going to derail your career. Do what you love.” – (1:12:33), Bill Dan Nestle Links The Dan Nestle Show (libsyn.com) Daniel Nestle | LinkedIn The Dan Nestle Show | Facebook Dan Nestle (@dsnestle) | Twitter Bill Lafferty Links Loc Navigator Bill Lafferty - Twitter Bill Lafferty – Linkedin Timestamped summary of this episode, courtesy of Capsho: 00:00:00 - Introduction, Dan introduces the podcast and his guest, Bill Lafferty, who has a long career in translation and localization. They discuss the importance of translation in today's global economy. 00:02:37 - The Perception of Translation, Dan and Bill talk about the misconception that translation is a commoditized service and the challenge of overcoming this perception. They highlight the need for human-oriented tasks in translation and the importance of delivering translations that feel natural to the target audience. 00:06:09 - The Commoditization of Translation, Bill discusses the commoditization of the translation industry and the long tail of language service providers. He emphasizes the importance of understanding clients' goals and finding solutions that fit their budget. 00:07:52 - AI and Translation, Dan and Bill touch on the advancements in AI and its impact on the translation industry. They discuss the complexities of translating multiple languages and character sets, and how translation has been grappling with these challenges for a long time. 00:11:02 - Bill's Journey in Translation, Bill shares his journey in translation, from studying Japanese in college to being enchanted by the language. He discusses his experience as a legal translator and the rewards of learning to read Japanese at a high level. 00:15:13 - The Meticulousness of Translation, The guest discusses the meticulous nature of translation, where one stroke or change in a character can completely change the meaning. 
They also highlight the challenge of finding the opposite meaning from the same character given a different conjugation. 00:15:50 - Curiosity and Connection, The guest emphasizes the importance of curiosity and an innate sense of curiosity in learning a second language. They also express the desire to reach out and connect with others through language. 00:16:26 - Shades of Meaning in Japanese, The guest compares languages like Japanese and Chinese to Romance languages, highlighting the complexity and multiple shades of meaning in characters. They discuss the challenge of translating legal contracts and documents from English to Japanese. 00:18:20 - Missing the Social Dynamic, The guest shares their reason for stopping their translation career, expressing a desire for more socialization and connection with people. They enjoy talking and developing relationships, which led them to their current role in sales and client development. 00:19:47 - The Desire for Hands-On Translation, The guest expresses a desire to return to translation and be hands-on, as they enjoy diving in and being tactile. They mention the possibility of translating short stories or engaging in more creative translation in the future. 00:30:57 - Impact of Technology on Solopreneurs and Small Business Owners, Technology advancements like Chat GPT can disrupt industries, potentially putting solopreneurs and small business owners out of business or forcing them onto new career paths. 00:32:21 - Disruption in the Creative World, The creative world, including industries like copywriting, marketing consulting, and translation, is experiencing disruption as technology continues to advance. Jobs that were once necessary may become obsolete as AI tools like Chat GPT become more sophisticated. 00:34:15 - The Role of Supervised Learning and Machine Translation, In machine translation, there is a distinction between supervised learning and unsupervised learning. 
While Chat GPT goes through a filtration process, unsupervised machine translation can still be improved with technology advancements like neural machine translation. 00:35:33 - Balancing Machine Translation and Human Translation, When expanding into new markets, companies must consider the balance between using machine translation and human translation. While machine translation may be budget-friendly, it's important to ensure that the first impression of a product or brand is accurate and well-translated. 00:38:43 - New Skills and Opportunities, The introduction of AI tools like Chat GPT creates new opportunities for those who possess skills in manipulating and guiding these tools. Curiosity and a command of language are valuable assets in utilizing AI for oneself or clients. 00:46:38 - The Potential of Machine Translation, The guest discusses the potential of machine translation, mentioning the possibility of real-time translation and the ability to overlay mouth movements in different languages. However, he notes that the marriage of technology and translation is still not fully developed. 00:47:56 - The Advancements in Machine Translation, The guest acknowledges that developments in machine translation are already happening, citing examples such as instant translation on LinkedIn. He emphasizes that while machine translation has improved, it still comes with certain limitations and caveats. 00:49:49 - Localization vs. Translation, The host and guest discuss localization, highlighting its importance in global business growth. They explain that localization goes beyond translation as it involves adapting content to different languages, cultures, and markets. They note that localization should be considered early on in the development process. 00:51:18 - Historical Challenges in Localization, The guest explains that the historical paradigm of starting in a single market and expanding later has led to localization being an afterthought for many businesses. 
However, he mentions that forward-thinking companies are now incorporating localization into their product development from the beginning. 00:56:21 - The Future of Localization, The guest sees a positive future for localization, as technology continues to advance and automation becomes more desired. However, he highlights the need for strategic consulting and human involvement to ensure accurate and effective localization. The challenges ahead include keeping up with evolving technology and untangling the complexities of AI in localization workflows. 01:02:41 - Promoting Better Mental Health, The conversation starts with both Dan and Bill sharing their experiences of unhappiness in various positions throughout their careers. They discuss how using Chat GPT has helped them explore new ideas and perspectives, leading to a sense of fulfillment and better mental health. 01:03:26 - Empowering Website Development, Bill shares a specific example of how Chat GPT helped him enhance his website, Loknavigator.com. He explains how he was able to program interactive tiles and generate personalized suggestions based on user inputs. This experience opened his mind to new possibilities and boosted his confidence in web development. 01:04:51 - Chat GPT for Code Generation, The conversation highlights the effectiveness of Chat GPT in generating code snippets. Bill mentions that he used Chat GPT to find the snippets of code he needed to improve his website's functionality. They discuss the power of AI in code generation and its potential for improving workflow efficiency. 01:06:11 - Mindset Shifts and Pivoting, Dan and Bill discuss how using Chat GPT has helped them shift their mindsets and approach tasks and projects from new perspectives. They also mention the importance of having someone to bounce ideas off and validate their thinking. Chat GPT serves as a helpful tool in moments of panic or uncertainty. 
01:09:12 - Overcoming Learning Hurdles, The conversation emphasizes how Chat GPT accelerates the learning process and helps overcome learning hurdles. Bill shares his personal accomplishment in using Chat GPT.   *Notes were created by humans, with help from Capsho, my preferred AI show notes assistant.

Science (Video)
Minimally Supervised Learning and AI with Sanjoy Dasgupta - Science Like Me

Science (Video)

Play Episode Listen Later May 31, 2023 28:15


In this program, UC San Diego computer science professor Sanjoy Dasgupta talks about unsupervised learning, which is a mix of AI, statistics and algorithms. He is one of the few machine-learning researchers whose work combines algorithmic theory with geometry and mathematical statistics. Series: "Science Like Me" [Science] [Show ID: 38939]
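For readers new to the topic, the unsupervised learning Dasgupta describes can be shown in miniature with a toy k-means clustering sketch. The 1-D data and two-cluster setup below are invented for illustration and are not from the episode:

```python
def kmeans_1d(points, iters=10):
    """Toy two-cluster k-means on 1-D points: alternate assign and update steps."""
    # Initialize the two centers at the extremes of the data.
    centers = [min(points), max(points)]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            # Assignment step: each point joins its nearest center's cluster.
            nearest = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two loose groups around 0 and 10; no labels are ever provided.
data = [0.1, -0.2, 0.3, 9.8, 10.1, 10.3]
print(kmeans_1d(data))  # centers settle near 0.07 and 10.07
```

With no labels at all, the two centers find the natural groups in the data; that is the "learning from the environment without explicit instructions" the episode describes, in miniature.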

University of California Audio Podcasts (Audio)
Minimally Supervised Learning and AI with Sanjoy Dasgupta - Science Like Me

University of California Audio Podcasts (Audio)

Play Episode Listen Later May 31, 2023 28:15


Sanjoy Dasgupta, a UC San Diego professor, delves into unsupervised learning, an innovative fusion of AI, statistics, and algorithms, seeking to enable machines to learn from their environment without explicit instructions. His unique approach blends algorithmic theory with geometry and mathematical statistics, aiming to mimic human learning capabilities. This method broadens understanding of data interpretation, enhances pattern recognition, and improves decision-making processes. Through his work, Dasgupta provides fresh insights into data science application in machine learning research, exemplifying the potential of human-like learning processes in machines. Discover more about Dasgupta's ground-breaking approach. Series: "Science Like Me" [Science] [Show ID: 38939]

Science (Audio)
Minimally Supervised Learning and AI with Sanjoy Dasgupta - Science Like Me

Science (Audio)

Play Episode Listen Later May 31, 2023 28:15


Sanjoy Dasgupta, a UC San Diego professor, delves into unsupervised learning, an innovative fusion of AI, statistics, and algorithms, seeking to enable machines to learn from their environment without explicit instructions. His unique approach blends algorithmic theory with geometry and mathematical statistics, aiming to mimic human learning capabilities. This method broadens understanding of data interpretation, enhances pattern recognition, and improves decision-making processes. Through his work, Dasgupta provides fresh insights into data science application in machine learning research, exemplifying the potential of human-like learning processes in machines. Discover more about Dasgupta's ground-breaking approach. Series: "Science Like Me" [Science] [Show ID: 38939]

UC San Diego (Audio)
Minimally Supervised Learning and AI with Sanjoy Dasgupta - Science Like Me

UC San Diego (Audio)

Play Episode Listen Later May 31, 2023 28:15


Sanjoy Dasgupta, a UC San Diego professor, delves into unsupervised learning, an innovative fusion of AI, statistics, and algorithms, seeking to enable machines to learn from their environment without explicit instructions. His unique approach blends algorithmic theory with geometry and mathematical statistics, aiming to mimic human learning capabilities. This method broadens understanding of data interpretation, enhances pattern recognition, and improves decision-making processes. Through his work, Dasgupta provides fresh insights into data science application in machine learning research, exemplifying the potential of human-like learning processes in machines. Discover more about Dasgupta's ground-breaking approach. Series: "Science Like Me" [Science] [Show ID: 38939]

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Latest AI Trends: Anthropic's Claude AI can now digest an entire book like The Great Gatsby in seconds - Google announces PaLM 2, its answer to GPT-4, 17 AI and machine learning terms everyone needs to know

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later May 14, 2023 6:29


Anthropic's Claude AI can now digest an entire book like The Great Gatsby in seconds
Anthropic's Claude AI demonstrates an impressive leap in natural language processing capabilities by digesting entire books, like The Great Gatsby, in just seconds. This groundbreaking AI technology could revolutionize fields such as literature analysis, education, and research.

OpenAI peeks into the “black box” of neural networks with new research
OpenAI has published groundbreaking research that provides insights into the inner workings of neural networks, often referred to as "black boxes." This research could enhance our understanding of AI systems, improve their safety and efficiency, and potentially lead to new innovations.

The AI race heats up: Google announces PaLM 2, its answer to GPT-4
Google has announced the development of PaLM 2, a cutting-edge AI model designed to rival OpenAI's GPT-4. This announcement marks a significant escalation in the AI race as major tech companies compete to develop increasingly advanced artificial intelligence systems.

Leak of MSI UEFI signing keys stokes fears of “doomsday” supply chain attack
A recent leak of MSI UEFI signing keys has sparked concerns about a potential "doomsday" supply chain attack. The leaked keys could be exploited by cybercriminals to compromise the integrity of hardware systems, making it essential for stakeholders to address the issue swiftly and effectively.

Google's answer to ChatGPT is now open to everyone in the US, packing new features
Google has released its ChatGPT competitor to the US market, offering users access to advanced AI-powered conversational features. This release brings new capabilities and enhancements to the AI landscape, further intensifying the competition between major tech companies in the AI space.

AI gains “values” with Anthropic's new Constitutional AI chatbot approach
Anthropic introduces a novel approach to AI development with its Constitutional AI chatbot, which is designed to incorporate a set of "values" that guide its behavior. This groundbreaking approach aims to address ethical concerns surrounding AI and create systems that are more aligned with human values and expectations.

Spotify ejects thousands of AI-made songs in purge of fake streams
Spotify has removed thousands of AI-generated songs from its platform in a sweeping effort to combat fake streams. This purge highlights the growing concern over the use of AI in generating content that could distort metrics and undermine the value of genuine artistic works.

17 AI and machine learning terms everyone needs to know: ANTHROPOMORPHISM, BIAS, CHATGPT, BING, BARD, ERNIE, EMERGENT BEHAVIOR, GENERATIVE AI, HALLUCINATION, LARGE LANGUAGE MODEL, NATURAL LANGUAGE PROCESSING, NEURAL NETWORK, PARAMETERS, PROMPT, REINFORCEMENT LEARNING, TRANSFORMER MODEL, SUPERVISED LEARNING

Reasons to Believe Podcast
AI with an Off Switch? and Self-Supervised Learning | Stars, Cells, and God

Reasons to Believe Podcast

Play Episode Listen Later May 10, 2023 60:44


Join Jeff Zweerink and computer scientist Dustin Morley as they discuss new discoveries taking place at the frontiers of science that have theological and philosophical implications, including the reality of God's existence.

AI with an Off-Switch?
As we contemplate what a world with true AI (general or super, rather than narrow, artificial intelligence) looks like, the question of how we interact with AI inevitably arises. Specifically, what do we do when AI pursues a path that is harmful to humanity? One scenario put forth is to install an off switch that we control, but would the AI leave the off switch enabled? One study showed that programming uncertainty into the AI about its objective may provide incentives for the AI to leave the off switch functional. However, that uncertainty diminishes the AI's effectiveness in obtaining its purpose. We discuss some of the apologetic implications of this study.
References: The Off-Switch Game

Self-Supervised Learning
Recent major breakthroughs in public-facing artificial intelligence (AI) such as OpenAI's ChatGPT and Tesla's self-driving software have achieved success in part due to complex, multi-component deep learning model architectures where each of the components can be trained or fine-tuned while leaving the other components fixed—effectively decoupling different steps or subtasks from each other. A new paper (still in preprint) has demonstrated significant success with self-supervised learning, pushing the envelope on this level of AI versatility even further. What does this mean for the near-term future of AI, and what implications does it have for the age-old comparison between AI and human intelligence?
References: Blockwise Self-Supervised Learning at Scale
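The off-switch intuition discussed here can be sketched with a tiny Monte Carlo calculation. This is a loose, invented illustration of the idea (not the cited paper's model): a robot unsure of its plan's true utility u can either "act" (take u as-is) or "defer" (leave the off switch working, so a human halts the plan whenever u < 0 and the robot receives max(u, 0)):

```python
import random

rng = random.Random(42)

def policy_values(beliefs):
    """Expected utility of acting directly vs. deferring to human oversight."""
    act = sum(beliefs) / len(beliefs)                      # robot just acts
    defer = sum(max(u, 0.0) for u in beliefs) / len(beliefs)  # human vetoes u < 0
    return act, defer

# High uncertainty: the plan could easily be good or bad.
wide = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
act_w, defer_w = policy_values(wide)

# Low uncertainty: the robot is confident the plan is good.
narrow = [rng.gauss(0.5, 0.05) for _ in range(10_000)]
act_n, defer_n = policy_values(narrow)

# Deferring never looks worse, since max(u, 0) >= u pointwise, but its
# advantage shrinks as the robot's uncertainty shrinks.
print(defer_w - act_w, defer_n - act_n)
```

The numbers mirror the episode's trade-off: uncertainty about its own objective is exactly what gives the AI a reason to keep the off switch enabled, and as that uncertainty vanishes, so does the incentive.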

PaperPlayer biorxiv neuroscience
Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling, and cloud-native open-source tools

PaperPlayer biorxiv neuroscience

Play Episode Listen Later Apr 28, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.04.28.538703v1?rss=1 Authors: Biderman, D., Whiteway, M. R., Hurwitz, C., Greenspan, N. R., Lee, R. S., Vishnubhotla, A., Schartner, M., Huntenburg, J. M., Khanal, A., Meijer, G. T., Noel, J.-P., Pan-Vazquez, A., Socha, K. Z., Urai, A. E., The International Brain Laboratory,, Warren, R., Noone, D., Pedraja, F., Cunningham, J., Sawtell, N. B., Paninski, L. Abstract: Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC

Machine Learning Street Talk
#114 - Secrets of Deep Reinforcement Learning (Minqi Jiang)

Machine Learning Street Talk

Play Episode Listen Later Apr 16, 2023 167:15


Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB Twitter: https://twitter.com/MLStreetTalk In this exclusive interview, Dr. Tim Scarfe sits down with Minqi Jiang, a leading PhD student at University College London and Meta AI, as they delve into the fascinating world of deep reinforcement learning (RL) and its impact on technology, startups, and research. Discover how Minqi made the crucial decision to pursue a PhD in this exciting field, and learn from his valuable startup experiences and lessons. Minqi shares his insights into balancing serendipity and planning in life and research, and explains the role of objectives and Goodhart's Law in decision-making. Get ready to explore the depths of robustness in RL, two-player zero-sum games, and the differences between RL and supervised learning. As they discuss the role of environment in intelligence, emergence, and abstraction, prepare to be blown away by the possibilities of open-endedness and the intelligence explosion. Learn how language models generate their own training data, the limitations of RL, and the future of software 2.0 with interpretability concerns. From robotics and open-ended learning applications to learning potential metrics and MDPs, this interview is a goldmine of information for anyone interested in AI, RL, and the cutting edge of technology. Don't miss out on this incredible opportunity to learn from a rising star in the AI world! 
TOC
Tech & Startup Background [00:00:00]
Pursuing PhD in Deep RL [00:03:59]
Startup Lessons [00:11:33]
Serendipity vs Planning [00:12:30]
Objectives & Decision Making [00:19:19]
Minimax Regret & Uncertainty [00:22:57]
Robustness in RL & Zero-Sum Games [00:26:14]
RL vs Supervised Learning [00:34:04]
Exploration & Intelligence [00:41:27]
Environment, Emergence, Abstraction [00:46:31]
Open-endedness & Intelligence Explosion [00:54:28]
Language Models & Training Data [01:04:59]
RLHF & Language Models [01:16:37]
Creativity in Language Models [01:27:25]
Limitations of RL [01:40:58]
Software 2.0 & Interpretability [01:45:11]
Language Models & Code Reliability [01:48:23]
Robust Prioritized Level Replay [01:51:42]
Open-ended Learning [01:55:57]
Auto-curriculum & Deep RL [02:08:48]
Robotics & Open-ended Learning [02:31:05]
Learning Potential & MDPs [02:36:20]
Universal Function Space [02:42:02]
Goal-Directed Learning & Auto-Curricula [02:42:48]
Advice & Closing Thoughts [02:44:47]

References:
- Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth O. Stanley and Joel Lehman https://www.springer.com/gp/book/9783319155234
- Rethinking Exploration: General Intelligence Requires Rethinking Exploration https://arxiv.org/abs/2106.06860
- The Case for Strong Emergence (Sabine Hossenfelder) https://arxiv.org/abs/2102.07740
- The Game of Life (Conway) https://www.conwaylife.com/
- Toolformer: Teaching Language Models to Generate APIs (Meta AI) https://arxiv.org/abs/2302.04761
- OpenAI's POET: Paired Open-Ended Trailblazer https://arxiv.org/abs/1901.01753
- Schmidhuber's Artificial Curiosity https://people.idsia.ch/~juergen/interest.html
- Gödel Machines https://people.idsia.ch/~juergen/goedelmachine.html
- PowerPlay https://arxiv.org/abs/1112.5309
- Robust Prioritized Level Replay: https://openreview.net/forum?id=NfZ6g2OmXEk
- Unsupervised Environment Design: https://arxiv.org/abs/2012.02096
- Excel: Evolving Curriculum Learning for Deep Reinforcement Learning https://arxiv.org/abs/1901.05431
- Go-Explore: A New Approach for Hard-Exploration Problems https://arxiv.org/abs/1901.10995
- Learning with AMIGo: Adversarially Motivated Intrinsic Goals https://www.researchgate.net/publication/342377312_Learning_with_AMIGo_Adversarially_Motivated_Intrinsic_Goals
- PRML https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf
- Sutton and Barto https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf
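The minimax and minimax-regret ideas the episode covers can be made concrete with a tiny payoff matrix. The numbers below are invented: think of rows as candidate policies and columns as environments an adversary might pick, in the spirit of the unsupervised environment design work discussed:

```python
def minimax_value(payoff):
    """Row player's best guaranteed payoff over pure strategies in a zero-sum game."""
    return max(min(row) for row in payoff)

def minimax_regret_choice(payoff):
    """Pick the row minimizing worst-case regret vs. the best row per column."""
    ncols = len(payoff[0])
    col_best = [max(row[j] for row in payoff) for j in range(ncols)]
    regrets = [max(col_best[j] - row[j] for j in range(ncols)) for row in payoff]
    return regrets.index(min(regrets)), min(regrets)

# Rows: a specialist for env A, a generalist, a specialist for env B.
payoff = [[3, 0],
          [2, 2],
          [0, 3]]
print(minimax_value(payoff))          # the generalist guarantees 2
print(minimax_regret_choice(payoff))  # the generalist also minimizes regret
```

Both criteria favor the robust middle row here; with other payoffs they can disagree, which is why the distinction matters in the conversation about robustness in RL.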

Radiology AI Podcasts | RSNA
Weakly-Supervised Learning for Global, Examination Labels and Code-Sharing Practices- Part 2

Radiology AI Podcasts | RSNA

Play Episode Listen Later Apr 7, 2023 27:25


Co-hosts Dr. Paul Yi and Dr. Ali Tenjani speak with Jacopo Teneggi and Dr. Jeremias Sulam about Jacopo's trainee award winning research Weakly-Supervised Learning Substantially Reduces the Number of Labels Required for Intracranial Hemorrhage Detection on Head CT.

Radiology AI Podcasts | RSNA
Weakly-Supervised Learning for Global, Examination Labels and Code-Sharing Practices- Part 1

Radiology AI Podcasts | RSNA

Play Episode Listen Later Mar 17, 2023 20:28


Co-hosts Dr. Paul Yi and Dr. Ali Tenjani speak with Jacopo Teneggi and Dr. Jeremias Sulam about Jacopo's trainee award winning research Weakly-Supervised Learning Substantially Reduces the Number of Labels Required for Intracranial Hemorrhage Detection on Head CT.

Super Prompt: Generative AI w/ Tony Wan
AI Voice Profiling Revolutionizes Healthcare | Are You Sick by the Sound of Your Voice | CTO/Entrepreneur Mario Arancibia | Episode 9

Super Prompt: Generative AI w/ Tony Wan

Play Episode Listen Later Feb 20, 2023 61:56


I speak with CTO and Chilean entrepreneur Mario Arancibia about AI his company has developed and deployed that screens for diseases, such as Covid-19, based on the sound of our voice. When you speak a simple phrase into your phone, such as the days of the week, the AI can tell from your voice profile whether you have Covid. Or not. The AI can be trained to screen for other respiratory illnesses, and for conditions as far-ranging as obesity and drug and alcohol use. All from the sound of our voice. Soon AI will know more about your health than you do. [Note: Mario's views are his own, and not necessarily that of his company.]
We laugh. We cry. We iterate.
Check out what THE MACHINES and one human say about the Super Prompt podcast:
“I'm afraid I can't do that.” — HAL9000
“These are not the droids you are looking for." — Obi-Wan
“Like tears in rain.” — Roy Batty
“Hasta la vista baby.” — T1000
"I'm sorry, but I do not have information after my last knowledge update in January 2022." — GPT3

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
AI Today Podcast: AI Glossary – Machine Learning Approaches: Supervised Learning, Unsupervised Learning, Reinforcement Learning

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Play Episode Listen Later Feb 8, 2023 9:37


In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer define terms related to machine learning approaches, including supervised learning, unsupervised learning, and reinforcement learning, and explain how they relate to AI and why it's important to know about them.
Show Notes:
- FREE Intro to CPMAI mini course
- CPMAI Training and Certification
- AI Glossary
- Glossary Series: Artificial Intelligence
- AI Glossary Series – Machine Learning, Algorithm, Model
- Glossary Series: Probabilistic & Deterministic
- Glossary Series: Classification & Classifier, Binary Classifier, Multiclass Classifier, Decision Boundary
- Glossary Series: Regression, Linear Regression
- Glossary Series: Clustering, Cluster Analysis, K-Means, Gaussian Mixture Model
- Glossary Series: Goal-Driven Systems & Roboadvisor
- Understanding the Goal-Driven Systems Pattern of AI
Continue reading AI Today Podcast: AI Glossary – Machine Learning Approaches: Supervised Learning, Unsupervised Learning, Reinforcement Learning at AI & Data Today.
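The three approaches this glossary episode defines can be contrasted in a few lines of code. Everything below (the 1-D "features", the animal labels, the two slot machines) is an invented toy, not material from the episode:

```python
import random

# Supervised learning: learn from labeled (input, label) pairs.
# Here, a 1-nearest-neighbour rule over made-up 1-D feature values.
train = [(1.0, "cat"), (1.2, "cat"), (4.0, "dog"), (4.3, "dog")]

def predict(x):
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised learning: no labels at all; just split the raw values
# at the midpoint of their range into two groups.
values = [v for v, _ in train]
threshold = (min(values) + max(values)) / 2
groups = [0 if v < threshold else 1 for v in values]

# Reinforcement learning: learn from reward alone. An epsilon-greedy agent
# chooses between two slot machines; arm 1 secretly pays off more often.
rng = random.Random(0)
estimates, counts = [0.0, 0.0], [0, 0]
for _ in range(500):
    explore = rng.random() < 0.1
    arm = rng.randrange(2) if explore else estimates.index(max(estimates))
    reward = 1 if rng.random() < (0.2 if arm == 0 else 0.8) else 0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print(predict(1.1), groups, estimates.index(max(estimates)))
```

Same data structure, three different learning signals: labels, no labels, and reward. That is the core distinction the hosts draw.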

Super Prompt: Generative AI w/ Tony Wan
AI Beats Human Master | Alpha Go by DeepMind | Supervised Learning | Episode 7

Super Prompt: Generative AI w/ Tony Wan

Play Episode Listen Later Feb 6, 2023 44:51


Alpha Go AI plays the game of GO against a human world champion. Unexpected moves by both man (9-dan Go champion Lee Sedol) and machine (Alpha Go). Supposedly, this televised Go match woke up China's leadership to the potential of AI. In the game of Go, players take turns placing black and white tiles on a 19×19 grid. The number of board positions in Go is greater than the number of atoms in the observable universe. We discuss the documentary Alpha Go, which tells the story of Alpha Go (created by DeepMind, acquired by Google), and the human Go champions it plays against. Who will you cheer for: man or machine? I speak again with my friend Maroof Farook, an AI Engineer at Nvidia. [Note: Maroof's views are his and not that of his employer.] Please enjoy our conversation.
We laugh. We cry. We iterate.
Check out what THE MACHINES and one human say about the Super Prompt podcast:
“I'm afraid I can't do that.” — HAL9000
“These are not the droids you are looking for." — Obi-Wan
“Like tears in rain.” — Roy Batty
“Hasta la vista baby.” — T1000
"I'm sorry, but I do not have information after my last knowledge update in January 2022." — GPT3

Super Prompt: Generative AI w/ Tony Wan
GPT-3 | ChatGPT Under the Hood | Natural Language Processing | Episode 2

Super Prompt: Generative AI w/ Tony Wan

Play Episode Listen Later Jan 2, 2023 29:00


I speak again with my friend, Maroof Farooq, an AI engineer at Nvidia. [Note: Maroof's views are his own, and not that of his employer.] We discuss a breakthrough in natural language processing AI called GPT3, created by the research lab OpenAI. This episode was recorded prior to the launch of ChatGPT (a chatbot built on top of GPT-3) and is a good introduction on how GPT works under the hood. We dive into supervised vs. unsupervised learning, what GPT3 stands for (spoiler alert: Generative Pre-trained Transformer), what the heck those words mean, and how GPT3 can impersonate famous people like Isaac Asimov, Isaac Newton, the Hulk (yeah, the buff, green superhero), and someday… YOU! Please enjoy this episode.
We laugh. We cry. We iterate.
Check out what THE MACHINES and one human say about the Super Prompt podcast:
“I'm afraid I can't do that.” — HAL9000
“These are not the droids you are looking for." — Obi-Wan
“Like tears in rain.” — Roy Batty
“Hasta la vista baby.” — T1000
"I'm sorry, but I do not have information after my last knowledge update in January 2022." — GPT3
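The "generative pre-trained" idea the episode unpacks can be shrunk to a bigram table. This is a deliberately tiny caricature of GPT's self-supervised next-token objective, using an invented ten-word corpus rather than anything from the show:

```python
import random
from collections import defaultdict

# "Pre-training" here is just counting which word follows which in raw,
# unlabeled text: the same next-token prediction objective GPT-3 uses at scale.
corpus = "the robot reads the book and the robot writes the book".split()
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Generative step: repeatedly sample a plausible next token."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break  # no continuation ever observed for this word
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the"))
```

Every sentence it emits is stitched from word pairs seen during "pre-training"; GPT-3 does the same thing with a transformer over billions of tokens instead of a frequency table over ten.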

Super Prompt: Generative AI w/ Tony Wan
"Hot Dog. Not Hot Dog." AI from Silicon Valley, the TV Series | How to Build and Train AI | Image Classification | Episode 1

Super Prompt: Generative AI w/ Tony Wan

Play Episode Listen Later Dec 26, 2022 34:53


I speak w/ Maroof Farooq, an AI engineer at Nvidia. [Note: Maroof's views are his own, and not that of his employer.] We walk through how to build AI from scratch using the fictitious example of the Seefood [Sic] app from the HBO television series, Silicon Valley. We learn about image classification, how to acquire a dataset, and how to train the AI. Join us as we build a super-impressive AI that can recognize hot dogs of all shapes and sizes. Learn what it takes to go from there to an AI that can recognize foods of all kinds. Maybe even pizza. Join us as we begin our deep dive into the world of AI, starting with the humble hot dog. Today: Shazam for food. Tomorrow: Judgement Day. Buckle up folks. It's going to be a wild ride.
We laugh. We cry. We iterate.
Check out what THE MACHINES and one human say about the Super Prompt podcast:
“I'm afraid I can't do that.” — HAL9000
“These are not the droids you are looking for." — Obi-Wan
“Like tears in rain.” — Roy Batty
“Hasta la vista baby.” — T1000
"I'm sorry, but I do not have information after my last knowledge update in January 2022." — GPT3
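The train-a-classifier loop the episode walks through can be sketched with a perceptron. Real hot dog detectors learn pixel features with deep networks; here the "images" are two invented hand-picked features per example (elongation, redness), purely for illustration:

```python
# Each "image" is reduced to two made-up features: (elongation, redness).
# Labels: 1 = hot dog, 0 = not hot dog. Data is invented for illustration.
data = [((0.9, 0.8), 1), ((0.8, 0.7), 1), ((0.95, 0.9), 1),
        ((0.2, 0.3), 0), ((0.3, 0.1), 0), ((0.1, 0.4), 0)]

w, b = [0.0, 0.0], 0.0
for _ in range(20):                      # training epochs over the dataset
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred               # perceptron rule: update on mistakes only
        w[0] += err * x1
        w[1] += err * x2
        b += err

def classify(x1, x2):
    return "hot dog" if w[0] * x1 + w[1] * x2 + b > 0 else "not hot dog"

print(classify(0.85, 0.75), classify(0.15, 0.2))
```

Swap the hand-picked features for a convolutional network over raw pixels and scale up the dataset, and you have the Seefood app's actual recipe: labeled examples in, decision boundary out.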

PaperPlayer biorxiv neuroscience
Inference of Presynaptic Connectivity from Temporally Blurry Spike Trains by Supervised Learning

PaperPlayer biorxiv neuroscience

Play Episode Listen Later Oct 20, 2022


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.10.20.513050v1?rss=1 Authors: Vareberg, A. D., Eizadi, J., Ren, X., Hai, A. Abstract: Reconstruction of neural network connectivity is a central focus of neuroscience. The ability to use neuronal connection information to predict activity at single unit resolution and decipher its effect on whole systems can provide critical information about behavior and cognitive processing. Neuronal sensing modalities come in varying forms, but there is yet to exist a modality that can deliver readouts that sufficiently address the spatiotemporal constraints of biological nervous systems. This necessitates supplementary approaches that rely on mathematical models to mitigate physical limitations and decode network features. Here, we introduce a simple proof-of-concept model that addresses temporal constraints by reconstructing presynaptic connections from temporally blurry data. We use a variation of the perceptron algorithm to process firing rate information at multiple time constraints for a heterogenous feed-forward network of excitatory, inhibitory, and unconnected presynaptic units. We evaluate the performance of the algorithm under these conditions and determine the optimal learning rate, firing rate, and the ability to reconstruct single unit spikes for a given degree of temporal blur. We then test our method on a physiologically relevant configuration by sampling network subpopulations of leaky integrate-and-fire neuronal models displaying bursting firing patterns and find comparable learning rates for optimized reconstruction of network connectivity. Our method provides a recipe for reverse engineering neural networks based on limited data quality that can be extended to more complicated readouts and connectivity distributions relevant to multiple brain circuits. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC
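The paper's core move, using a perceptron-style supervised rule to recover presynaptic connectivity from firing-rate readouts, can be caricatured with simulated data. Everything below is invented (the temporal blur and the leaky integrate-and-fire details of the paper are simplified away); it only shows the general recipe of fitting weights that predict a binary postsynaptic response:

```python
import random

rng = random.Random(1)
n_pre = 8
# Hidden ground truth: +1 excitatory, -1 inhibitory, 0 unconnected.
true_w = [rng.choice([1, -1, 0]) for _ in range(n_pre)]

# Simulated observations: presynaptic firing rates and a binary
# postsynaptic response driven by the hidden weights.
samples = []
for _ in range(400):
    rates = [rng.random() for _ in range(n_pre)]
    drive = sum(w * r for w, r in zip(true_w, rates))
    samples.append((rates, 1 if drive > 0 else 0))

# Perceptron rule: nudge weights toward the rates on misclassified samples.
w = [0.0] * n_pre
lr = 0.05
for _ in range(100):
    for rates, spike in samples:
        pred = 1 if sum(wi * r for wi, r in zip(w, rates)) > 0 else 0
        for i in range(n_pre):
            w[i] += lr * (spike - pred) * rates[i]

# The learned weights predict the postsynaptic response well, and their
# signs approximate the hidden excitatory/inhibitory structure.
accuracy = sum(
    (1 if sum(wi * r for wi, r in zip(w, rates)) > 0 else 0) == spike
    for rates, spike in samples
) / len(samples)
print(accuracy)
```

The paper's contribution is making this kind of inference work under realistic temporal blur and physiologically plausible neuron models, which the toy above deliberately ignores.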

Der Data Analytics Podcast
5 Minuten: KI Erklärt im Schnelldurchlauf

Der Data Analytics Podcast

Play Episode Listen Later Apr 21, 2022 4:53


What does artificial intelligence mean? And, under that umbrella, supervised learning, unsupervised learning, and reinforcement learning... along with the technique of deep learning?

Lex Fridman Podcast
#258 – Yann LeCun: Dark Matter of Intelligence and Self-Supervised Learning

Lex Fridman Podcast

Play Episode Listen Later Jan 22, 2022 171:40


Yann LeCun is the Chief AI Scientist at Meta, professor at NYU, Turing Award winner, and one of the seminal researchers in the history of machine learning. Please support this podcast by checking out our sponsors:
– Public Goods: https://publicgoods.com/lex and use code LEX to get $15 off
– Indeed: https://indeed.com/lex to get $75 credit
– ROKA: https://roka.com/ and use code LEX to get 20% off your first order
– NetSuite: http://netsuite.com/lex to get free product tour
– Magic Spoon: https://magicspoon.com/lex and use code LEX to get $5 off
EPISODE LINKS:
Yann's Twitter: https://twitter.com/ylecun
Yann's Facebook: https://www.facebook.com/yann.lecun
Yann's Website: http://yann.lecun.com/

Delicate Database with Aaron
Machine Learning Algorithms

Delicate Database with Aaron

Play Episode Listen Later Jan 17, 2022 12:13


Happy New Year Everyone! Hope you all enjoyed the holiday period. I'm baaaack and in today's episode, I discuss a subcategory of machine learning - Supervised Learning. I break down the basics of Supervised Learning, what it is and how it works. Get in touch and let me know what you thought! Twitter: @Delicate_Data Email: timicode54@gmail.com --- Send in a voice message: https://anchor.fm/delicatedatabase/message

Life with AI
#31 - How to make autonomous vehicles drive in different places? Understanding domain adaptation and semi-supervised learning.

Life with AI

Play Episode Listen Later Dec 30, 2021 20:48


Hey guys, in this episode I explain how an autonomous vehicle can learn to drive in a specific city and be able to generalize the driving knowledge to any other city. To explain it, I show the concepts of semi-supervised learning and domain adaptation. I explain the idea with the first proposed architecture and what we have today as the state of the art for this problem. Instagram: https://www.instagram.com/podcast.lifewithai/ Linkedin: https://www.linkedin.com/company/life-with-ai Code: https://github.com/filipelauar/projects/blob/main/domain_adaptation_semi_supervised_learning_MNIST_pytorch.ipynb
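One common semi-supervised recipe for this kind of domain gap is self-training with pseudo-labels. The sketch below is a toy, not the episode's architecture: labeled "source city" examples are used to pseudo-label unlabeled "target city" inputs, which are then folded back into the training set (1-D features and the road/sidewalk labels are invented):

```python
# Labeled source-domain examples and unlabeled target-domain inputs, as 1-D features.
labeled = [(0.0, "road"), (0.2, "road"), (1.0, "sidewalk"), (1.2, "sidewalk")]
unlabeled = [0.1, 0.15, 1.1, 1.05, 0.05, 1.3]   # new city: inputs, no labels

def nearest_label(x, examples):
    """Classify x by the label of its nearest example."""
    return min(examples, key=lambda e: abs(e[0] - x))[1]

# Self-training step: predict pseudo-labels for the unlabeled target data,
# then treat those predictions as extra training examples.
pseudo = [(x, nearest_label(x, labeled)) for x in unlabeled]
adapted = labeled + pseudo

print(nearest_label(1.07, adapted))
```

After the pseudo-labeling pass, the model's training set covers the target domain's input distribution, which is the essential trick behind adapting a driving model from one city (or country) to another.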

Vida com IA
#31 - Como fazer carros autônomos dirigirem em todos os lugares? Explicando domain adaptation e semi-supervised learning.

Vida com IA

Play Episode Listen Later Dec 30, 2021 22:01


Hey folks, in this episode I explain how an autonomous car can be trained to drive in one country, Germany for example, and still be able to drive in Brazil even though everything is so different. To explain this, I talk about the concepts of domain adaptation and semi-supervised learning. I explain the idea with the first proposed architecture and also talk about the state of the art we have today for solving this problem. Instagram: https://www.instagram.com/podcast.lifewithai/ Linkedin: https://www.linkedin.com/company/life-with-ai Code: https://github.com/filipelauar/projects/blob/main/domain_adaptation_semi_supervised_learning_MNIST_pytorch.ipynb

Conversations On Science
Mathilde Caron, Self-Supervised Learning Research

Conversations On Science

Play Episode Listen Later Dec 16, 2021 47:23


Mathilde Caron is a PhD. candidate at the French National Institute for Research in Digital Science and Technology and at Facebook AI (Meta AI). She does the majority of her research in the field of Machine learning called self-supervised learning. She has a few first authorships on important academic papers in the space. Her work: https://scholar.google.com/citations?user=eiB0s-kAAAAJ&hl=fr You can donate to this podcast at this bitcoin address: 33wejXuGGDtQj9GPwCgjwPxPq4dc4muZjg --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/idris-sunmola/support

The Gradient Podcast
Alex Tamkin on Self-Supervised Learning and Large Language Models

The Gradient Podcast

Play Episode Listen Later Nov 11, 2021 70:41


In episode 15 of The Gradient Podcast, we talk to Stanford PhD candidate Alex Tamkin. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Alex Tamkin is a fourth-year PhD student in Computer Science at Stanford, advised by Noah Goodman and part of the Stanford NLP Group. His research focuses on understanding, building, and controlling pretrained models, especially in domain-general or multimodal settings. We discuss:
- Viewmaker Networks: Learning Views for Unsupervised Representation Learning
- DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning
- On the Opportunities and Risks of Foundation Models
- Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models
- Mentoring, teaching and fostering a healthy and inclusive research culture
- Scientific communication and breaking down walls between fields
Podcast Theme: "MusicVAE: Trio 16-bar Sample #2" from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music". Get full access to The Gradient at thegradientpub.substack.com/subscribe

Vida com IA
#24 - Dicas para treinar redes neurais, data augmentation e self-supervised learning com Fernando Santos.

Vida com IA

Play Episode Listen Later Nov 11, 2021 35:08


Hey folks, in this episode my guest is Fernando Santos, a postdoc researcher and professor at USP. We talk about several tips for training neural networks and some more advanced techniques, such as curriculum learning and distance learning, and especially about the use of data augmentation and self-supervision (self-supervised learning). Instagram: https://www.instagram.com/podcast.lifewithai/ Linkedin: https://www.linkedin.com/company/life-with-ai Fernando's Linkedin: https://www.linkedin.com/in/fernando-persan/ Tutorial code: https://github.com/maponti/trainingdeepnetworks

Life with AI
#24 - Tips for training neural networks, data augmentation and self-supervised learning with Fernando Santos.

Life with AI

Play Episode Listen Later Nov 11, 2021 15:30


Hey guys, in this episode I talk about the main points of the episode that I recorded in Portuguese with Fernando Santos, a post doc researcher and professor at USP. In this episode, I talk about tips to train neural networks and some techniques that are really improving the convergence of the training, like data augmentation and self-supervised learning. Instagram: https://www.instagram.com/podcast.lifewithai/ Linkedin: https://www.linkedin.com/company/life-with-ai Fernando's Linkedin: https://www.linkedin.com/in/fernando-persan/ Tutorial's code: https://github.com/maponti/trainingdeepnetworks

Papers Read on AI
Self-Supervised Learning by Estimating Twin Class Distributions

Papers Read on AI

Play Episode Listen Later Oct 29, 2021 29:55


We present TWIST, a novel self-supervised representation learning method by classifying large-scale unlabeled datasets in an end-to-end way. We employ a siamese network terminated by a softmax operation to produce twin class distributions of two augmented images. Without supervision, we enforce the class distributions of different augmentations to be consistent. In the meantime, we regularize the class distributions to make them sharp and diverse. Specifically, we minimize the entropy of the distribution for each sample to make the class prediction for each sample assertive and maximize the entropy of the mean distribution to make the predictions of different samples diverse. In this way, TWIST can naturally avoid the trivial solutions without specific designs such as asymmetric network, stop-gradient operation, or momentum encoder. Different from the clustering-based methods which alternate between clustering and learning, our method is a single learning process guided by a unified loss function. As a result, TWIST outperforms state-of-the-art methods on a wide range of tasks, including unsupervised classification, linear classification, semi-supervised learning, transfer learning, and some dense prediction tasks such as detection and segmentation. Codes and pre-trained models are given on: https://github.com/bytedance/TWIST 2021: Feng Wang, T. Kong, Rufeng Zhang, Huaping Liu, Hang Li https://arxiv.org/pdf/2110.07402v3.pdf
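The two entropy terms described in the abstract (minimize each sample's entropy so predictions are sharp; maximize the entropy of the mean distribution so predictions are diverse) can be written out directly. A minimal numeric sketch of those terms, not the authors' implementation:

```python
import math

def entropy(p):
    """Shannon entropy of a discrete probability distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def twist_terms(class_dists):
    """The two entropy terms of the TWIST-style objective:
    mean per-sample entropy (to be minimized -> sharp predictions) and
    entropy of the mean distribution (to be maximized -> diverse predictions)."""
    n, k = len(class_dists), len(class_dists[0])
    mean_dist = [sum(p[j] for p in class_dists) / n for j in range(k)]
    sharpness = sum(entropy(p) for p in class_dists) / n
    diversity = entropy(mean_dist)
    return sharpness, diversity

# Sharp and diverse predictions: low first term, high second term.
good = [[0.99, 0.01], [0.01, 0.99]]
# Collapsed predictions (the trivial solution): both terms low.
bad = [[0.99, 0.01], [0.99, 0.01]]
print(twist_terms(good))
print(twist_terms(bad))
```

The collapsed case shows why the second term matters: without the diversity term, assigning every sample to one class would minimize the loss trivially.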

The Gradient Podcast
Yann LeCun on his Start in Research and Self-Supervised Learning

The Gradient Podcast

Play Episode Listen Later Aug 5, 2021 55:48


In episode 6 of The Gradient Podcast, we interview Deep Learning pioneer Yann LeCun. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter. Yann LeCun is the VP & Chief AI Scientist at Facebook and Silver Professor at NYU, and he was also the founding Director of Facebook AI Research and of the NYU Center for Data Science. He famously pioneered the use of Convolutional Neural Nets for image processing in the 80s and 90s, and is generally regarded as one of the people whose work was pivotal to the Deep Learning revolution in AI. In fact, he is the recipient of the 2018 ACM Turing Award (with Geoffrey Hinton and Yoshua Bengio) for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing". Theme: "MusicVAE: Trio 16-bar Sample #2" from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music". Get full access to The Gradient at thegradientpub.substack.com/subscribe

Soft Robotics Podcast
Clip: "Sergey Levine: Generalization In Reinforcement Learning VS Supervised Learning"

Soft Robotics Podcast

Play Episode Listen Later Jun 1, 2021 2:59


Clip: "Sergey Levine: Generalization In Reinforcement Learning VS Supervised Learning"

Yannic Kilcher Videos (Audio Only)
Yann LeCun - Self-Supervised Learning: The Dark Matter of Intelligence (FAIR Blog Post Explained)

Yannic Kilcher Videos (Audio Only)

Play Episode Listen Later May 3, 2021 58:36


#selfsupervisedlearning #yannlecun #facebookai Deep Learning systems can achieve remarkable, even super-human performance through supervised learning on large, labeled datasets. However, there are two problems: First, collecting ever more labeled data is expensive in both time and money. Second, these deep neural networks will be high performers on their task, but cannot easily generalize to other, related tasks, or they need large amounts of data to do so. In this blog post, Yann LeCun and Ishan Misra of Facebook AI Research (FAIR) describe the current state of Self-Supervised Learning (SSL) and argue that it is the next step in the development of AI that uses fewer labels and can transfer knowledge faster than current systems. As a promising direction, they suggest building non-contrastive latent-variable predictive models, like VAEs, but ones that also provide high-quality latent representations for downstream tasks. OUTLINE: 0:00 - Intro & Overview 1:15 - Supervised Learning, Self-Supervised Learning, and Common Sense 7:35 - Predicting Hidden Parts from Observed Parts 17:50 - Self-Supervised Learning for Language vs Vision 26:50 - Energy-Based Models 30:15 - Joint-Embedding Models 35:45 - Contrastive Methods 43:45 - Latent-Variable Predictive Models and GANs 55:00 - Summary & Conclusion Paper (Blog Post): https://ai.facebook.com/blog/self-sup... My Video on BYOL: https://www.youtube.com/watch?v=YPfUi... ERRATA: - The difference between loss and energy: energy is for inference, loss is for training. - The R(z) term is a regularizer that restricts the capacity of the latent variable. I think I said both of those things, but never together. - The way I explain why BERT is contrastive is wrong. I haven't figured out why just yet, though :) Video approved by Antonio. Abstract: We believe that self-supervised learning (SSL) is one of the most promising ways to build such background knowledge and approximate a form of common sense in AI systems.
Authors: Yann LeCun, Ishan Misra Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yann... Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-ki... BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannick... Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
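The contrastive idea covered in the video (a joint-embedding model should assign low energy to compatible pairs and high energy to incompatible ones) can be shown with a minimal pairwise energy and margin loss. This is a generic textbook-style sketch of the concept, not code from the blog post or the video:

```python
# Minimal contrastive-energy sketch of the joint-embedding idea:
# pull the embeddings of a compatible (positive) pair together,
# push an incompatible (negative) pair apart, up to a margin.
def energy(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def contrastive_loss(anchor, positive, negative, margin=1.0):
    """Positive-pair energy plus the margin shortfall of the negative pair."""
    return energy(anchor, positive) + max(0.0, margin - energy(anchor, negative))

# Toy embeddings: the positive is close, the negative is already far.
anchor, positive, negative = [0.0, 0.0], [0.1, 0.0], [0.9, 0.9]
print(contrastive_loss(anchor, positive, negative))
```

The video's point about non-contrastive methods is that this margin term needs negative samples, which is exactly what approaches like BYOL and the suggested latent-variable predictive models try to avoid.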

Machine Learning Podcast - Jay Shah
Solving Dark Matter of Intelligence, Self-Supervised Learning | Ishan Mishra, Facebook AI ​

Machine Learning Podcast - Jay Shah

Play Episode Listen Later Apr 9, 2021 47:50


Ishan is a Research Scientist at Facebook AI. Much of his recent research revolves around self-supervised learning; he is known for works including SwAV and PIRL. He completed his Ph.D. at CMU with Martial Hebert and Abhinav Gupta, and his thesis was titled "Visual Learning with Minimal Human Supervision". Ishan's homepage: http://imisra.github.io/ About the host: Jay is a Ph.D. student at Arizona State University, doing research on building interpretable AI models for medical diagnosis. Jay Shah: https://www.linkedin.com/in/shahjay22/ You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***

3 minute lesson
Supervised learning | Artificial intelligence

3 minute lesson

Play Episode Listen Later Feb 2, 2021 2:57


Episode 217. Topic: Supervised learning. Theme: Artificial intelligence. How does a machine learn to do a task? One of the simpler approaches is supervised learning. How does it work, and when is it used?

PaperPlayer biorxiv bioinformatics
scPretrain: Multi-task self-supervised learning for cell type classification

PaperPlayer biorxiv bioinformatics

Play Episode Listen Later Nov 20, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.11.18.386102v1?rss=1 Authors: Zhang, R., Luo, Y., Ma, J., Zhang, M., Wang, S. Abstract: Rapidly generated scRNA-seq datasets enable us to understand cellular differences and the function of each individual cell at single-cell resolution. Cell type classification, which aims at characterizing and labeling groups of cells according to their gene expression, is one of the most important steps for single-cell analysis. To facilitate the manual curation process, supervised learning methods have been used to automatically classify cells. Most of the existing supervised learning approaches only utilize annotated cells in the training step while ignoring the more abundant unannotated cells. In this paper, we proposed scPretrain, a multi-task self-supervised learning approach that jointly considers annotated and unannotated cells for cell type classification. scPretrain consists of a pre-training step and a fine-tuning step. In the pre-training step, scPretrain uses a multi-task learning framework to train a feature extraction encoder based on each dataset's pseudo-labels, where only unannotated cells are used. In the fine-tuning step, scPretrain fine-tunes this feature extraction encoder using the limited annotated cells in a new dataset. We evaluated scPretrain on 60 diverse datasets from different technologies, species and organs, and obtained a significant improvement on both cell type classification and cell clustering. Moreover, the representations obtained by scPretrain in the pre-training step also enhanced the performance of conventional classifiers such as random forest, logistic regression and support vector machines. scPretrain is able to effectively utilize the massive amount of unlabelled data and be applied to annotating increasingly generated scRNA-seq datasets. Copy rights belong to original authors. Visit the link for more info

PaperPlayer biorxiv neuroscience
Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised learning

PaperPlayer biorxiv neuroscience

Play Episode Listen Later Nov 17, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.11.16.383984v1?rss=1 Authors: Li, X., Zhang, G., Wu, J., Zhang, Y., Zhao, Z., Lin, X., Qiao, H., Xie, H., Wang, H., Fang, L., Dai, Q. Abstract: Calcium imaging is inherently susceptible to detection noise especially when imaging with high frame rate or under low excitation dosage. We developed DeepCAD, a self-supervised learning method for spatiotemporal enhancement of calcium imaging without requiring any high signal-to-noise ratio (SNR) observations. Using this method, detection noise can be effectively suppressed and the imaging SNR can be improved more than tenfold, which massively improves the accuracy of neuron extraction and spike inference and facilitate the functional analysis of neural circuits. Copy rights belong to original authors. Visit the link for more info

PaperPlayer biorxiv bioinformatics
Deep Semi-Supervised Learning Improves Universal Peptide Identification of Shotgun Proteomics Data

PaperPlayer biorxiv bioinformatics

Play Episode Listen Later Nov 14, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.11.12.380881v1?rss=1 Authors: Halloran, J. T., Urban, G., Rocke, D. M., Baldi, P. F. Abstract: In proteomic analysis pipelines, machine learning post-processors play a critical role in improving the accuracy of shotgun proteomics analysis. Most often performed in a semi-supervised manner, such post-processors accept the peptide-spectrum matches (PSMs) and corresponding feature vectors resulting from a database search, train a machine learning classifier, and recalibrate PSM scores based on the resulting trained parameters, often leading to significantly more identified peptides across q-value thresholds. However, current state-of-the-art post-processors rely on shallow machine learning methods, such as SVMs, gradient boosted decision trees, and linear discriminant analysis. In contrast, the powerful learning capabilities of deep models have displayed superior performance to shallow models in an ever-growing number of other fields. In this work, we show that deep neural networks (DNNs) significantly improve the recalibration of shotgun proteomics data compared to the most accurate and widely used post-processors, such as Percolator and PeptideProphet. Furthermore, we show that DNNs are able to adaptively analyze complex datasets and features for more accurate universal post-processing, leading to both improved Prosit analysis and markedly better recalibration of recently developed p-value scoring functions. Copy rights belong to original authors. Visit the link for more info

Why of AI
What Is Self-Supervised Learning? This vs Other Machine Learning Types

Why of AI

Play Episode Listen Later Sep 3, 2020 5:35


What is self-supervised learning? This episode explores this exciting and promising area while comparing the differences between self-supervised learning and other machine learning types such as supervised and unsupervised learning.    SUBSCRIBE – YouTube: https://bit.ly/aiwalexs | Alex's Newsletter: https://www.whyofai.com/newsletter | LEARN – Artificial Intelligence Courses and Certifications at Why of AI: https://www.whyofai.com | Alex's Book: https://www.whyofai.com/ai-book | Alex's Book on Amazon: https://amzn.to/2O54wQU |  SOCIAL – Twitter: https://twitter.com/alexcastrounis | LinkedIn: https://www.linkedin.com/in/alexcastrounis | © Why of AI 2021. All Rights Reserved.Support the show (https://www.buymeacoffee.com/alexcastrounis/)

PaperPlayer biorxiv neuroscience
Deep Feature Extraction for Resting-State Functional MRI by Self-Supervised Learning and Application to Schizophrenia Diagnosis

PaperPlayer biorxiv neuroscience

Play Episode Listen Later Aug 24, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.08.22.260406v1?rss=1 Authors: Hashimoto, Y., Ogata, Y., Honda, M., Yamashita, Y. Abstract: In this study, we propose a novel deep-learning technique for functional MRI analysis. We introduced an "identity feature" by a self-supervised learning schema, in which a neural network is trained solely based on the MRI scans; furthermore, training does not require any explicit labels. The proposed method demonstrated that each temporal slice of resting-state functional MRI contains enough information to identify the subject. The network learned a feature space in which the features were clustered per subject for the test data as well as for the training data; this is unlike the features extracted by conventional methods, including region-of-interest signal pooling and principal component analysis. In addition, using a simple linear classifier for the identity features, we demonstrated that the extracted features could contribute to schizophrenia diagnosis. The classification accuracy of our identity features was higher than that of the conventional functional connectivity. Our results suggested that our proposed training scheme of the neural network captured brain functioning related to the diagnosis of psychiatric disorders as well as the identity of the subject. Our results together highlight the validity of our proposed technique as a design for self-supervised learning. Copy rights belong to original authors. Visit the link for more info

Tic-Tac-Toe the Hard Way
Lessons learned

Tic-Tac-Toe the Hard Way

Play Episode Listen Later Jul 22, 2020 33:01


What have we learned about machine learning and the human decisions that shape it? And is machine learning perhaps changing our minds about how the world outside of machine learning — also known as the world — works?

Tic-Tac-Toe the Hard Way
Head to Head: The Even Bigger ML Smackdown!

Tic-Tac-Toe the Hard Way

Play Episode Listen Later Jul 22, 2020 24:26


Yannick and David’s systems play against each other in 500 games. Who’s going to win? And what can we learn about how the ML may be working by thinking about the results?

Tic-Tac-Toe the Hard Way
Enter tic-tac-two

Tic-Tac-Toe the Hard Way

Play Episode Listen Later Jul 22, 2020 21:20


David’s variant of tic-tac-toe that we’re calling tic-tac-two is only slightly different but turns out to be far more complex. This requires rethinking what the ML system will need in order to learn how to play, and how to represent that data.

Tic-Tac-Toe the Hard Way
Head to Head: the Big ML Smackdown!

Tic-Tac-Toe the Hard Way

Play Episode Listen Later Jul 22, 2020 25:19


David and Yannick’s tic-tac-toe ML agents face-off against each other in tic-tac-toe!

Tic-Tac-Toe the Hard Way
Give that model a treat! : Reinforcement learning explained

Tic-Tac-Toe the Hard Way

Play Episode Listen Later Jul 22, 2020 26:04


Switching gears, we focus on how Yannick’s been training his model using reinforcement learning. He explains the differences from David’s supervised learning approach. We find out how his system performs against a player that makes random tic-tac-toe moves.
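The key difference the episode draws out is that reinforcement learning gets a reward after play rather than a labeled "correct move" for each position. The core of tabular Q-learning, one standard way to do this, can be sketched in a few lines (a generic sketch, not Yannick's actual code; the board string and action indices are invented for illustration):

```python
# Tabular Q-learning update: after each move, nudge the value of
# (state, action) toward the reward plus the discounted value of the
# best action available in the next state.
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    best_next = max(q.get((next_state, a), 0.0) for a in range(9))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}
# A winning move gets reward +1; its value rises from 0 toward 1.
q_update(q, "X.O|.X.|O..", 8, reward=1.0, next_state="terminal")
print(q[("X.O|.X.|O..", 8)])  # -> 0.5
```

Repeating the update on the same transition moves the value further toward 1, which is the "give the model a treat" intuition from the episode title: good outcomes make the moves that led to them more likely to be chosen again.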

Tic-Tac-Toe the Hard Way
Beating random: What it means to have trained a model

Tic-Tac-Toe the Hard Way

Play Episode Listen Later Jul 22, 2020 17:14


David did it! He trained a machine learning model to play tic-tac-toe! How did his model do against a player that makes random tic-tac-toe moves?

Tic-Tac-Toe the Hard Way
From tic-tac-toe moves to ML model

Tic-Tac-Toe the Hard Way

Play Episode Listen Later Jul 22, 2020 21:37


Once we have the data we need—thousands of sample games—how do we turn it into something the ML can train itself on? That means understanding how training works, and what a model is.

Tic-Tac-Toe the Hard Way
What does a tic-tac-toe board look like to machine learning?

Tic-Tac-Toe the Hard Way

Play Episode Listen Later Jul 22, 2020 23:26


David delves into questions around data and training for his model including: What does a tic-tac-toe board “look” like to ML? Plus, an intro to reinforcement learning, the approach Yannick will be taking.

Tic-Tac-Toe the Hard Way
Introducing Tic-Tac-Toe the Hard Way

Tic-Tac-Toe the Hard Way

Play Episode Listen Later Jul 21, 2020 2:09


Introducing the podcast where a writer and a software engineer explore the human choices that shape machine learning systems by building competing tic-tac-toe agents. Brought to you by Google's People + AI Research team.

Tic-Tac-Toe the Hard Way
Howdy, and the myth of “pouring in data”

Tic-Tac-Toe the Hard Way

Play Episode Listen Later Jul 21, 2020 22:01


David and Yannick get started on their project to build competing machine learning systems that play tic-tac-toe. They discuss the human choices that will shape their systems along the way.

V-Next: The Future is Now
How Organizations can Harness the Power of Artificial Intelligence (AI)

V-Next: The Future is Now

Play Episode Listen Later Jun 3, 2020 59:31


In this episode I talk to Katie King, well-known AI author and CEO of AI in Business. I had the pleasure of catching up with her for an hour to discuss her journey in the AI space, tips and perspectives on how to think about AI, common misconceptions, her work with the All-Party Parliamentary Group, and the impacts of COVID-19 on the AI industry.

Let's Talk AI
AI News Talk #7: Mechanisms for AI Safety, Beyond Supervised Learning, and AI for Science

Let's Talk AI

Play Episode Listen Later Apr 26, 2020 27:03


Stanford AI Lab PhDs Andrey Kurenkov and Sharon Zhou discuss this week's major AI news stories. Check out all the stories discussed here and more at www.skynettoday.com. Theme: Deliberate Thought by Kevin MacLeod (incompetech.com). Licensed under Creative Commons: By Attribution 3.0 License

Tentang Data
S01E07 Self-supervised Learning di Computer Vision

Tentang Data

Play Episode Listen Later Apr 10, 2020 31:21


Although transfer learning found success first, it was the later success of self-supervised learning in NLP that inspired researchers in computer vision to tackle the problem of expensive data labeling. Together with G. Wesley P. Data (@gwesleypdata), I discuss his research on this topic, carried out at the Active Vision Lab at the University of Oxford. We also discuss which computer vision labs exist at the University of Oxford and what distinguishes each of them.

KI in der Industrie
Kurz KI - ML-Verfahren für die Industrie

KI in der Industrie

Play Episode Listen Later Mar 11, 2020 22:59


Prof. Dr. Oliver Niggemann of the Institut für Automatisierungstechnik at the Universität der Bundeswehr in Hamburg explains the most important AI terms for industry. From Auto-ML through explainable AI to supervised and unsupervised learning - always with the focus on what role these play for industry.

Leading NLP Ninja
ep47 (ICLR): ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

Leading NLP Ninja

Play Episode Listen Later Jan 12, 2020 25:34


From ICLR 2020: I explain ALBERT, which reduces parameters through factorized embeddings and parameter sharing, and adopts a sentence-order prediction task. The article covered in this episode is discussed in this issue: https://github.com/jojonki/arXivNotes/issues/348 We are also looking for supporters: https://www.patreon.com/jojonki --- Support this podcast: https://anchor.fm/lnlp-ninja/support

Rage Against the Data
IA spiegata a mia nonna - Supervised Learning

Rage Against the Data

Play Episode Listen Later Nov 9, 2019 2:21


First episode of the playlist "Intelligenza Artificiale spiegata a mia nonna" (Artificial intelligence explained to my grandmother). The goal is to create a series of videos explaining what AI is, using as little mathematics as possible to make the topics accessible to everyone. YouTube video: https://www.youtube.com/watch?v=XY1TwYOuOMg&t=39s

Lex Fridman Podcast
Yann LeCun: Deep Learning, Convolutional Neural Networks, and Self-Supervised Learning

Lex Fridman Podcast

Play Episode Listen Later Aug 31, 2019 76:07


Yann LeCun is one of the fathers of deep learning, the recent revolution in AI that has captivated the world with the possibility of what machines can learn from data. He is a professor at New York University, a Vice President & Chief AI Scientist at Facebook, co-recipient of the Turing Award for his work on deep learning. He is probably best known as the founder of convolutional neural networks, in particular their early application to optical character recognition. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to

The Banana Data Podcast
Prioritizing training data, model interpretability, and dodging an AI Winter

The Banana Data Podcast

Play Episode Listen Later Aug 16, 2019 27:19


This episode, Triveni and Will tackle the value, ethics, and methods behind good labeled data, while also weighing the need for model interpretability and the possibility of an impending AI winter. Triveni will also take us step by step through the decisions made by a Random Forest algorithm. As always, be sure to rate and subscribe! Be sure to check out the articles we mentioned this week: The Side of Machine Learning You're Undervaluing and How to Fix It by Matt Wilder (LabelBox), The Hidden Costs of Automated Thinking by Jonathan Zittrain (The New Yorker), and Another AI Winter Could Usher in a Dark Period for Artificial Intelligence by Eleanor Cummins (PopSci)

Machine learning
Ml genetic algorithms, supervised learning, deep learning with Ben Taylor

Machine learning

Play Episode Listen Later Jun 27, 2019 23:13


Zeff.ai strategies for short-term business projects with high return on investment: "High ROI projects are easy to measure in the short term."

Data Science at Home
Episode 66: More intelligent machines with self-supervised learning

Data Science at Home

Play Episode Listen Later Jun 25, 2019 18:56


In this episode I talk about a learning paradigm whose boundaries can seem a bit blurry, and not so different from the other methods we know, such as supervised and unsupervised learning. The method I introduce here is called self-supervised learning. Enjoy the show! Don't forget to subscribe to our newsletter at amethix.com and get the latest updates in AI and machine learning. We do not spam. Promise! References: Deep Clustering for Unsupervised Learning of Visual Features; Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey

Reversim Podcast
370 ThetaRay and Unsupervised Learning

Reversim Podcast

Play Episode Listen Later May 27, 2019


פודקאסט מספר 370 של רברס עם פלטפורמה - אורי ורן מארחים בכרכור את אתי גבירץ מחברת ThetaRay לשיחה על תוכנית הלימודים למכונות וילדים מוצלחים.אתי היא VP Product Management ב-ThetaRay - חברת בינה מלאכותית שמשלבת טכנולגיות Big Data עם אלגוריתמים ייחודים “שלומדים אינטואיטיבית” (Unsupervised Learning) שפותחו בחברה.הפלטפורמה משמשת אירגונים פיננסיים גלובאליים במלחמה בפשעים כלכליים (הלבנת הון, מימון טרור, סחר באנשים ושאר רעות חולות).הפתרונות גנריים לחלוטין ויכולים לשמש גם למקרים אחרים, אבל כרגע המיקוד הוא בתחום הפיננסי.שתי שאלות כלליות לפני הצלילה לטכנולוגיה - הלבנת הון נשמע אכן קשור לנושאים פיננסיים - איך סחר בנשים (למשל) מתקשר?בסופו של דבר צריך להעביר את הכסף . . .לחברה יש מיזם Pro bono עם עמותה בארה”ב, שמחפשת סימנים לסחר באנשים באמצעות מאגר מידע גדול, שחלקו כלכלי, ו-ThetaRay עוזרת למצוא נקודות שצריך לבדוק.לא מעט פשעים לאחרונה מתבצעים תוך שימוש במטבעות קריפטוגרפיים (נכון?) - האם יש ממשק גם לכיוון הזה?התחושה כנראה נכונה, אין כרגע ממשק פעיל בתחום אבל בהחלט יש מחקרים.אז נתחיל - מה זה Machine Learning מבחינתכם? איך זה משרת את החברה?ראשית - מוטיבציה: למה מכונות צריכות בכלל ללמוד?דמיינו ערימה של חפצים בצבעים שונים. למען הפשטות - רק צבעים אחידים, ואף אחד לא עיוור-צבעים (אין גברים בקהל, נכון?)למיין חמישים פריטים כאלה בשעה - לא בעיה, ונשאר המון זמןלמיין “הר” של כאלה (כמה מיליארדים) - כן בעיה, לא בשעה ולא ביום, חוץ מזה שגם ממש לא בא לכם לעשות את זה אלא לתת למישהו אחר (משימה פשוטה וחזרתית עם צורך ב-Throughput גבוה - מזכיר את פרק 363 על ה-GPU).אנחנו כבר יודעים למיין - “רק” צריך ללמד את המכונה לעשות את זה. איך? כמו שמלמדים ילדים: “זה תפוח אדום”, “זה אגס צהוב” וכו’. 
בפעם הבאה שואלים “מה זה?” ונותנים פידבק על התשובה, עד שיוסי הילד המוצלח לומד באמצעות דוגמאות - גם על המקרה הספציפי וגם להשליך על דברים אחרים (“כרבולת של תרנגול זה גם אדום” וכו’).כולל לרוב גם מקרים של “תרנגול זה תפוח” ודיונים על טבעונות, אבל זה כבר עניין אחר.באופן דומה ניתן ללמד מכונה להבדיל בין צבעים למשל - מתן המון דוגמאות ואז בחינה של התוצאה, תיקון ושוב עד לתוצאה הרצויה - ועכשיו המכונה יודעת למיין את הפריטים לפי צבע במקומכם.השלב הבא: באמצע הערימה יש פריט סגול . . כזה לא היה לנו קודם. מה עכשיו?אפשר להגיד “זה דומה לכחול” ואז לסווג ככחול בסבירות בינונית. האם זו טעות? תלוי בהגדרה.אפשר להגיד “זה דומה לכחול וגם לאדום” ולהגיד שלא ניתן לסווג בבטחון. האם זו טעות? שוב. תלוי מה רוצים, ומה הוא רווח הסמך שהוגדר.המקביל בדוגמא שלנו - יוסי לא יודע מה זה חציל כי לא היה קודם (ועוד לא ביקר אצל עובד).מה שראינו כאן זה דוגמא למגבלה: המכונה “לא יודעת” את מה שלא “לימדו” אותה קודם - Supervised Learning: מישהו מפקח על הלמידה.יש הגדרה של ה-Training Set - הקטיגוריות שיכללו, כמה דוגמאות בכל אחת, מהם ה - thresholds ולזיהוי כו’.המדדים להצלחה הם דיוק (Accuracy, Precision) וכיסוי (Coverage, Recall).מבחינת Detection יש התייחסות גם ל Detection Rateהכל חשוב ברמה העסקית - מהי המטרה שהמכונה משרתת? כאן יש גם כלים כמו ROC Curve או Confusion matrix שבאמצעותם מגדירים את הסף הנדרשמה חשוב כאן יותר - דיוק או כיסוי?מכונה שלומדת “טוב יותר” מצריכה פחות פשרות - אבל תמיד יש דעיכה (deterioration) וככל שמתרחקים מקבוצת הלמידה יש סיכוי גבוה יותר לטעות.חוץ מאצל יוסי.וכמו את יוסי - גם את המכונה צריך להמשיך ללמד.אז מה לגבי Unsupervised Learning?כל מה שדיברנו עליו עד עכשיו מוגדר כ”למידה קונספטואלית”. על מנת להבין Unsupervised Learning, נלך שוב לדוגמא של לימוד ילדים - והפעם באמצעות התבוננות (Observation) ולמידה אינטואיטיבית, בלי שמישהו אחר יגדיר זאת עבורם מראש.אחד הדברים שנלמדים כך זה ההגדרה של “נורמטיבי” - ומהי אנומליה.דוגמא - דני בן ה-3, כבר “ראה עולם” (בכל זאת, בן 3) וגם כבר יודע להתנסח ולהסיק מסקנות. יום אחד הוא רואה בסופרמרקט בפעם הראשונה אדם בכסא גלגלים - ומצביע - “שונה”. למה שונה? 
"Because he walks with wheels instead of legs." There are two things here: (1) detection of an anomaly and (2) an explanation (Evidence, Forensics), which at ThetaRay is called a Trigger Feature. On what basis did the child decide? The norm, and what he had been exposed to until now. Continuing the example: in the same supermarket is Galit, who lives in the same neighborhood, and whose family includes an uncle who uses a wheelchair. Galit passes the same spot and doesn't point. Depending on the task definition, Danny was accurate (and detected) while Galit missed (assuming the task is to find an anomaly rather than to sort into new groups); nothing "bothered" Galit.
It is important to distinguish here between Unsupervised and Ungoverned: both Galit and Danny were exposed to the world by their parents, and their learning was "governed". The parents decided on after-school activities, trips, where to live, what to watch on TV, and so on. Galit grew up in a family that shaped her ability to recognize, for instance, that using a wheelchair is something that occurs in a relatively small share of the population (again, depending on how you define which population, when, and how). Under these conditions and definitions, Danny will do better than Galit (there is a link between how the algorithm learns and how the task is defined). Philosophically there is something interesting here: Danny succeeded because he had not seen such an example before, and Galit failed because she had, which is the reverse of the earlier example (Yossi failed because he had not seen "purple" before).
Back to definitions: take an algorithm whose goal is to take a pile of data (say, offers of products for sale) and classify them (Clustering); the execution is similar, and every time something new is encountered, a new group is created. This is another example of Unsupervised Learning, called Clustering: the ability to find categories in a population that were not defined in advance (which is, in a way, a bit of "searching under the lamppost"). If we know how to define a "representative population" (an art in itself; regards to the Data Scientists guild), we can take a Clustering algorithm (which is Unsupervised) that does not know how many categories the population contains (so we have ruled out K-Means, which assumes a fixed number of categories) and run it to "find similar groups", according to defined variance thresholds or by specifying which kinds of dimensions interest us. For example: at ThetaRay they detect anomalies, and thereby crimes, on the assumption that citizens are by and large law-abiding and that crime is an anomaly. That is not always true, and every country has its own thresholds; there was a case in which
a potential customer said he had systems in a country where 40% of the population is involved in fraud (in South America! Not here, of course). That violates the base assumption, and the system would probably not be effective (it is also not an absolute majority, which might have allowed identifying the complementary group). Once anomalies have been detected, the next step is to turn them into something Actionable: for an analyst to investigate money laundering, say, you must describe to him, formally, what it is. These are the Trigger Features mentioned earlier, and they are something you must be able to explain (ultimately they are mathematical descriptions, usually defined by a Data Scientist). There is an element of Feature Engineering here: the process is ultimately Governed, and the Data Scientist serves as the "parent" in a world where the algorithms do the learning (seen Person of Interest? So, Finch).
We started with why Machine Learning is needed at all, moved on to the difference between Supervised and Unsupervised learning, and then drilled into Governed Unsupervised Learning via the dictation of Features. Are these specific definitions ("this is the text we are looking for") or general definitions of a language and tools? The solution contains no heuristics and no definitions of what is "normal": the Features are defined as dimensions of the world, and the algorithm learns them. The goal is to detect crimes, not eggplants, so the relevant information must be made accessible in a way that enables detecting anomalies optimally (relative to other objects and relative to the object itself). You also need to define what a financial transaction is, what the possibilities are, and any other information that can be surfaced (who is the person? who are their possible partners? and more). The next step is Clustering: finding similarity between the anomalies and discovering new behavior patterns nobody thought of in advance. The classification is into two groups
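The episode's notion of an anomaly plus a Trigger Feature can be sketched in a few lines of Python. This is purely illustrative (ThetaRay's actual algorithms are proprietary and far more sophisticated): flag a record whose value on some dimension sits far from the population norm, and report which dimension triggered, as the analyst's evidence. All data and thresholds below are made up.

```python
# Illustrative only -- not ThetaRay's method. Flag a record as anomalous
# when some feature is far from the population norm (high z-score), and
# report *which* feature triggered, as the analyst's evidence.

from statistics import mean, stdev

def find_anomalies(records, feature_names, z_threshold=3.0):
    """Return (record_index, trigger_feature, z_score) for each anomaly."""
    anomalies = []
    columns = list(zip(*records))
    stats = [(mean(col), stdev(col)) for col in columns]
    for i, record in enumerate(records):
        for j, value in enumerate(record):
            mu, sigma = stats[j]
            z = (value - mu) / sigma if sigma else 0.0
            if abs(z) >= z_threshold:
                anomalies.append((i, feature_names[j], round(z, 2)))
    return anomalies

# Toy transactions: (amount, transfers per day). One record is wildly off
# on "amount" -- that dimension becomes its trigger feature.
records = [(100, 2), (120, 3), (90, 2), (110, 2), (105, 3), (5000, 2)]
print(find_anomalies(records, ["amount", "transfers_per_day"], z_threshold=2.0))
```

The point of returning the feature name alongside the score is exactly the "explanation" requirement from the episode: the detection alone is not actionable until the analyst knows what triggered it.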

TechSNAP
399: Ethics in AI

TechSNAP

Play Episode Listen Later Mar 15, 2019 38:48


Machine learning promises to change many industries, but with these changes come dangerous new risks. Join Jim and Wes as they explore some of the surprising ways bias can creep in and the serious consequences of ignoring these problems.

Yesaya Software Podcast
Machine Learning imekuja, Usiachwe Nyuma

Yesaya Software Podcast

Play Episode Listen Later Feb 4, 2019 4:36


Hello there, how are things? My name is Yesaya. Welcome to this series of audio episodes, that is, a podcast. My goal is to educate you, keep you informed, and bring you various discussions about Information and Communication Technology (ICT), with a particular focus on computer systems. Today we'll look at why you shouldn't be left behind by the Machine Learning wave, and, in simple terms, how Machine Learning works. Remember, last week I gave you just a taste of Machine Learning; if you haven't heard the previous episode yet, I urge you to listen to it first so we can move forward together. For your information, in 2018 Machine Learning was regarded as one of the most talked-about technologies, and even early in 2019 it remains among the most discussed. It hasn't been many years since Machine Learning became popular and companies grew excited about its uses, yet many companies are now scrambling to use Machine Learning in delivering their services. It has become almost a core requirement for systems, much as it became essential for a company to have a website that displays well on a phone. Soon companies will need systems that can produce reports analyzed with Machine Learning to support decisions that bring real value to the business. Machine Learning helps us do human tasks better, faster, and more easily than ever before, and looking ahead, it will help us do things we could never do ourselves. Fortunately, it is no longer difficult to enjoy the fruits of Machine Learning: the tools have become good and easy to use, and all you need is data, developers, and the willingness to start. Let's take a brief look at how Machine Learning works: Machine Learning uses data to answer questions. I have simplified things here so it's easy to understand.
Now let's look, very simply, at how Machine Learning works. We must have data in order to teach the computer. Here's an example: a child touching fire for the first time will cry, and will store the memory of the flame and the pain; later, on seeing the same situation, the child won't dare touch it again. In the same way, we give the computer data containing questions and answers ("if you touch here, you will get burned"); it learns from that data, and later, when something new arrives, it can compare it against what it already has in order to answer. In disease diagnosis, for example, the computer uses test data from many patients treated over several years in various places, including their results: whether they had the disease or not. Then, after learning, when a new patient arrives, the computer can compare them against this data on every attribute, such as age, gender, height, where they come from, and so on. When it finishes, it can output a prediction of the probability that this patient has the disease or not. Remember: for Machine Learning to work well, there must be enough data to predict accurately. We have discussed just one example here, using one of the ways Machine Learning learns, namely Supervised Learning. With these few words I believe there is something you have learned, and this is one small effort to encourage, or sensitize, local developers about how we can build computer systems. Until next time, my name is Yesaya.
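The diagnosis example above can be sketched as a tiny nearest-neighbour predictor in Python (my illustration, not code from the episode): compare a new patient with past patient records and report the share of the most similar past patients who had the disease.

```python
# Toy illustration of the episode's diagnosis example: the predicted
# probability is the fraction of the k most similar past patients
# (by Euclidean distance over their attributes) who had the disease.

import math

def predict_probability(history, new_patient, k=3):
    """history: list of (features, had_disease). Returns P(disease) via k-NN."""
    ranked = sorted(history, key=lambda rec: math.dist(rec[0], new_patient))
    nearest = ranked[:k]
    return sum(1 for _, had in nearest if had) / k

# Features: (age, weight_kg) -- a deliberately tiny, made-up record set.
history = [
    ((25, 60), False), ((30, 65), False), ((60, 80), True),
    ((65, 85), True),  ((70, 90), True),  ((28, 62), False),
]
print(predict_probability(history, (63, 82)))   # resembles the older group
print(predict_probability(history, (26, 61)))   # resembles the younger group
```

As the episode stresses, the quality of the prediction depends entirely on having enough representative data; with six records this is only a shape of the idea.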

Computer Science/Software Engineering College Courses Review
B.10 - Costa Rica Big Data School (Supervised Learning methods)

Computer Science/Software Engineering College Courses Review

Play Episode Listen Later Dec 13, 2018 78:14


Bonus round! I attended the Costa Rica Big Data School, a five-day event where two speakers from the Texas Advanced Computing Center spoke about current computational subjects like object-oriented programming in Python, High Performance Computing (HPC), Hadoop, and other important technologies. Hope you guys can find valuable knowledge here!

Reversim Podcast
Summit 2018: We don't need no labels: the future of pretraining and self-supervised learning / Bar Vinograd

Reversim Podcast

Play Episode Listen Later Dec 7, 2018


SLAB. Innovation Podcast powered by iteratec
Natural Language Processing und Deep Learning bei iteratec

SLAB. Innovation Podcast powered by iteratec

Play Episode Listen Later Dec 4, 2018 19:03


Robin Otto talks about the master's thesis he wrote in the field of machine learning at iteratec.

KI2go - mit Tobias Budig // Künstliche Intelligenz zum Mitnehmen

How do machines learn? Where does the difficulty lie? Artificial intelligence, and machine learning in particular, is not witchcraft. The first episode of this short series on machine learning is about "Supervised Learning". Here, the outcome is known for every record in the dataset. From the relationship between the input data and the outcome, the goal is to find the best possible mathematical model. Outcomes can then be predicted for new, unseen inputs.
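The episode's definition can be sketched in a few lines of Python: the outcome is known for every record, we fit a simple mathematical model (here an ordinary least-squares line), and then predict outcomes for new inputs. The data is invented for illustration.

```python
# Supervised learning in miniature: known (input, outcome) pairs,
# a fitted model, then predictions for unseen inputs.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Made-up training data: hours studied -> exam score.
xs = [1, 2, 3, 4, 5]
ys = [52, 55, 61, 64, 68]
a, b = fit_line(xs, ys)
print(f"model: score = {a:.1f} * hours + {b:.1f}")
print("prediction for 6 hours:", round(a * 6 + b, 1))
```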

Herr Mies will's wissen
HMww17 – Machine Learning mit Dr. Shirin Glander

Herr Mies will's wissen

Play Episode Listen Later Jan 30, 2018 50:43


In the current episode, Dr. Shirin Glander (Twitter, homepage) gives us some insights into the topic of machine learning. We first clarify what machine learning is and what possibilities it offers, before going a bit deeper. We start with neural networks and decision trees and how they differ. Naturally, we also cover supervised learning, unsupervised learning, and reinforcement learning. The data used is important when working with machine learning: you start with test data and training data, which you can optimize for the task at hand using feature engineering. Shirin explains how she works with data and how she finds the right algorithms. R and R Studio, which are particularly well suited to statistical analysis, play an important role here. Visualizing the data is especially helpful for understanding it better, and the options for generating reports and exporting them, for example as PDF, are also compelling. If you want to use R for machine learning, you should also look at caret. Shirin also organizes MünsteR, the R users group in Münster. If you want to dig deeper into machine learning, check out Datacamp or Coursera; if you are interested in R, take a look at the R Bloggers. At the end, we also talk briefly about Deep Dreaming; you can find a suitable generator at deepdreamgenerator.com. Books on the topic: Praxiseinstieg Machine Learning mit Scikit-Learn und TensorFlow; Einführung in Machine Learning mit Python.
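The episode works in R with caret; as a language-neutral sketch of the train/test idea it discusses, here is a holdout split and accuracy check in Python for a deliberately trivial threshold classifier (all data made up).

```python
# The train/test discipline in miniature: fit only on the training split,
# then measure quality on data the model has never seen.

import random

def train_test_split(data, test_fraction=0.25, seed=7):
    shuffled = data[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def fit_threshold(train):
    """Place the boundary midway between the two classes seen in training."""
    pos = [x for x, label in train if label]
    neg = [x for x, label in train if not label]
    return (min(pos) + max(neg)) / 2

def accuracy(threshold, data):
    hits = sum(1 for x, label in data if (x >= threshold) == label)
    return hits / len(data)

data = [(x, x >= 50) for x in range(0, 100, 5)]   # separable toy data
train, test = train_test_split(data)
threshold = fit_threshold(train)
print("threshold:", threshold)
print("train accuracy:", accuracy(threshold, train))
print("test accuracy:", accuracy(threshold, test))
```

Test accuracy can be lower than train accuracy: the model only ever saw the training split, which is exactly why the held-out number is the one to trust.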

NLP Highlights
48 - Incidental Supervision: Moving Beyond Supervised Learning, with Dan Roth

NLP Highlights

Play Episode Listen Later Jan 29, 2018 27:52


AAAI 2017 paper, by Dan Roth. In this episode we have a conversation with Dan about what he means by "incidental supervision", and how it's related to ideas in reinforcement learning and representation learning. For many tasks, there are signals you can get from seemingly unrelated data that will help you in making predictions. Leveraging the international news cycle to learn transliteration models for named entities is one example of this, as is the current trend in NLP of using language models or other multi-task signals to do better representation learning for your end task. Dan argues that we need to be thinking about this more explicitly in our research, instead of learning everything "end-to-end", as we will never have enough data to learn complex tasks directly from annotations alone. https://www.semanticscholar.org/paper/Incidental-Supervision-Moving-beyond-Supervised-Le-Roth/2997dcfc6d5ffc262d57d0a26f74d091de096573

digital kompakt | Business & Digitalisierung von Startup bis Corporate
Genetic Algorithms – Coden nach dem Vorbild menschlicher DNA | Black Box: Tech #12

digital kompakt | Business & Digitalisierung von Startup bis Corporate

Play Episode Listen Later Nov 24, 2017 49:16


Are genetic algorithms the next logical step in AI research? How does evolution work for algorithms? What are the advantages and the basic idea of artificial life? In this episode of "Black Box: Tech", Joel Kaczmarek and Johannes Schaback discuss genetic algorithms with NaturalMotion founder Torsten Reil, and how insights from evolutionary research and biology can be applied to algorithms and AI research. You'll learn... 1) ...what genetic algorithms are and how they work 2) ...what the basic idea of artificial life is 3) ...how mutations and recombination work in genetic algorithms
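The mechanisms discussed in the episode can be shown on a textbook toy problem. This sketch (mine, not from the show) applies selection, single-point crossover (recombination), and per-bit mutation to OneMax, where fitness is simply the number of 1-bits in an individual.

```python
# A textbook genetic algorithm on OneMax: evolve a bitstring of all ones
# via selection, single-point crossover, and per-bit mutation.

import random

def evolve(bits=20, population_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    fitness = sum                      # OneMax: count the 1-bits
    population = [[rng.randint(0, 1) for _ in range(bits)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Selection: the fitter half become parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: population_size // 2]
        children = []
        while len(children) < population_size:
            mom, dad = rng.sample(parents, 2)
            cut = rng.randrange(1, bits)        # single-point crossover
            child = mom[:cut] + dad[cut:]
            for i in range(bits):                # per-bit mutation
                if rng.random() < 1 / bits:
                    child[i] ^= 1
            children.append(child)
        population = children
    return max(population, key=fitness)

best = evolve()
print("best individual:", best, "fitness:", sum(best))
```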

Naked Scientists, In Short Special Editions Podcast
AI learning without human guidance

Naked Scientists, In Short Special Editions Podcast

Play Episode Listen Later Oct 30, 2017 6:08


In 2016, the world champion Lee Sedol was beaten at the ancient boardgame of Go - by a machine. It was part of the AlphaGo programme, which is a series of artificially intelligent systems designed by London-based company DeepMind. AlphaGo Zero, the latest iteration of the programme, can learn to excel at the boardgame of Go without any help from humans. So what applications could AI learning independently have for our day-to-day lives? Katie Haylor spoke to computer scientist Satinder Singh from the University of Michigan, who specialises in an area within artificial intelligence called... Like this podcast? Please help us by supporting the Naked Scientists

Naked Scientists Special Editions Podcast
AI learning without human guidance

Naked Scientists Special Editions Podcast

Play Episode Listen Later Oct 29, 2017 6:08


In 2016, the world champion Lee Sedol was beaten at the ancient boardgame of Go - by a machine. It was part of the AlphaGo programme, which is a series of artificially intelligent systems designed by London-based company DeepMind. AlphaGo Zero, the latest iteration of the programme, can learn to excel at the boardgame of Go without any help from humans. So what applications could AI learning independently have for our day-to-day lives? Katie Haylor spoke to computer scientist Satinder Singh from the University of Michigan, who specialises in an area within artificial intelligence called... Like this podcast? Please help us by supporting the Naked Scientists

NLP Highlights
12 - Supervised Learning of Universal Sentence Representations from Natural Language Inference Data

NLP Highlights

Play Episode Listen Later Jun 1, 2017 19:39


Learning Machines 101
LM101-063: How to Transform a Supervised Learning Machine into a Policy Gradient Reinforcement Learning Machine

Learning Machines 101

Play Episode Listen Later Apr 19, 2017 22:04


This 63rd episode of Learning Machines 101 discusses how to build reinforcement learning machines which become smarter with experience but do not use this acquired knowledge to modify their actions and behaviors. This episode explains how to build reinforcement learning machines whose behavior evolves as the learning machines become increasingly smarter. The essential idea for the construction of such reinforcement learning machines is based upon first developing a supervised learning machine. The supervised learning machine then “guesses” the desired response and updates its parameters using its guess for the desired response! Although the reasoning seems circular, this approach in fact is a variation of the important widely used machine learning method of Expectation-Maximization. Some applications to learning to play video games, control walking robots, and developing optimal trading strategies for the stock market are briefly mentioned as well. Check us out at: www.learningmachines101.com   
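One reading of the episode's idea, sketched as a gradient bandit in Python (my interpretation; the show describes the method only at a high level): a supervised learner would be handed the desired response, while the policy-gradient learner samples its own action, treats it as the "guessed" desired response, and scales the supervised-style update by how much better than average the reward was.

```python
# Illustrative policy-gradient (REINFORCE-style) learner on a toy bandit.
# The sampled action plays the role of the "guessed" desired response.

import math, random

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def train_bandit(arm_rewards, steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    prefs = [0.0 for _ in arm_rewards]           # one preference per action
    baseline = 0.0
    for _ in range(steps):
        probs = softmax(prefs)
        action = rng.choices(range(len(prefs)), weights=probs)[0]
        reward = arm_rewards[action] + rng.gauss(0, 0.1)
        baseline += 0.05 * (reward - baseline)   # running-average baseline
        # Supervised-style gradient toward the sampled action, scaled by
        # how much better than average the reward turned out to be.
        for a in range(len(prefs)):
            target = 1.0 if a == action else 0.0
            prefs[a] += lr * (reward - baseline) * (target - probs[a])
    return softmax(prefs)

probs = train_bandit([0.2, 1.0, 0.5])
print("learned action probabilities:", [round(p, 2) for p in probs])
```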

Learning Machines 101
LM101-062: How to Transform a Supervised Learning Machine into a Value Function Reinforcement Learning Machine

Learning Machines 101

Play Episode Listen Later Mar 18, 2017 31:05


This 62nd episode of Learning Machines 101 (www.learningmachines101.com)  discusses how to design reinforcement learning machines using your knowledge of how to build supervised learning machines! Specifically, we focus on Value Function Reinforcement Learning Machines which estimate the unobservable total penalty associated with an episode when only the beginning of the episode is observable. This estimated Value Function can then be used by the learning machine to select a particular action in a given situation to minimize the total future penalties that will be received. Applications include: building your own robot, building your own automatic aircraft lander, building your own automated stock market trading system, and building your own self-driving car!!
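The value-function idea can be illustrated with a toy sketch (mine, with invented data): treat "state at the start of an episode → total penalty eventually received" as a supervised regression target, then act by choosing the successor state with the lowest estimated penalty. A per-state average stands in for the regression model.

```python
# Toy value-function reinforcement learning: fit V(state) from logged
# episode outcomes, then act greedily with respect to the estimate.

def fit_value_function(episodes):
    """episodes: list of (start_state, total_penalty). Average per state."""
    totals, counts = {}, {}
    for state, penalty in episodes:
        totals[state] = totals.get(state, 0.0) + penalty
        counts[state] = counts.get(state, 0) + 1
    return {s: totals[s] / counts[s] for s in totals}

def choose_action(value, candidate_next_states):
    """Pick the successor state with the lowest estimated total penalty."""
    return min(candidate_next_states, key=lambda s: value.get(s, float("inf")))

# Made-up logged episodes: starting on the "detour" tends to cost less.
episodes = [("highway", 10.0), ("highway", 12.0),
            ("detour", 6.0), ("detour", 8.0), ("detour", 7.0)]
value = fit_value_function(episodes)
print(value)                                         # averaged penalties
print(choose_action(value, ["highway", "detour"]))
```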

WashingTECH Tech Policy Podcast with Joe Miller
Ep. 70: Race, Genetics and Reconciliation with Alondra Nelson

WashingTECH Tech Policy Podcast with Joe Miller

Play Episode Listen Later Dec 27, 2016 21:22


Alondra Nelson (@alondra) is the Dean of Social Science at Columbia University. An interdisciplinary social scientist, she writes about the intersections of science, technology, medicine, and inequality. She is author of the award-winning book Body and Soul: The Black Panther Party and the Fight Against Medical Discrimination. Her latest book, The Social Life of DNA: Race, Reparations and Reconciliation after the Genome, was published in January. In this episode, we discussed: the meaning and importance of "racial reconciliation" and the potential for genetic research in helping to promote it; the extent to which the concept of race is based on biology as opposed to being socially-constructed; the role of DNA evidence in historical analysis; and key national priorities policymakers ought to focus on as they consider ways in which genetic research can help to advance social equality.
Resources: Columbia University Division of Social Science; The Social Life of DNA: Race, Reparations, and Reconciliation After the Genome by Alondra Nelson; Dark Matters on the Surveillance of Blackness by Simone Browne.
NEWS ROUNDUP: FCC Republican Commissioners Ajit Pai and Michael O'Rielly sent a letter to associations representing Internet Service Providers saying they plan to roll back the FCC's net neutrality rules. The FCC passed the landmark rules which state that ISPs must treat all internet traffic equally, without prioritizing their own content, in 2015. The rules were subsequently upheld by a 3-judge DC Circuit Panel. A complete reversal of the rules would take some time, since a public comment period would need to be conducted first. Ajit Pai, who is expected to serve as the interim FCC Chairman once current Chairman Wheeler resigns in January, has said the days of the net neutrality rules are quote-unquote "numbered". -- The FCC has passed new rules enabling consumers who are deaf and hard of hearing to communicate.
Previously, those who are deaf and hard of hearing had to rely on clunky, so-called teletype (TTY) devices to communicate with others. TTY devices converted tones into text and required the recipients to read on paper. Under the new rules, the FCC will now require wireless carriers and device manufacturers to enable "real time" text messaging, or RTT standard, which allows messaging recipients to see, in real time, what deaf and hard of hearing individuals are communicating. Sam Gustin has the story in Motherboard. -- Researchers at Google, UT Austin, and the Toyota Technological Institute in Chicago have devised a new way to test algorithms for biases. Examples of biases in machine learning have included computer programs that take data and target black neighborhoods, show advertisements for payday loans to African Americans and Latinos, or display executive-level jobs only to white male applicants. The approach developed by the researchers, entitled the Equality of Opportunity in Supervised Learning, would enable algorithms to determine that particular demographic groups were more likely to have particular behaviors, but would not target or exclude all individuals based on their race, ethnicity or gender, simply because some individuals within a particular sample had the behaviors. For example, if the algorithm determined that white women were in general more likely to buy wine, and then conclude that someone who bought wine was likely to be a white woman, that would be less biased than excluding non-white women from ad campaigns for white wine. Hannah Devlin has the story in The Guardian. Separately, the White House released a report warning of the dangers of Artificial Intelligence (AI) on the workforce. The report concludes AI can lead to significant economic opportunities, but have detrimental impact on millions of workers. 
-- Nokia has sued Apple for patent infringement in Germany and in a federal court in Texas, accusing Apple of not renewing some patents the mobile industry relies on, and which Nokia now relies on for profit. Apple is stating that Nokia is acting like a patent troll by extorting Apple and not licensing the patents on reasonable terms. Nate Lanxon, Ian King and Joel Rosenblatt have the story at Bloomberg. -- Two consumer groups have filed a Federal Trade Commission complaint against Google accusing it of privacy violations after the company updated its privacy policy back in June. Consumer Watchdog and the Privacy Rights Clearinghouse claim the company had its users opt-in to a privacy change in which the company allegedly merged data from several Google services without providing adequate notice. Craig Timberg has the story in the Washington Post. -- Pinterest released its diversity data, and while the company hit some of its internal hiring goals, black employment at the company remains at 2% with Hispanic employment at 4% of the company's total, tech and non-tech workforce. -- Facebook released its annual Global Government Requests report showing a 27% uptick globally in the number of government requests for user data, to over 59,000 total requests. -- Finally, HUD Secretary Julian Castro announced a major White House initiative to help students living in HUD-assisted housing to gain access to computers and the internet at home. In the partnership between HUD, New York City Mayor Bill DeBlasio, the New York City Housing Authority and T-Mobile, 5,000 families living in public housing in the Bronx will get internet connected tablets. The ConnectHome program has thus far reached 43 states, with other major partners including Google Fiber, Comcast, AT&T, Sprint, Best Buy, the Boys and Girls Club of America, PBS, and others.

Medical Education Podcasts
Value of supervised learning events in predicting doctors in difficulty - Mumtaz Patel interview

Medical Education Podcasts

Play Episode Listen Later Jun 27, 2016 21:38


Identifies the principles that inform our understanding about how SLEs can work in predicting doctors in difficulty. Read the accompanying editorial: http://onlinelibrary.wiley.com/enhanced/doi/10.1111/medu.12996

Learning Machines 101
LM101-051: How to Use Radial Basis Function Perceptron Software for Supervised Learning [Rerun]

Learning Machines 101

Play Episode Listen Later May 24, 2016 29:04


This particular podcast is a RERUN of Episode 20 and describes step by step how to download free software which can be used to make predictions using a feedforward artificial neural network whose hidden units are radial basis functions. This is essentially a nonlinear regression modeling problem. We show the performance of this nonlinear learning machine is substantially better on the test data set than the linear learning machine software presented in Episode 13. Basically, performance for the linear learning machine was about 13% because the data set was specifically designed to be unlearnable by a linear learning machine, while the performance for the nonlinear machine learning software in this episode is about 70%. Again, I'm a little disappointed that only a few people have downloaded the software and tried things out. You can download the Windows executable, Mac executable, or the MATLAB source code. It's important to actually experiment with real machine learning software if you want to learn about machine learning! Check out: www.learningmachines101.com to obtain transcripts of this podcast and download free machine learning software! Or tweet us at: @lm101talk
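The architecture the episode describes (a feedforward network with radial basis function hidden units) can be sketched independently in Python: fixed Gaussian hidden units, with only the output weights trained by plain gradient descent on a 1-D regression toy. The centers, width, and data are chosen by hand purely for illustration, not taken from the episode's software.

```python
# A tiny RBF network for nonlinear regression: Gaussian hidden units at
# fixed centers, plus a bias unit; only the output weights are trained.

import math

CENTERS = [-2, -1, 0, 1, 2]          # hidden-unit centers (hand-picked)
WIDTH = 1.0

def hidden(x):
    """Radial basis activations plus a bias unit."""
    return [math.exp(-((x - c) ** 2) / (2 * WIDTH ** 2)) for c in CENTERS] + [1.0]

def predict(weights, x):
    return sum(w * h for w, h in zip(weights, hidden(x)))

def train(xs, ys, lr=0.05, epochs=2000):
    weights = [0.0] * (len(CENTERS) + 1)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = predict(weights, x) - y
            weights = [w - lr * err * h for w, h in zip(weights, hidden(x))]
    return weights

xs = [-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2]
ys = [x * x for x in xs]             # a nonlinear target a linear model can't fit
w = train(xs, ys)
print("prediction at 0.75:", round(predict(w, 0.75), 2))  # target is 0.75**2 = 0.5625
```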

Linear Digressions
Unlabeled Supervised Learning--whaaa?

Linear Digressions

Play Episode Listen Later Jan 7, 2016 12:35


In order to do supervised learning, you need a labeled training dataset. Or do you...? Relevant links: http://www.cs.columbia.edu/~dplewis/candidacy/goldman00enhancing.pdf

Linear Digressions
A Criminally Short Introduction to Semi Supervised Learning

Linear Digressions

Play Episode Listen Later Dec 3, 2015 9:12


Because there are more interesting problems than there are labeled datasets, semi-supervised learning provides a framework for getting feedback from the environment as a proxy for labels of what's "correct." Of all the machine learning methodologies, it might also be the closest to how humans usually learn--we go through the world, getting (noisy) feedback on the choices we make and learn from the outcomes of our actions.
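One classic recipe in this family, self-training, can be sketched compactly (my illustration; the episode surveys the framework more broadly): fit on the labelled points, pseudo-label the unlabelled points the model is most confident about, add them to the training set, and repeat.

```python
# Self-training on 1-D data: a nearest-centroid "model" pseudo-labels
# only the unlabelled points it classifies with a comfortable margin.

def classify(centroids, x):
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def self_train(labelled, unlabelled, rounds=5, confidence_margin=2.0):
    labelled = dict(labelled)                    # {x: label}
    pool = set(unlabelled)
    for _ in range(rounds):
        # "Fit": per-class centroid of everything labelled so far.
        groups = {}
        for x, label in labelled.items():
            groups.setdefault(label, []).append(x)
        centroids = {lbl: sum(v) / len(v) for lbl, v in groups.items()}
        # Pseudo-label only the confidently classified points.
        adopted = set()
        for x in pool:
            dists = sorted(abs(c - x) for c in centroids.values())
            if dists[1] - dists[0] >= confidence_margin:
                labelled[x] = classify(centroids, x)
                adopted.add(x)
        if not adopted:
            break
        pool -= adopted
    return labelled

# Two 1-D clusters, one labelled seed each; the rest start unlabelled.
seeds = [(0.0, "low"), (10.0, "high")]
unlabelled = [0.5, 1.0, 1.5, 8.5, 9.0, 9.5]
print(self_train(seeds, unlabelled))
```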

Computer Science (video)
Supervised Learning of Similarity

Computer Science (video)

Play Episode Listen Later Apr 16, 2010 45:17


If you experience any technical difficulties with this video or would like to make an accessibility-related request, please send a message to digicomm@uchicago.edu. Greg Shakhnarovich delivers a lecture as part of the University of Chicago Theory Seminars hosted by the Computer Science Department.

Computer Science (audio)
Supervised Learning of Similarity

Computer Science (audio)

Play Episode Listen Later Apr 16, 2010 66:41


Greg Shakhnarovich delivers a lecture as part of the University of Chicago Theory Seminars hosted by the Computer Science Department.