How can we use eye tracking to understand the workings of the brain? What about optimizing eye-tracking performance by harnessing the power of AI? How do you create a versatile eye tracker that can be used for virtually any cognitive research need? Meet Kirill Korotaev, the trailblazing Co-Founder & CEO of Purple Gaze! He is on a mission to answer these questions and bridge the existing gaps with FOXIE. This easy-to-use, affordable, customizable, AI-powered eye-tracking system revolutionizes cognitive data collection. Kirill's journey is a remarkable fusion of entrepreneurship, cognitive science, and the boundless potential of neurotechnology. But it goes even further. Picture this: immersing yourself in the vibrant Argentine neuroscience community, presenting your vision in a language you've secretly studied for a month. That's precisely what Kirill did, surprising his local team members and forging profound connections at Universidad Torcuato Di Tella. With over five years of experience in management and marketing, Kirill brings a wealth of knowledge to the neurotech realm. He's not just a CEO; he's a global keynote speaker who envisions neurotechnology's transformative potential to enhance our lives. In this episode, we'll journey through Kirill's entrepreneurial odyssey, delve into his cognitive science background, and explore the groundbreaking strides at Purple Gaze. Join us as we navigate the convergence of technology, neuroscience, and the relentless pursuit of a brighter future. Prepare to be captivated by Kirill's insights and unshakable passion for the world of neurotechnology. Welcome to 'Neurocareers: Doing the Impossible!'
About the Podcast Guest:
Join the Purple Gaze team: https://purplegaze.io/careers
Follow Purple Gaze on social media:
YouTube: https://youtube.com/@purplegaze
X: https://x.com/purplegazing
LinkedIn: https://www.linkedin.com/company/purple-gaze/
Connect with Kirill on LinkedIn: https://www.linkedin.com/in/kirillkorotaev/
Recommended communities and content:
NeuroTechX: https://neurotechx.com
Neuromatch: https://neuromatch.io/
LessWrong: https://lesswrong.com
Deep Learning AI: https://deeplearning.ai
Alpha Signal newsletter: https://alphasignal.ai
Yannic Kilcher: https://www.youtube.com/@YannicKilcher
About the Podcast Host:
The Neurocareers podcast is brought to you by The Institute of Neuroapproaches (https://www.neuroapproaches.org/) and its founder, Milena Korostenskaja, Ph.D. (Dr. K), a neuroscience educator, research consultant, and career coach for people in neuroscience and neurotechnologies. As a professional coach with a background in the field, Dr. K understands the unique challenges and opportunities job applicants face and can provide personalized coaching and support to help you succeed. Here's what you'll get with one-on-one coaching sessions from Dr. K:
• Identification and pursuit of career goals
• Guidance on job search strategies, resume and cover letter development, and interview preparation
• Access to a network of professionals in the field of neuroscience and neurotechnologies
• Ongoing support and guidance to help you stay on track and achieve your goals
You can always schedule a free neurocareer consultation/coaching session with Dr. K at https://neuroapproaches.as.me/free-neurocareer-consultation
Subscribe to our Neurocareers Newsletter to stay on top of all our cool neurocareers news and updates: https://www.neuroapproaches.org/neurocareers-news
Yannic Kilcher, a leading ML YouTuber and DeepJudge CTO, teams up with Jon Krohn this week to delve into the open-source ML community, the technology powering Yannic's Swiss-based startup, and the significant implications of adversarial examples in ML. Tune in as they also unpack Yannic's approach to tracking ML research, future AI prospects, and his startup challenges. This episode is brought to you by Gurobi (https://gurobi.com/sds), the Decision Intelligence Leader, and by CloudWolf (https://www.cloudwolf.com/sds), the Cloud Skills platform. Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
In this episode you will learn:
• About the OpenAssistant project [03:39]
• Alignment issues in open-source vs closed-source [08:36]
• Alternative formulas vital for crafting superior LLMs [20:29]
• Strategies to foster open-source LLM ecosystems [27:07]
• Yannic's pioneering work in legal document processing at DeepJudge [31:31]
• Comprehensive overview of adversarial examples [1:04:02]
• The future AI landscape [1:18:08]
• Startup challenges [1:25:35]
Additional materials: www.superdatascience.com/733
On this episode of The Geek in Review, hosts Marlene Gebauer and Greg Lambert explore innovations in legal search with Paulina Grnarova and Yannic Kilcher, co-founders of DeepJudge. This semantic search engine for legal documents leverages proprietary AI developed by experts with backgrounds from Google and academic AI research. As PhDs from ETH Zurich, Grnarova and Kilcher recognized lawyers needed better access to institutional knowledge rather than constantly reinventing the wheel. DeepJudge moves beyond traditional keyword searches to a deeper integration of search and generative AI models like GPT-3. Partnerships provide financial support and key insights – advisors include execs from Recommind and Kira Systems, while collaborations with law firms shape real-world product capabilities. Discussing product development, Kilcher explains that connecting search to language models allows generating summaries grounded in internal data without the ethical or security risks of training individual models. Grnarova finds that the core problem of connecting users to full knowledge translates universally across firms, though she notes larger US firms devote more resources to knowledge management and data science teams. When asked about the future of AI, Grnarova expresses excitement for AI and humans enhancing each other rather than AI replacing human roles. Kilcher predicts continued growth in model scale and capability, requiring innovations to sustain rapid progress. They aim to leverage academic research and industry experience to build AI that augments, not displaces, professionals. DeepJudge stands out for its co-founder expertise and proprietary AI enabling semantic search to tap into institutional knowledge. Instead of reinventing the wheel, lawyers can find relevant precedents and background facts at their fingertips. As Kilcher states, competitive advantage lies in accumulated know-how – their technology surfaces this asset. The future of DeepJudge lies in combining search and generative models for greater insights.
Links: Contact DeepJudge: info@deepjudge.ai Contact Us: Twitter: @gebauerm or @glambert Threads: @glambertpod or @gebauerm66 Email: geekinreviewpodcast@gmail.com Music: Jerry David DeCicca
Welcome to today's AI Lawyer Talking Tech podcast, your daily download on the innovations shaping the legal landscape. We'll be reviewing the latest on AI tools, e-discovery updates, advances in contract automation, and more legal tech developments making waves. Stories covered in this episode:
• USC Launches Innovative Online Certificate Programs: Law and Government, Technology and Entrepreneurship Law (Bluefield Daily Telegraph, 10 Oct 2023)
• Episode 51: Legal Case Management Series – An Interview with Matt Bright of Intuit QuickBooks (LexBlog, 10 Oct 2023)
• Legal Tech Developers: Harness the Power of Payments Using GraphQL (LexBlog, 10 Oct 2023)
• Law Insider Joins Forces with TermScout to Offer Subscribers AI-Driven Suite of Contract Review Tools (WebWire, 10 Oct 2023)
• LexBlog Develops AI Integration to Help Legal Bloggers and Marketers Plan, Refine and Publicize their Blog Posts (LawSites, 10 Oct 2023)
• Awards Recognize The People Out Here Making Lawyering Suck Less (Above The Law, 10 Oct 2023)
• LawNext Podcast: Building A Subscription Law Practice, with Fidu Cofounder Kimberly Bennett (LawSites, 10 Oct 2023)
• Fasoo Partners with iManage to Enhance Data Security in the Legal Industry (WKRG, 10 Oct 2023)
• Clients more optimistic about AI than legal professionals, says Clio's 2023 Legal Trends Report (Illinois State Bar Association, 10 Oct 2023)
• When Culture and Strategy Clash (LawVision, 10 Oct 2023)
• Holland & Knight Data Privacy and Security Report: September 2023 (Holland & Knight, 10 Oct 2023)
• Docgility announces Integration with Microsoft Azure Open AI Service to Provide Generative AI Suggestions to Contracts (Big Country, 10 Oct 2023)
• Bridging The Gap: Aligning HR Strategies With Business Realities (Forbes.com, 10 Oct 2023)
• Google Wins Against Sonos Lawsuit Accusing Tech Giant of Infringing Multi-Room Audio Patents (Hngn.com, 10 Oct 2023)
• Paulina Grnarova and Yannic Kilcher from DeepJudge.AI: Unlocking Institutional Knowledge: How AI is Transforming Legal Search (TGIR Ep. 224) (LexBlog, 10 Oct 2023)
• UK law gives legal weight to tech underpinning web3 innovations | The Crypto Mile (AOL UK, 10 Oct 2023)
• Clio Launches Duo, an AI Powered Partner for Legal Professionals (Real Lawyers Have Blogs, 09 Oct 2023)
• Seyfarth Attorneys to Present Lexology Webinar on AI Guidance for Lawyers (Seyfarth Shaw, 09 Oct 2023)
• Clio's New AI Tools: Transforming Legal Practice and Redefining Legal Success? (LexBlog, 09 Oct 2023)
• AI-Enabled Document Platforms - Gavel Assists Legal Professionals in Generating Automated Workflows (Trend Hunter, 09 Oct 2023)
• Colorado Attorney General Upholds Commitment to Diversity, Equity, and Inclusion in Formal Legal Opinion (Gibson Dunn, 09 Oct 2023)
• Clients more optimistic about AI than legal professionals, says Clio's 2023 Legal Trends Report (ABA Journal, 09 Oct 2023)
• A(nother) California "Regulatory Takings" Case Heads to the Supreme Court (Legal Planet, 10 Oct 2023)
• Eight GDPR Questions when Adopting Generative AI (Debevoise Data Blog, 10 Oct 2023)
• Water Cooler Talk: 'The Matrix' Opens Our Eyes to Generative AI in the Workforce (GenAI-Lexology, 09 Oct 2023)
• #GLTHweek | Data leaks and cybersecurity awareness (Global Legal Tech Hub, 10 Oct 2023)
• FTC Publishes Blog Post Summarizing Consumer Concerns with AI Systems (GenAI-Lexology, 09 Oct 2023)
• Wayne Spillett, Vodafone: 'Set an Engaging Vision + Experiment' (Artificial Lawyer, 10 Oct 2023)
• Google Launches Broad Core Algorithm Update for October 2023 (Legal Marketing & Technology Blog, 10 Oct 2023)
• Biden Administration's Voluntary AI Safety Agreement (GenAI-Lexology, 05 Oct 2023)
• Preview Key Cases in the Upcoming Supreme Court Term With Justia Webinars (Legal Marketing & Technology Blog, 09 Oct 2023)
• Siri, Lower My Legal Fees! Wait, Can AI Do That? (Legaltech on Medium, 09 Oct 2023)
LawNext host Bob Ambrogi is just back from Orlando, Florida, where he attended ILTACON, the annual conference of the International Legal Technology Association. Mic in hand, he recorded a series of brief interviews with some of the legal tech companies that were in attendance. Over the next two episodes of this podcast, we will share those interviews, starting today with the legal tech startups at ILTACON. Once again this year, the conference featured a Startup Hub in the exhibit hall, where 27 startups had booths showing their products. Although Bob was not able to interview all 27, he did have the opportunity to speak briefly with a dozen of them. Today's episode features those 12 brief interviews. We speak with: Joshua Aaron, CEO and cofounder, Aiden Technologies. Anthony Seale, CEO, Legatics. Yannic Kilcher, cofounder and CTO, DeepJudge. James Ding, CEO and cofounder, DraftWise. William Dougherty, cofounder, Capacity. Horace Wu, CEO and founder, and Paul Farrell, director of customer success, Syntheia. Noah Wittman, cofounder and CEO of Priviom. Ariel Nacson, cofounder and chief customer officer, CiteRight, together with Colin LaChance, CEO, Jurisage Inc. Mat Rotenberg, cofounder and CEO, Dashboard Legal. Yohei Fujii, cofounder and CEO, BoostDraft. Floor Blindenbach, founder and CEO, Organizing4Innovation. Julien Steel, head of product, Henchman. In the next episode, we'll feature interviews with some of the established legal tech companies that were there. Thank You To Our Sponsors: This episode of LawNext is generously made possible by our sponsors. We appreciate their support and hope you will check them out. Nota, the online business banking platform designed specifically for solo and small law firms. Paradigm, home to the practice management platforms PracticePanther, Bill4Time, MerusCase and LollyLaw; the e-payments platform Headnote; and the legal accounting software TrustBooks. Universal Migrator, the easiest way to move your firm's data and documents from one app to another. Overture.Law, The first attorney-to-attorney referral platform that lets you easily generate referral fees for clients you can't serve. If you enjoy listening to LawNext, please leave us a review wherever you listen to podcasts.
The podcast all about artificial intelligence, from and with Roland Becker and Dr. Sirko Straube. Roland and Sirko talk with Yannic Kilcher about open-source AI, the Open Assistant project, and his startup DeepJudge. Yannic Kilcher is a German computer scientist and YouTuber known for his videos on artificial intelligence and technology. With his YouTube channel "Yannic Kilcher" he reaches a broad audience. Yannic holds a degree in computer science and is the founder of DeepJudge as well as a co-founder of the OpenAssistant project. His expertise and passion for the latest developments make him an important voice in the world of technology.
// Show notes:
• Yannic Kilcher: https://www.ykilcher.com/
• GPT-4chan: https://youtu.be/efPrtcLdcdM
• Yannic on YouTube: https://www.youtube.com/@YannicKilcher
• Open Assistant: https://open-assistant.io/de
• DeepJudge: https://deepjudge.ai/
// Featuring:
• Hosts: Roland Becker, Dr. Sirko Straube
• Sidekick & production: Julian Keller
• Guest: Yannic Kilcher
// About us:
• Website: https://thinkreactor.com
// Follow us:
• Instagram: https://thinkreactor.com/instagram
• Twitter: https://thinkreactor.com/twitter
• Facebook: https://thinkreactor.com/facebook
// Listen to us:
• Soundcloud: https://thinkreactor.com/soundcloud
• Apple Podcasts: https://thinkreactor.com/apple
• Google Podcasts: https://thinkreactor.com/google
• Spotify: https://thinkreactor.com/spotify
• Deezer: https://thinkreactor.com/deezer
• TuneIn: https://thinkreactor.com/tunein
• Audio Now: https://thinkreactor.com/audionow
• Stitcher: https://thinkreactor.com/stitcher
• Feed: https://thinkreactor.com/feed
Hosted on Acast. See acast.com/privacy for more information.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Building a transformer from scratch - AI safety up-skilling challenge, published by Marius Hobbhahn on October 12, 2022 on The AI Alignment Forum.
It is not always obvious whether your skills are good enough to work for one of the various AI safety and alignment organizations. There are many options to calibrate and improve your skills, including just applying to an org or talking with other people within the alignment community. One additional option is to test your skills by working on projects that are closely related to, or a building block of, the work being done in alignment orgs. By now, there are multiple curricula out there, e.g. the one by Jacob Hilton or the one by Gabriel Mukobi. One core building block of these curricula is to understand transformers in detail, and a common recommendation is to check whether you can build one from scratch. Thus, my girlfriend and I recently set ourselves the challenge to build various transformers from scratch in PyTorch. We think this was a useful exercise and want to present the challenge in more detail and share some tips and tricks. You can find our code here.
Building a transformer from scratch
The following is a suggestion on how to build a transformer from scratch and train it. There are, of course, many details we omit, but I think it covers the most important basics.
Goals
From the ground up we want to:
• Build the attention mechanism
• Build a single-head attention mechanism (see the sketch after this list for one possible shape of this building block)
• Build a multi-head attention mechanism
• Build an attention block
• Build one or multiple of a text classification transformer, BERT, or GPT. The quality of the final model doesn't have to be great, just clearly better than random.
• Train the model on a small dataset. We used the polarity dataset for binary text sentiment classification. We used the AG_NEWS dataset (PyTorch built-in) for BERT and GPT.
• Test that the model actually learned something. We looked at the first batch of the test data to see if the model predicted something plausible, and we compared the test loss of a random network with the test loss of the trained network to see if our model is better.
Bonus goals
• Visualize one attention head
• Visualize how multiple attention heads attend to the words of an arbitrary sentence
• Reproduce the grokking phenomenon (see e.g. Neel's and Tom's piece)
• Answer some of the questions in Jacob Hilton's post.
Soft rules
For this calibration challenge, we used the following rules. Note that these are "soft rules" and nobody is going to enforce them, but it's in your interest to make some rules before you start. We were:
• allowed to read papers such as Attention Is All You Need or the GPT-3 paper.
• allowed to read tutorials on attention such as The Illustrated Transformer (as long as they don't contain code snippets).
• allowed to look at tutorials on building generic models in PyTorch as long as they don't contain NLP architectures.
• allowed to watch videos such as the ones from Yannic Kilcher on NLP.
• not allowed to look at the source code of any transformer or attention mechanism before we had implemented it ourselves. In case we struggled a lot, we could take a peek after we had tried and failed to implement one building block ourselves. We found Andrej Karpathy's code helpful for the GPT implementation.
• allowed to replace a part with a PyTorch implementation once we had demonstrated that it is equivalent. For example, once we have shown that our attention mechanism produces the same output for the same input as the PyTorch attention mechanism, we can use the PyTorch code block.
• allowed to use generic PyTorch functions that are not directly related to the task. For example, we don't have to write the embedding layer, linear layer, or layer norm from scratch.
Things to look out for
Here are some suggestions on what to look out for during the project: Do I understand the tut...
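As a concrete starting point for the single-head attention goal above, here is a minimal PyTorch sketch of scaled dot-product self-attention. This is an illustrative example under stated assumptions, not the authors' code: the class name, projection layout, and dimensions are my own choices for demonstration.

```python
import math
import torch
import torch.nn as nn

class SingleHeadAttention(nn.Module):
    """Scaled dot-product self-attention for a single head (illustrative sketch)."""

    def __init__(self, embed_dim: int, head_dim: int):
        super().__init__()
        # Learned projections from the embedding space to query/key/value space.
        self.q_proj = nn.Linear(embed_dim, head_dim)
        self.k_proj = nn.Linear(embed_dim, head_dim)
        self.v_proj = nn.Linear(embed_dim, head_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, seq_len, embed_dim).
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # Attention scores, shape (batch, seq_len, seq_len), scaled by sqrt(head_dim).
        scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))
        weights = scores.softmax(dim=-1)
        # Weighted sum of values, shape (batch, seq_len, head_dim).
        return weights @ v

# Quick shape check on random input (a batch of 2 sequences, length 5, embedding dim 32).
x = torch.randn(2, 5, 32)
out = SingleHeadAttention(32, 16)(x)
print(out.shape)  # torch.Size([2, 5, 16])
```

Checking such a block against PyTorch's built-in attention for identical inputs is exactly the kind of equivalence test the post's soft rules require before swapping in the library implementation.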
Congress makes progress on federal privacy law
A bipartisan group of lawmakers in the House and Senate introduced a privacy bill last week that some observers, including the Washington Post, say faces a steep uphill battle. But the bill would require companies that use your data to collect only the data that is necessary for their businesses to function properly. It also proposes a number of other things, such as a requirement for the Federal Trade Commission to keep a database of data brokers.
Amazon may have to pay for work-from-home equipment
A federal judge denied Amazon's motion to dismiss a lawsuit brought by a California engineer who works for the company from home. The employee brought a class-action lawsuit against Amazon for failing to reimburse him for equipment and internet service required to carry out job-related duties. The case now heads to trial in California.
Family sues Meta for daughter's self-harm, eating disorder
Remember the Facebook papers? That trove of documents company whistleblower Frances Haugen released last year showing that Facebook knew it was harming the self-esteem of young girls and continued doing it anyway? Well, the family of a 19-year-old young woman is now suing Meta in the Northern District of California, saying the company turned their daughter from a bright and happy child into one who has engaged in self-harm and been hospitalized for depression.
YouTube proves you can train AI to spew out hate speech automatically by feeding it 4Chan posts
A YouTuber ran an experiment training an algorithm on typical 4Chan posts spewing hate speech. For those who don't know, 4Chan is a public bulletin board where anyone can make posts anonymously. 4Chan is known for its users' racist, sexist, and nihilistic posts. So YouTuber and AI researcher Yannic Kilcher took 3.3 million 4Chan threads and fed them into an algorithm. He then set the algorithm to start posting on 4Chan, and lo and behold, the algorithm produced vile posts of its own. The research is important because it suggests that any bad actor can set an algorithm to post fake or misleading information by simply using a single algorithm.
Republicans jump behind Elon Musk's bid for Twitter
Republicans jumped in to defend Elon Musk in his accusations that Twitter was attempting to thwart his $44 billion offer to buy the platform. Musk argued that Twitter was refusing to provide important documentation about bots on the platform. Texas AG Ken Paxton then launched an investigation into whether Twitter was using unlawful means to obstruct the deal. Twitter ended up granting Musk access to the information he requested, while arguing that Musk's accusations about bots were simply a pretext to back out of the deal. Since Musk officially announced his plan to purchase the platform back in May, conservatives have rejoined the platform in droves.
Feds to investigate Tesla's autopilot crashes
There's a reason why you keep seeing Teslas bashed apart. It's because the autopilot feature isn't working properly in many of these vehicles, and it's causing the cars to crash into stationary vehicles, including police cars. That's according to the National Highway Traffic Safety Administration, which has stepped up the probe it began conducting into Tesla's autopilot feature last year.
Today I had the pleasure of interviewing Yannic Kilcher. Yannic is a YouTuber covering state-of-the-art machine learning research topics. He has a PhD from ETH Zurich and is currently the CTO of DeepJudge, a LegalTech NLP startup. In this episode we learn about how Yannic decided on a PhD in AI, how he is able to make advanced research so digestible, and the reason why he wears sunglasses on camera. I hope you enjoy the episode; I know I enjoyed our conversation.
In this episode, I'm speaking with the one and only, Yannic Kilcher! We talk about sunglasses
In this episode of Debugged, Medha Gupta sits down with Yannic Kilcher, the CTO of DeepJudge, a law-and-technology startup in Zurich, Switzerland, and a YouTube personality. They discuss his experience founding DeepJudge and his reasons for creating easy-to-digest videos on YouTube to help others understand the latest AI research. He also discusses his interest in artificial intelligence and its potential limits as it grows in importance and use across the STEM fields; he follows that up with a short discussion of the legal and ethical implications of AI. Furthermore, Kilcher gives advice about getting started in computer science research and working toward the next big discovery in computer science. Moreover, Kilcher reveals his interest in music through his stint as a street musician and research he's done at the intersection of music and tech. Finally, we have an update on all of the latest trends in the tech world with an all-new Pop Culture Byte with host Meryl Mathew. Make sure to tune into Episode 8 of the Debugged podcast!
I'm joined by @YannicKilcher, who runs a popular machine learning/AI YouTube channel, to discuss Tesla's FSD approach, Lidar, AI humanoid robots, Tesla Dojo, and the latest advances in artificial intelligence by OpenAI, DeepMind, and others. Yannic Kilcher on Twitter: https://twitter.com/ykilcher Yannic Kilcher on YouTube: https://www.youtube.com/c/yannickilcher Yannic Kilcher analyzing Elon Musk's FSD comments on Lex Fridman: https://www.youtube.com/watch?v=rd3R_G6_UfY Timestamps: 00:00 - Introduction 00:58 - Impressions of Tesla FSD 03:00 - Fundamentals of vision and autonomous driving 06:10 - Computer mistakes vs human driving mistakes 09:05 - Lidar integrating with vision 11:50 - Pros and cons of Lidar vs vision only 13:20 - Who is in the lead for physical world AI 15:16 - Can Tesla expand into other fields of AI? 17:30 - Importance of data to physical world AI? 18:35 - Data advantage in autonomous driving? 20:16 - AI expertise vs data advantage 21:35 - Tesla Dojo 25:44 - Limits to neural nets performance 29:00 - Tracking AI improvements 31:25 - Github Copilot, DeepMind AlphaCode, OpenAI Codex 34:27 - Why is artificial intelligence important? 37:00 - Augment jobs or replace? 38:36 - Who's leading - OpenAI, DeepMind/Google, Facebook, etc. 30:20 - Nvidia thoughts 41:45 - GPT-4 expectations 32:15 - Main challenges in creating AI humanoid robots 48:30 - Conclusion
There once was a time when AI researchers could expect to read every new paper published in the field on the arXiv, but today, that's no longer the case. The recent explosion of research activity in AI has turned keeping up to date with new developments into a full-time job. Fortunately, people like YouTuber, ML PhD and sunglasses enthusiast Yannic Kilcher make it their business to distill ML news and papers into a digestible form for mortals like you and me to consume. I highly recommend his channel to any TDS podcast listeners who are interested in ML research — it's a fantastic resource, and literally the way I finally managed to understand the Attention is All You Need paper back in the day. Yannic joined me to talk about what he's learned from years of following, reporting and doing AI research, including the trends, the challenges and the opportunities that he expects are going to shape the course of AI history in coming years. --- Intro music: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc --- Chapters: - 0:00 Intro - 1:20 Yannic's path into ML - 7:25 Selecting ML news - 11:45 AI ethics → political discourse - 17:30 AI alignment - 24:15 Malicious uses - 32:10 Impacts on persona - 39:50 Bringing in human thought - 46:45 Math with big numbers - 51:05 Metrics for generalization - 58:05 The future of AI - 1:02:58 Wrap-up
Yannic is a machine learning expert through and through. By day, he is the startup CTO of his own deep learning company; by night, he runs his own YouTube channel covering the latest ML research. Ideal conditions to learn more from him about current trends in the field. We also talk about the problems and opportunities of intelligent algorithms in our everyday lives. YouTube channel: youtube.com/yannickilcher Twitter: twitter.com/ykilcher
In episode 9 of The Gradient Podcast, we interview Evan Hubinger, an AI safety researcher. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Evan is an AI safety veteran who's done research at leading AI labs like OpenAI, and whose experience also includes stints at Google, Ripple, and Yelp. He currently works at the Machine Intelligence Research Institute (MIRI) as a Research Fellow, and joined me to talk about his views on AI safety, the alignment problem, and whether humanity is likely to survive the advent of superintelligent AI. Podcast Theme: "MusicVAE: Trio 16-bar Sample #2" from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music". Get full access to The Gradient at thegradientpub.substack.com/subscribe
In episode 8 of The Gradient Podcast, we interview Yannic Kilcher, an AI researcher and educator. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Yannic graduated with his PhD from ETH Zurich's data analytics lab and is now the Chief Technology Officer of DeepJudge, a company building a next-generation, AI-powered, context-sensitive legal document processing platform. He famously produces videos on his very popular YouTube channel, which cover machine learning research papers, programming, issues of the AI community, and the broader impact of AI on society. Check out his YouTube channel here and follow him on Twitter here. Podcast Theme: "MusicVAE: Trio 16-bar Sample #2" from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music". Get full access to The Gradient at thegradientpub.substack.com/subscribe
The field of Artificial Intelligence was founded in the mid-1950s with the aim of constructing "thinking machines" - that is to say, computer systems with human-like general intelligence. Think of humanoid robots that not only look human but also act and think with intelligence equal to, and ultimately greater than, that of human beings. But in the intervening years, the field has drifted far from its ambitious old-fashioned roots. Dr. Ben Goertzel is an artificial intelligence researcher and the CEO and founder of SingularityNET, a project combining artificial intelligence and blockchain to democratize access to artificial intelligence. Ben seeks to fulfil the original ambitions of the field. He graduated with a PhD in Mathematics from Temple University in 1990. Ben's approach to AGI over many decades has been inspired by many disciplines, in particular by human cognitive psychology and computer science. To date, Ben's work has been mostly theoretically driven. Ben thinks that most of the deep learning approaches to AGI today try to model the brain. They may have a loose analogy to human neuroscience, but they have not tried to derive the details of an AGI architecture from an overall conception of what a mind is. Ben thinks that what matters for creating human-level (or greater) intelligence is having the right information-processing architecture, not the underlying mechanics via which the architecture is implemented. Ben thinks that there is a certain set of key cognitive processes and interactions that AGI systems must implement explicitly, such as working and long-term memory, deliberative and reactive processing, and perception. Biological systems tend to be messy, complex, and integrative; searching for a single "algorithm of general intelligence" is an inappropriate attempt to project the aesthetics of physics or theoretical computer science into a qualitatively different domain. TOC is in the YT show description: https://www.youtube.com/watch?v=sw8IE3MX1SY Panel: Dr. Tim Scarfe, Dr. Yannic Kilcher, Dr. Keith Duggar Artificial General Intelligence: Concept, State of the Art, and Future Prospects https://sciendo.com/abstract/journals... The General Theory of General Intelligence: A Pragmatic Patternist Perspective https://arxiv.org/abs/2103.15100
Yannic Kilcher has a Master's in CS from ETH and now he is a PhD student and researcher at ETH in the Data Analytics Lab by day and an AI YouTuber by night. Connect with me here: ✉️ My weekly email newsletter: jousef.substack.com
#machinelearning #phd #howto This video is advice for new PhD students in the field of Machine Learning in 2021 and after. The field has shifted dramatically in the last few years and navigating grad school can be very hard, especially when you're as clueless as I was when I started. The video is a personal recount of my mistakes and what I've learned from them. If you already have several published papers and know what to do, this video is not for you. However, if you are not even sure where to start, how to select a topic, or what goes in a paper, you might benefit from this video, because that's exactly how I felt. Main Takeaways: - Select niche topics rather than hype topics - Write papers that can't be rejected - Don't be discouraged by bad reviews - Take reviewing & teaching seriously - Keep up your focus - Conferences are for networking - Internships are great opportunities - Team up with complementary skills - Don't work too hard OUTLINE: 0:00 - Intro & Overview 1:25 - Thesis Topic Selection 4:25 - How To Publish Papers 5:35 - Dealing With Reviewers 6:30 - How To Be A Reviewer 7:40 - Take Teaching Seriously 8:30 - Maintain Focus 10:20 - Navigating Conferences 12:40 - Internships 13:40 - Collaborations 14:55 - Don't Forget To Enjoy Transcript: https://www.notion.so/Yannic-Kilcher-... Credits to Lanz for editing Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yann... Minds: https://www.minds.com/ykilcher Parler: https://parler.com/profile/YannicKilcher LinkedIn: https://www.linkedin.com/in/yannic-ki... BiliBili: https://space.bilibili.com/1824646584 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannick... Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Yannic Kilcher is a PhD candidate at ETH Zurich researching deep learning, structured learning, and optimization for large and high-dimensional data. He produces videos on his enormously popular YouTube channel breaking down recent ML papers. Follow Yannic on Twitter: https://twitter.com/ykilcher Check out Yannic's excellent YouTube channel: https://www.youtube.com/channel/UCZHmQk67mSJgfCCTn7xBfew Listen to the ML Street Talk podcast: https://podcasts.apple.com/us/podcast/machine-learning-street-talk/id1510472996 Every Thursday I send out the most useful things I've learned, curated specifically for the busy machine learning engineer. Sign up here: http://bitly.com/mle-newsletter Follow Charlie on Twitter: https://twitter.com/CharlieYouAI Subscribe to ML Engineered: https://mlengineered.com/listen Comments? Questions? Submit them here: http://bit.ly/mle-survey Take the Giving What We Can Pledge: https://www.givingwhatwecan.org/ Timestamps: 02:40 Yannic Kilcher 07:05 Research for his PhD thesis and plans for the future 12:05 How he produces videos for his enormously popular YouTube channel 21:50 Yannic's research process: choosing what to read and how he reads for understanding 27:30 Why ML conference peer review is broken and what a better solution looks like 45:20 On the field's obsession with state of the art 48:30 Is deep learning the future of AI? Is attention all you need? 56:10 Is AI overhyped right now? 01:01:00 Community Questions 01:13:30 Yannic flips the script and asks me about what I do 01:25:30 Rapid fire questions Links: Yannic's amazing YouTube channel: https://www.youtube.com/channel/UCZHmQk67mSJgfCCTn7xBfew Yannic's Google Scholar: https://www.notion.so/Yannic-Kilcher-e93c81f81100464399e173867815e380 Yannic's community Discord channel: https://discord.gg/4H8xxDF On the Measure of Intelligence: arXiv paper (https://arxiv.org/abs/1911.01547) and Yannic's video series (https://www.youtube.com/watch?v=3_qGrmD6iQY) How I Read a Paper: Facebook's DETR (Video Tutorial): https://www.youtube.com/watch?v=Uumd2zOOz60 An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained): https://www.youtube.com/watch?v=TrdevFK_am4 Zero to One: https://fs.blog/2014/09/peter-thiel-zero-to-one/ The Gulag Archipelago: https://www.penguin.co.uk/books/104/1049544/the-gulag-archipelago/9781784871512.html