Podcasts about Word2vec

  • 45 PODCASTS
  • 59 EPISODES
  • 43m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Apr 2, 2025 LATEST

POPULARITY

[Popularity chart by year, 2017-2024]


Best podcasts about Word2vec

Latest podcast episodes about Word2vec

Machine Learning Street Talk
Eiso Kant (CTO poolside) - Superhuman Coding Is Coming!

Machine Learning Street Talk

Play Episode Listen Later Apr 2, 2025 96:28


Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, particularly focused on software development. Their strategy of reinforcement learning from code execution feedback is an important axis for scaling AI capabilities beyond simply increasing model size or data volume. Kant predicts human-level AI in knowledge work could be achieved within 18-36 months, and outlines poolside's vision to dramatically increase software development productivity and accessibility.

SPONSOR MESSAGES: Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/

Eiso Kant: https://x.com/eisokant | https://poolside.ai/
TRANSCRIPT: https://www.dropbox.com/scl/fi/szepl6taqziyqie9wgmk9/poolside.pdf?rlkey=iqar7dcwshyrpeoz0xa76k422&dl=0

TOC:
1. Foundation Models and AI Strategy
[00:00:00] 1.1 Foundation Models and Timeline Predictions for AI Development
[00:02:55] 1.2 Poolside AI's Corporate History and Strategic Vision
[00:06:48] 1.3 Foundation Models vs Enterprise Customization Trade-offs
2. Reinforcement Learning and Model Economics
[00:15:42] 2.1 Reinforcement Learning and Code Execution Feedback Approaches
[00:22:06] 2.2 Model Economics and Experimental Optimization
3. Enterprise AI Implementation
[00:25:20] 3.1 Poolside's Enterprise Deployment Strategy and Infrastructure
[00:26:00] 3.2 Enterprise-First Business Model and Market Focus
[00:27:05] 3.3 Foundation Models and AGI Development Approach
[00:29:24] 3.4 DeepSeek Case Study and Infrastructure Requirements
4. LLM Architecture and Performance
[00:30:15] 4.1 Distributed Training and Hardware Architecture Optimization
[00:33:01] 4.2 Model Scaling Strategies and Chinchilla Optimality Trade-offs
[00:36:04] 4.3 Emergent Reasoning and Model Architecture Comparisons
[00:43:26] 4.4 Balancing Creativity and Determinism in AI Models
[00:50:01] 4.5 AI-Assisted Software Development Evolution
5. AI Systems Engineering and Scalability
[00:58:31] 5.1 Enterprise AI Productivity and Implementation Challenges
[00:58:40] 5.2 Low-Code Solutions and Enterprise Hiring Trends
[01:01:25] 5.3 Distributed Systems and Engineering Complexity
[01:01:50] 5.4 GenAI Architecture and Scalability Patterns
[01:01:55] 5.5 Scaling Limitations and Architectural Patterns in AI Code Generation
6. AI Safety and Future Capabilities
[01:06:23] 6.1 Semantic Understanding and Language Model Reasoning Approaches
[01:12:42] 6.2 Model Interpretability and Safety Considerations in AI Systems
[01:16:27] 6.3 AI vs Human Capabilities in Software Development
[01:33:45] 6.4 Enterprise Deployment and Security Architecture

CORE REFS (see shownotes for URLs/more refs):
[00:15:45] Research demonstrating how training on model-generated content leads to distribution collapse in AI models, Ilia Shumailov et al. (Key finding on synthetic data risk)
[00:20:05] Foundational paper introducing Word2Vec for computing word vector representations, Tomas Mikolov et al. (Seminal NLP technique)
[00:22:15] OpenAI O3 model's breakthrough performance on ARC Prize Challenge, OpenAI (Significant AI reasoning benchmark achievement)
[00:22:40] Seminal paper proposing a formal definition of intelligence as skill-acquisition efficiency, François Chollet (Influential AI definition/philosophy)
[00:30:30] Technical documentation of DeepSeek's V3 model architecture and capabilities, DeepSeek AI (Details on a major new model)
[00:34:30] Foundational paper establishing optimal scaling laws for LLM training, Jordan Hoffmann et al. (Key paper on LLM scaling)
[00:45:45] Seminal essay arguing that scaling computation consistently trumps human-engineered solutions in AI, Richard S. Sutton (Influential "Bitter Lesson" perspective)

Yarukinai.fm
264. AIの潮目

Yarukinai.fm

Play Episode Listen Later Mar 2, 2025 55:23


Topics discussed: opening talk; car sharing: Times Car, Mitsui Car Shares (formerly Careco); car topics: Boonboomger (Super Sentai series), why the Tokyo Metropolitan Expressway has so many brutally difficult right-side merges (entrances, exits and junctions all put the ramps on the right, which makes rear checks tough!), HiAce, Alphard (Toyota), tips for merging onto expressways; AI & development support tools: Devin (AI engineer), GitHub Copilot, Cursor (AI code editor), Claude (Anthropic), cline, Windsurf Editor by Codeium, "AI Code Agents Festival" 2025 Winter, OpenHands, Grok (X's AI), on AI and the future of work (MIT Technology Review), The Mythical Man-Month - Wikipedia, Word2vec - Wikipedia. Speakers: Mark (tetuo41), company employee; Sugai (sugaishun), company employee; Suruga (snowlong), work-from-home company president.

MLOps.community
The Impact of UX Research in the AI Space // Lauren Kaplan // #272

MLOps.community

Play Episode Listen Later Nov 13, 2024 68:19


Lauren Kaplan is a sociologist and writer. She earned her PhD in Sociology at Goethe University Frankfurt and worked as a researcher at the University of Oxford and UC Berkeley. The Impact of UX Research in the AI Space // MLOps Podcast #272 with Lauren Kaplan, Sr UX Researcher.
// Abstract
In this MLOps Community podcast episode, Demetrios and UX researcher Lauren Kaplan explore how UX research can transform AI and ML projects by aligning insights with business goals and enhancing user and developer experiences. Kaplan emphasizes the importance of stakeholder alignment, proactive communication, and interdisciplinary collaboration, especially in adapting company culture post-pandemic. They discuss UX's growing relevance in AI, challenges like bias, and the use of AI in research, underscoring the strategic value of UX in driving innovation and user satisfaction in tech.
// Bio
Lauren is a sociologist and writer. She earned her PhD in Sociology at Goethe University Frankfurt and worked as a researcher at the University of Oxford and UC Berkeley. Passionate about homelessness and AI, Lauren joined UCSF and later Meta. Lauren recently led UX research at a global AI chip startup and is currently seeking new opportunities to further her work in UX research and AI. At Meta, Lauren led UX research for 1) Privacy-Preserving ML and 2) PyTorch. Lauren has worked on NLP projects such as Word2Vec analysis of historical HIV/AIDS documents, presented at TextXD, UC Berkeley 2019. Lauren is passionate about understanding technology and advocating for the people who create and consume AI. Lauren has published over 30 peer-reviewed research articles in domains including psychology, medicine, sociology, and more.
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
Podcast on AI UX: https://open.substack.com/pub/aistudios/p/how-to-do-user-research-for-ai-products?r=7hrv8&utm_medium=ios
2024 State of AI Infra at Scale Research Report: https://ai-infrastructure.org/wp-content/uploads/2024/03/The-State-of-AI-Infrastructure-at-Scale-2024.pdf
Privacy-Preserving ML UX Public Article: https://www.ttclabs.net/research/how-to-help-people-understand-privacy-enhancing-technologies
Homelessness research and more: https://scholar.google.com/citations?user=24zqlwkAAAAJ&hl=en
Agents in Production: https://home.mlops.community/public/events/aiagentsinprod
Mk.gee Si (Bonus Track): https://open.spotify.com/track/1rukW2Wxnb3GGlY0uDWIWB?si=4d5b0987ad55444a
--------------- ✌️Connect With Us ✌️ -------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Lauren on LinkedIn: https://www.linkedin.com/in/laurenmichellekaplan?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=ios_app

Machine Learning Street Talk
Patrick Lewis (Cohere) - Retrieval Augmented Generation

Machine Learning Street Talk

Play Episode Listen Later Sep 16, 2024 73:46


Dr. Patrick Lewis, who coined the term RAG (Retrieval Augmented Generation) and now works at Cohere, discusses the evolution of language models, RAG systems, and challenges in AI evaluation. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Key topics covered: - Origins and evolution of Retrieval Augmented Generation (RAG) - Challenges in evaluating RAG systems and language models - Human-AI collaboration in research and knowledge work - Word embeddings and the progression to modern language models - Dense vs sparse retrieval methods in information retrieval The discussion also explored broader implications and applications: - Balancing faithfulness and fluency in RAG systems - User interface design for AI-augmented research tools - The journey from chemistry to AI research - Challenges in enterprise search compared to web search - The importance of data quality in training AI models Patrick Lewis: https://www.patricklewis.io/ Cohere Command Models, check them out - they are amazing for RAG! https://cohere.com/command TOC 00:00:00 1. Intro to RAG 00:05:30 2. RAG Evaluation: Poll framework & model performance 00:12:55 3. Data Quality: Cleanliness vs scale in AI training 00:15:13 4. Human-AI Collaboration: Research agents & UI design 00:22:57 5. RAG Origins: Open-domain QA to generative models 00:30:18 6. RAG Challenges: Info retrieval, tool use, faithfulness 00:42:01 7. Dense vs Sparse Retrieval: Techniques & trade-offs 00:47:02 8. RAG Applications: Grounding, attribution, hallucination prevention 00:54:04 9. UI for RAG: Human-computer interaction & model optimization 00:59:01 10. Word Embeddings: Word2Vec, GloVe, and semantic spaces 01:06:43 11. Language Model Evolution: BERT, GPT, and beyond 01:11:38 12. AI & Human Cognition: Sequential processing & chain-of-thought Refs: 1. Retrieval Augmented Generation (RAG) paper / Patrick Lewis et al. [00:27:45] https://arxiv.org/abs/2005.11401 2. LAMA (LAnguage Model Analysis) probe / Petroni et al. [00:26:35] https://arxiv.org/abs/1909.01066 3. KILT (Knowledge Intensive Language Tasks) benchmark / Petroni et al. [00:27:05] https://arxiv.org/abs/2009.02252 4. Word2Vec algorithm / Tomas Mikolov et al. [01:00:25] https://arxiv.org/abs/1301.3781 5. GloVe (Global Vectors for Word Representation) / Pennington et al. [01:04:35] https://nlp.stanford.edu/projects/glove/ 6. BERT (Bidirectional Encoder Representations from Transformers) / Devlin et al. [01:08:00] https://arxiv.org/abs/1810.04805 7. 'The Language Game' book / Nick Chater and Morten H. Christiansen [01:11:40] https://amzn.to/4grEUpG Disclaimer: This is the sixth video from our Cohere partnership. We were not told what to say in the interview. Filmed in Seattle in June 2024.
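As a rough companion to the dense-vs-sparse retrieval discussion above, here is a minimal sketch of the two retrieval styles side by side. This is my own illustration rather than anything from the episode; it assumes the rank_bm25 and sentence-transformers Python packages and the all-MiniLM-L6-v2 checkpoint.

# Hypothetical sketch contrasting sparse (BM25) and dense (embedding) retrieval
# over a toy corpus. Assumes the `rank_bm25` and `sentence-transformers` packages.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

docs = [
    "Retrieval Augmented Generation grounds a language model in retrieved passages.",
    "Word2Vec learns dense word vectors from raw text.",
    "BM25 scores documents by term frequency and inverse document frequency.",
]
query = "How does RAG ground generation in retrieved documents?"

# Sparse retrieval: exact term matching weighted by BM25.
bm25 = BM25Okapi([d.lower().split() for d in docs])
print("BM25 scores:", bm25.get_scores(query.lower().split()))

# Dense retrieval: cosine similarity between sentence embeddings.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
print("Dense scores:", util.cos_sim(query_emb, doc_emb))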

Entre Dev y Ops Podcast
EDyO 87 - IA con Roger Oriol

Entre Dev y Ops Podcast

Play Episode Listen Later Apr 25, 2024


In this AI episode of the Entre Dev y Ops podcast we talk about AI with Roger Oriol. Blog Entre Dev y Ops - https://www.entredevyops.es Telegram Entre Dev y Ops - https://t.me/entredevyops Twitter Entre Dev y Ops - https://twitter.com/entredevyops LinkedIn Entre Dev y Ops - https://www.linkedin.com/company/entredevyops/ Patreon Entre Dev y Ops - https://www.patreon.com/edyo Amazon Entre Dev y Ops - https://amzn.to/2HrlmRw Links mentioned: Podcast 76: Cómo afecta la IA a nuestro día a día - https://www.entredevyops.es/podcasts/podcast-76.html Roger Oriol's blog - https://www.ruxu.dev/ Roger Oriol's Twitter - https://twitter.com/rogiia Turing Test - https://es.wikipedia.org/wiki/Prueba_de_Turing AlexNet - https://en.wikipedia.org/wiki/AlexNet Word2Vec - https://es.wikipedia.org/wiki/Word2vec#:~:text=Word2vec%20es%20una%20t%C3%A9cnica%20para,un%20gran%20corpus%20de%20texto. Transformers - https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture) Attention is all you need - https://research.google/pubs/attention-is-all-you-need/ Paper - Attention is all you need - https://arxiv.org/abs/1706.03762 Claude, by Anthropic - https://claude.ai Gemini, by Google - https://gemini.google.com/ Mistral - https://mistral.ai/ Titan - https://aws.amazon.com/es/bedrock/titan/ HuggingFace - https://huggingface.co/ AI Emergence - https://openreview.net/pdf?id=yzkSU5zdwD Chinchilla - https://gpt3demo.com/apps/chinchilla-deepmind Mixture of Experts - https://en.wikipedia.org/wiki/Mixture_of_experts Mistral example - https://mistral.ai/news/mixtral-of-experts/ Distillation - https://en.wikipedia.org/wiki/Knowledge_distillation Ollama - https://ollama.com/ GPT 4 all - https://gpt4all.io/index.html Coursera course, DeepLearning.io Generative AI with LLMs - https://www.coursera.org/learn/generative-ai-with-llms DotCSV - https://www.youtube.com/@DotCSV Greg Kamradt - https://twitter.com/GregKamradt Greg Kamradt's YouTube - https://www.youtube.com/@DataIndependent Greg Kamradt's website - https://www.gregkamradt.com/ Langchain YouTube - https://www.youtube.com/@LangChain Llama blog - https://llama-2.ai/blog/ Phil Schmid - https://www.philschmid.de/

Random Tech Talks
Embeddings para principiantes - la explicación definitiva.

Random Tech Talks

Play Episode Listen Later Mar 15, 2024 95:00


Random Tech Talks: Episode summary. In this fascinating episode of "Random Tech Talks", we dive into the intriguing world of embeddings, an essential tool in machine learning and artificial intelligence. The topic, presented in a beginner-friendly way, reveals how these feature vectors can capture the essence of data in incredibly efficient and multifaceted ways.
What are embeddings? Embeddings are, essentially, numerical representations of complex data such as words, images or even user behavior. What makes them so powerful is their ability to preserve semantic relationships, such as the closeness between concepts or the similarity between features, in a lower-dimensional space.
How do they work? Through illustrative examples, the episode explains how algorithms like Word2Vec turn words into vectors, enabling mathematical operations that reflect real semantic relationships. For example, the famous equation "King - Man + Woman = Queen" becomes a reality in this vector space.
Applications in Retrieval-Augmented Generation (RAG): One exciting application of embeddings is in Retrieval-Augmented Generation (RAG) systems, where they are combined with language models to generate informative, contextually relevant answers. This not only improves answer accuracy in virtual assistants and search engines, but also opens the door to more natural forms of human-machine interaction.
Innovations in AI: Devin and Figure 01. The episode also briefly covered the latest innovations in AI, highlighting Devin, the first AI-powered software engineer, whose ability to write and optimize code is challenging our traditional notions of software development. It also mentioned Figure 01, the autonomous robot that has left Elon Musk's tech demos behind, showing skills and autonomy that seem straight out of science fiction, from independent navigation to complex interactions with its environment. This episode not only clarifies fundamental AI concepts for technology enthusiasts, but also illustrates the deep and growing impact of these advances on our society. The discussion of Devin and Figure 01, in particular, invites us to reflect on the future of technology and its role in redefining the limits of what is possible.
Links: Google Machine Learning Crash Course: Embeddings; Merriam-Webster: RAG; Figure 01 video. You can watch this episode on YouTube.
Source code: We include here the source code as shown on the program. Remember that you need an OpenAI key. Once you have it, save it in a file that must be named .env. Save this code in a file called embedText.js, or whatever you like :)

require('dotenv').config();
const { Configuration, OpenAIApi } = require("openai");

if (process.argv.length < 2) {
  console.log("Usage: node embedText.js ");
  process.exit(1);
}

var textToEmbed = '';
var arregloEmbeds = new Array();

const readline = require('readline').createInterface({
  input: process.stdin,
  output: process.stdout
});

function askForInput() {
  readline.question('Enter a text (or "exit" to quit): ', async (input) => {
    if (input.toLowerCase() === 'exit') {
      console.log('Goodbye.');
      readline.close();
    } else {
      console.log(`Your text: ${input}`);
      textToEmbed = input;
      const embedding = await getEmbedding(textToEmbed);
      // Compare it against what is already stored
      comparaEmbeds(embedding, textToEmbed);
      // And add it to the array
      arregloEmbeds.push({ "texto": textToEmbed, "embedding": embedding, "similitud": 0.0 });
      askForInput(); // Next embedding
    }
  });
}

// Set up the OpenAI API
const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
const openai = new OpenAIApi(configuration);

askForInput(); // Initial call

async function getEmbedding(text) {
  try {
    console.log('creating embedding');
    const response = await openai.createEmbedding({
      model: "text-embedding-ada-002", // or choose another model as needed
      input: text,
    });
    console.log(response.data.data[0].embedding);
    return response.data.data[0].embedding;
  } catch (error) {
    console.error("Error generating embedding:", error);
  }
}

async function generateResponse(text) {
  const response = await openai.createCompletion({
    model: "gpt-3.5-turbo-instruct",
    prompt: text,
    temperature: 0.7,
    max_tokens: 150,
  });
  console.log(response.data.choices[0].text.trim());
}

function comparaEmbeds(ultimoEmbedding, ultimoTexto) {
  console.log(ultimoTexto);
  // (Reconstructed from the garbled transcript: nothing to compare on the first input.)
  if (arregloEmbeds.length < 1) { return; }
  // Cosine similarity between the newest embedding and every stored one
  const similares = arregloEmbeds.map((item) => ({
    texto: item.texto,
    similitud: cosineSimilarity(item.embedding, ultimoEmbedding)
  }));
  console.log(similares);
  arregloEmbeds.forEach((item, index) => { item.similitud = similares[index].similitud; });
  arregloEmbeds.sort((a, b) => b.similitud - a.similitud);
  generateResponse(`You are an assistant that wants to answer the following question: ${ultimoTexto}, and the answer you should take into account is ${arregloEmbeds[0].texto}. Write a text of at most 100 words kindly giving this answer.`);
}

function cosineSimilarity(vecA, vecB) {
  let dotProduct = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < vecA.length; i++) {
    dotProduct += vecA[i] * vecB[i];
    normA += vecA[i] ** 2;
    normB += vecB[i] ** 2;
  }
  return dotProduct / (Math.sqrt(normA) * Math.sqrt(normB));
}

Invitation: We invite you to listen to this episode and join us on our social networks, above all on our Facebook page, which is at this link. This and all other episodes of Random Tech Talks can be heard at rtt.show and on every podcast platform (if you find one we're not on, let us know!). If you liked this episode, don't forget to subscribe and recommend us to friends and enemies, and above all to everyone you think would benefit from knowing a bit more about technology, discussed at a more down-to-earth level.
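As a companion to the show's JavaScript example above, here is a minimal Python sketch of the "King - Man + Woman ≈ Queen" analogy arithmetic that the summary mentions. It is my own illustration, not code from the episode; it assumes the gensim library and its downloadable glove-wiki-gigaword-50 vectors.

# Hypothetical sketch (not from the episode): word-vector analogy arithmetic
# with gensim's pretrained GloVe vectors. Assumes `gensim` is installed and
# the "glove-wiki-gigaword-50" vectors can be downloaded.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # KeyedVectors, 50-dim GloVe

# "king - man + woman" should land near "queen" in the embedding space.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Cosine similarity between two related words, for comparison.
print(vectors.similarity("king", "queen"))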

The top AI news from the past week, every ThursdAI

Holy SH*T, these two words have been said on this episode multiple times, way more than ever before I want to say, and it's because we got two incredibly exciting breaking news announcements in a very, very short amount of time (in the span of 3 hours), and the OpenAI announcement came as we were recording the space, so you'll get to hear our live reaction to this insanity. We also had 3 deep-dives, which I am posting on this week's episode: we chatted with Yi Tay and Max Bane from Reka, which trained and released a few new foundational multimodal models this week, and with Dome and Pablo from Stability, who released a new diffusion model called Stable Cascade. And finally we had a great time hanging with Swyx (from Latent Space) and finally got a chance to turn the microphone back at him, and had a conversation about Swyx's background, Latent Space, and AI Engineer. I was also very happy to be in SF today of all days, as my day is not over yet; there's still an event which we co-host together with A16Z, folks from Nous Research, Ollama and a bunch of other great folks, just look at all these logos! Open Source FTW

Hacker News Recap
December 18th, 2023 | Figma and Adobe abandon proposed merger

Hacker News Recap

Play Episode Listen Later Dec 19, 2023 18:09


This is a recap of the top 10 posts on Hacker News on December 18th, 2023. This podcast was generated by wondercraft.ai.
(00:35): Figma and Adobe abandon proposed merger - Original post: https://news.ycombinator.com/item?id=38681861&utm_source=wondercraft_ai
(02:10): Wasm3 entering a minimal maintenance phase - Original post: https://news.ycombinator.com/item?id=38681672&utm_source=wondercraft_ai
(03:48): "I just bought a 2024 Chevy Tahoe for $1" - Original post: https://news.ycombinator.com/item?id=38681450&utm_source=wondercraft_ai
(05:13): Word2Vec received 'strong reject' four times at ICLR2013 - Original post: https://news.ycombinator.com/item?id=38684925&utm_source=wondercraft_ai
(06:47): VW is putting buttons back in cars - Original post: https://news.ycombinator.com/item?id=38686967&utm_source=wondercraft_ai
(08:33): Progress toward a GCC-based Rust compiler - Original post: https://news.ycombinator.com/item?id=38684102&utm_source=wondercraft_ai
(10:33): The "Cheap" Web - Original post: https://news.ycombinator.com/item?id=38681437&utm_source=wondercraft_ai
(12:34): Unbricking my MacBook took an email to Tim Cook - Original post: https://news.ycombinator.com/item?id=38691025&utm_source=wondercraft_ai
(14:20): 3Blue1Brown Calculus Blog Series - Original post: https://news.ycombinator.com/item?id=38687809&utm_source=wondercraft_ai
(16:03): This year in Servo: over 1000 pull requests and beyond - Original post: https://news.ycombinator.com/item?id=38681463&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

Data Science Interview Prep
Replay: Word2Vec

Data Science Interview Prep

Play Episode Listen Later Dec 15, 2023 8:42


Check out one of our popular past episodes on the classic, Word2Vec! Want to support us? Become a premium subscriber to The Data Science Interview Prep Podcast: https://podcasters.spotify.com/pod/show/data-science-interview/subscribe

The top AI news from the past week, every ThursdAI

ThursdAI October 26th. Timestamps and full transcript for your convenience:
[00:00:00] Intro and brief updates
[00:02:00] Interview with Bo Weng, author of Jina Embeddings V2
[00:33:40] Hugging Face open sourcing a fast Text Embeddings
[00:36:52] Data Provenance Initiative at dataprovenance.org
[00:39:27] LocalLLama effort to compare 39 open source LLMs +
[00:53:13] Gradio Interview with Abubakar, Xenova, Yuichiro
[00:56:13] Gradio effects on the open source LLM ecosystem
[01:02:23] Gradio local URL via Gradio Proxy
[01:07:10] Local inference on device with Gradio-Lite
[01:14:02] Transformers.js integration with Gradio-Lite
[01:28:00] Recap and bye bye
Hey everyone, welcome to ThursdAI, this is Alex Volkov, I'm very happy to bring you another weekly installment of

The Voice of Insurance
Special Ep Arun Balakrishnan CEO Xceedence: AI and Insurance - everything you need to know

The Voice of Insurance

Play Episode Listen Later Sep 1, 2023 43:17


Today's guest is an insurance technology entrepreneur with a great story to tell. Arun Balakrishnan started his career at sea but went back to business school, became an internet entrepreneur and ended up captaining Berkshire Hathaway's foray into the Indian insurance market. Ten years ago he founded insurance technology firm Xceedance, and the growth has been rapid. But that's just a bit of background, because the main purpose of today's podcast is to talk about Artificial Intelligence (AI) and in particular the generative AI that has exploded onto the world's consciousness in the past six months. When something has been this hyped, having someone like Arun on the show is an absolute godsend. Arun is a great explainer and first helps educate me about what the terms bandied around in AI really mean. Then we start to get to work on decoding what the best applications are going to be in the insurance world. This is fascinating stuff and we soon get down to the fundamentals of what machines and humans are really best at. I won't spoil anything by saying that the humans in insurance really shouldn't worry about being made redundant by this new technology – there are some things that AI can't do well, and even if it could, we probably wouldn't want it to do them for us. This is far more about improved accuracy and vastly increased productivity. Arun says we should think of it a bit like being allocated a smart intern, apprentice or an indefatigable underwriting assistant. Arun is a great teacher and I can highly commend this episode to anyone feeling bewildered or daunted as to how to start to engage with this exciting new technological development. NOTES, ABBREVIATIONS AND FURTHER READING: Arun mentions Word2Vec, which comes from 'word to vector', a technique whereby words are transformed into a numerical representation – i.e. a vector – and is at the heart of the development of this new form of Artificial Intelligence. Another concept mentioned and worth reading up about further is Zero-Shot classification, which is all about creating a tool that can do a job that it hasn't been specifically trained to do. An example might be for the AI to have learned a lot about football and then use this insight to classify an article about basketball, upon which it has never been trained. LINKS & CONTACT: https://xceedance.com/ As Arun said, you can contact him directly on arun@xceedance.com
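As a rough illustration of the zero-shot classification idea described above (my own sketch, not something from the episode), here is what it can look like with the Hugging Face transformers pipeline, assuming the facebook/bart-large-mnli checkpoint:

# Hypothetical sketch of zero-shot classification: label a basketball sentence
# with candidate labels the model was never specifically trained on.
# Assumes the `transformers` library and the facebook/bart-large-mnli checkpoint.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "The point guard sank a three-pointer at the buzzer to win the game."
labels = ["basketball", "football", "insurance underwriting", "cooking"]

result = classifier(text, candidate_labels=labels)
print(list(zip(result["labels"], [round(s, 3) for s in result["scores"]])))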

Real-Time Analytics with Tim Berglund
ChatGPT AI, Semantic Search, and Vector Databases with Ken Krugler | Ep. 18

Real-Time Analytics with Tim Berglund

Play Episode Listen Later Aug 7, 2023 34:10


Follow: https://stree.ai/podcast | Sub: https://stree.ai/sub | New episodes every Monday! On this week's episode, Tim chats with Ken Krugler about the popularity of vector databases and generative AI such as ChatGPT-4, then explores Ken's work with Word2vec and the challenge of fast vector searches in advertising. Ken shares some fascinating insights into semantic search and the mechanics of working with large data sets. The conversation concludes with an appreciation for the depth and creativity that AI can offer, demonstrated by an interesting experiment Ken conducts with summarizing a philosophical paper using different character voices, like a surfer dude and a Jesuit priest.
Hierarchical Navigable Small World (HNSW): https://towardsdatascience.com/similarity-search-part-4-hierarchical-navigable-small-world-hnsw-2aad4fe87d37?gi=ea38f97d58f7
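For readers curious what the fast vector search behind the HNSW article linked above looks like in code, here is a minimal sketch. It is my own illustration, not from the episode, and assumes the hnswlib Python package, with random vectors standing in for real embeddings.

# Hypothetical sketch of approximate nearest-neighbour search with an HNSW index.
# Assumes the `hnswlib` package; vectors here are random stand-ins for embeddings.
import numpy as np
import hnswlib

dim, num_items = 128, 10_000
data = np.random.rand(num_items, dim).astype(np.float32)

index = hnswlib.Index(space="cosine", dim=dim)   # HNSW graph over cosine distance
index.init_index(max_elements=num_items, ef_construction=200, M=16)
index.add_items(data, np.arange(num_items))
index.set_ef(50)                                 # query-time accuracy/speed trade-off

query = np.random.rand(1, dim).astype(np.float32)
labels, distances = index.knn_query(query, k=5)  # ids and distances of the 5 nearest items
print(labels, distances)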

The Nonlinear Library
LW - Mech Interp Puzzle 2: Word2Vec Style Embeddings by Neel Nanda

The Nonlinear Library

Play Episode Listen Later Jul 28, 2023 3:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mech Interp Puzzle 2: Word2Vec Style Embeddings, published by Neel Nanda on July 28, 2023 on LessWrong.
Code can be found here. No prior knowledge of mech interp or language models is required to engage with this. Language model embeddings are basically a massive lookup table. The model "knows" a vocabulary of 50,000 tokens, and each one has a separate learned embedding vector. But these embeddings turn out to contain a shocking amount of structure! Notably, it's often linear structure, aka word2vec style structure. Word2Vec is a famous result (in old school language models, back in 2013!), that 'man - woman == king - queen'. Rather than being a black box lookup table, the embedded words were broken down into independent variables, "gender" and "royalty". Each variable gets its own direction, and the embedded word is seemingly the sum of its variables. One of the more striking examples of this I've found is a "number of characters per token" direction - if you do a simple linear regression mapping each token to the number of characters in it, this can be very cleanly recovered! (If you filter out ridiculous tokens, like 19979: 512 spaces). Notably, this is a numerical feature not a categorical feature - to go from 3 tokens to four, or four to five, you just add this direction! This is in contrast to the model just learning to cluster tokens of length 3, of length 4, etc.
Question 2.1: Why do you think the model cares about the "number of characters" feature? And why is it useful to store it as a single linear direction?
There's tons more features to be uncovered! There's all kinds of fundamental syntax-level binary features that are represented strongly, such as "begins with a space".
Question 2.2: Why is "begins with a space" an incredibly important feature for a language model to represent? (Playing around a tokenizer may be useful for building intuition here)
You can even find some real word2vec style relationships between pairs of tokens! This is hard to properly search for, because most interesting entities are multiple tokens. One nice example of meaningful single token entities is common countries and capitals (idea borrowed from Merullo et al). If you take the average embedding difference for single token countries and capitals, this explains 18.58% of the variance of unseen countries! (0.25% is what I get for a randomly chosen vector). Caveats: This isn't quite the level we'd expect for real word2vec (which should be closer to 100%), and cosine sim only tracks that the direction matters, not what its magnitude is (while word2vec should be constant magnitude, as it's additive). My intuition is that models think more in terms of meaningful directions though, and that the exact magnitude isn't super important for a binary variable.
Question 2.3: A practical challenge: What other features can you find in the embedding? Here's the colab notebook I generated the above graphs from, it should be pretty plug and play. The three sections should give examples for looking for numerical variables (number of chars), categorical variables (begins with space) and relationships (country to capital). Here's some ideas - I encourage you to spend time brainstorming your own!
- Is a number
- How frequent is it? (Use pile-10k to get frequency data for the pile)
- Is all caps
- Is the first token of a common multi-token word
- Is a first name
- Is a function word (the, a, of, etc)
- Is a punctuation character
- Is unusually common in German (or language of your choice)
- The indentation level in code
- Relationships between common English words and their French translations
- Relationships between the male and female version of a word
Please share your thoughts and findings in the comments! (Please wrap them in spoiler tags) Thanks for listening. To help us out with The Nonlinear Library or to learn mo...
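To make the "number of characters per token" probe concrete, here is a rough sketch of the linear regression described above. It is my own illustration with hypothetical variable names (W_E, tokens), not Neel Nanda's notebook, and it assumes you already have an embedding matrix and token strings from whichever model you are probing; scikit-learn and NumPy do the regression.

# Hypothetical sketch: recover a "characters per token" direction from an
# embedding matrix W_E of shape (vocab_size, d_model). `tokens` is the list of
# token strings; both are assumed to come from whatever model you are probing.
import numpy as np
from sklearn.linear_model import LinearRegression

def char_count_direction(W_E, tokens):
    lengths = np.array([len(t) for t in tokens])
    keep = lengths < 20                        # drop pathological tokens (e.g. runs of spaces)
    reg = LinearRegression().fit(W_E[keep], lengths[keep])
    r2 = reg.score(W_E[keep], lengths[keep])   # how cleanly the feature is linearly recoverable
    return reg.coef_, r2                       # coef_ is the candidate linear direction

# Example call with random stand-ins (so r2 will be near 0 here):
# W_E = np.random.randn(50_000, 768); tokens = [f"tok{i}" for i in range(50_000)]
# direction, r2 = char_count_direction(W_E, tokens)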

The Nonlinear Library
AF - Mech Interp Puzzle 2: Word2Vec Style Embeddings by Neel Nanda

The Nonlinear Library

Play Episode Listen Later Jul 28, 2023 3:53


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mech Interp Puzzle 2: Word2Vec Style Embeddings, published by Neel Nanda on July 28, 2023 on The AI Alignment Forum.
Code can be found here. No prior knowledge of mech interp or language models is required to engage with this. Language model embeddings are basically a massive lookup table. The model "knows" a vocabulary of 50,000 tokens, and each one has a separate learned embedding vector. But these embeddings turn out to contain a shocking amount of structure! Notably, it's often linear structure, aka word2vec style structure. Word2Vec is a famous result (in old school language models, back in 2013!), that 'man - woman == king - queen'. Rather than being a black box lookup table, the embedded words were broken down into independent variables, "gender" and "royalty". Each variable gets its own direction, and the embedded word is seemingly the sum of its variables. One of the more striking examples of this I've found is a "number of characters per token" direction - if you do a simple linear regression mapping each token to the number of characters in it, this can be very cleanly recovered! (If you filter out ridiculous tokens, like 19979: 512 spaces). Notably, this is a numerical feature not a categorical feature - to go from 3 tokens to four, or four to five, you just add this direction! This is in contrast to the model just learning to cluster tokens of length 3, of length 4, etc.
Question 2.1: Why do you think the model cares about the "number of characters" feature? And why is it useful to store it as a single linear direction?
There's tons more features to be uncovered! There's all kinds of fundamental syntax-level binary features that are represented strongly, such as "begins with a space".
Question 2.2: Why is "begins with a space" an incredibly important feature for a language model to represent? (Playing around a tokenizer may be useful for building intuition here)
You can even find some real word2vec style relationships between pairs of tokens! This is hard to properly search for, because most interesting entities are multiple tokens. One nice example of meaningful single token entities is common countries and capitals (idea borrowed from Merullo et al). If you take the average embedding difference for single token countries and capitals, this explains 18.58% of the variance of unseen countries! (0.25% is what I get for a randomly chosen vector). Caveats: This isn't quite the level we'd expect for real word2vec (which should be closer to 100%), and cosine sim only tracks that the direction matters, not what its magnitude is (while word2vec should be constant magnitude, as it's additive). My intuition is that models think more in terms of meaningful directions though, and that the exact magnitude isn't super important for a binary variable.
Question 2.3: A practical challenge: What other features can you find in the embedding? Here's the colab notebook I generated the above graphs from, it should be pretty plug and play. The three sections should give examples for looking for numerical variables (number of chars), categorical variables (begins with space) and relationships (country to capital). Here's some ideas - I encourage you to spend time brainstorming your own!
- Is a number
- How frequent is it? (Use pile-10k to get frequency data for the pile)
- Is all caps
- Is the first token of a common multi-token word
- Is a first name
- Is a function word (the, a, of, etc)
- Is a punctuation character
- Is unusually common in German (or language of your choice)
- The indentation level in code
- Relationships between common English words and their French translations
- Relationships between the male and female version of a word
Please share your thoughts and findings in the comments! (Please wrap them in spoiler tags) Thanks for listening. To help us out with The Nonlinear Library o...

The Nonlinear Library: LessWrong
LW - Mech Interp Puzzle 2: Word2Vec Style Embeddings by Neel Nanda

The Nonlinear Library: LessWrong

Play Episode Listen Later Jul 28, 2023 3:52


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mech Interp Puzzle 2: Word2Vec Style Embeddings, published by Neel Nanda on July 28, 2023 on LessWrong.
Code can be found here. No prior knowledge of mech interp or language models is required to engage with this. Language model embeddings are basically a massive lookup table. The model "knows" a vocabulary of 50,000 tokens, and each one has a separate learned embedding vector. But these embeddings turn out to contain a shocking amount of structure! Notably, it's often linear structure, aka word2vec style structure. Word2Vec is a famous result (in old school language models, back in 2013!), that 'man - woman == king - queen'. Rather than being a black box lookup table, the embedded words were broken down into independent variables, "gender" and "royalty". Each variable gets its own direction, and the embedded word is seemingly the sum of its variables. One of the more striking examples of this I've found is a "number of characters per token" direction - if you do a simple linear regression mapping each token to the number of characters in it, this can be very cleanly recovered! (If you filter out ridiculous tokens, like 19979: 512 spaces). Notably, this is a numerical feature not a categorical feature - to go from 3 tokens to four, or four to five, you just add this direction! This is in contrast to the model just learning to cluster tokens of length 3, of length 4, etc.
Question 2.1: Why do you think the model cares about the "number of characters" feature? And why is it useful to store it as a single linear direction?
There's tons more features to be uncovered! There's all kinds of fundamental syntax-level binary features that are represented strongly, such as "begins with a space".
Question 2.2: Why is "begins with a space" an incredibly important feature for a language model to represent? (Playing around a tokenizer may be useful for building intuition here)
You can even find some real word2vec style relationships between pairs of tokens! This is hard to properly search for, because most interesting entities are multiple tokens. One nice example of meaningful single token entities is common countries and capitals (idea borrowed from Merullo et al). If you take the average embedding difference for single token countries and capitals, this explains 18.58% of the variance of unseen countries! (0.25% is what I get for a randomly chosen vector). Caveats: This isn't quite the level we'd expect for real word2vec (which should be closer to 100%), and cosine sim only tracks that the direction matters, not what its magnitude is (while word2vec should be constant magnitude, as it's additive). My intuition is that models think more in terms of meaningful directions though, and that the exact magnitude isn't super important for a binary variable.
Question 2.3: A practical challenge: What other features can you find in the embedding? Here's the colab notebook I generated the above graphs from, it should be pretty plug and play. The three sections should give examples for looking for numerical variables (number of chars), categorical variables (begins with space) and relationships (country to capital). Here's some ideas - I encourage you to spend time brainstorming your own!
- Is a number
- How frequent is it? (Use pile-10k to get frequency data for the pile)
- Is all caps
- Is the first token of a common multi-token word
- Is a first name
- Is a function word (the, a, of, etc)
- Is a punctuation character
- Is unusually common in German (or language of your choice)
- The indentation level in code
- Relationships between common English words and their French translations
- Relationships between the male and female version of a word
Please share your thoughts and findings in the comments! (Please wrap them in spoiler tags) Thanks for listening. To help us out with The Nonlinear Library or to learn mo...

Gresham College Lectures
AI in Business

Gresham College Lectures

Play Episode Listen Later Jun 1, 2023 61:30 Transcription Available


AI is another major technological innovation. AI needs data, or more precisely, big organized data. Most data processing is about making it useful for automatic systems such as machine learning, deep learning, and other AI systems. But one big problem with AI systems is that they lack context. An AI system is a pattern recognition machine devoid of any understanding of how the world works. This lecture discusses how AI systems are used in business and their limitations. A lecture by Raghavendra Rau recorded on 22 May 2023 at Barnard's Inn Hall, London. The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/ai-business
Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/
Website: https://gresham.ac.uk
Twitter: https://twitter.com/greshamcollege
Facebook: https://facebook.com/greshamcollege
Instagram: https://instagram.com/greshamcollege
Support the show

Tech Café
Dossier : L'IA, comment ça marche ? (partie 2/2)

Tech Café

Play Episode Listen Later May 5, 2023 75:53


Electronic components and hardware, semiconductors: discover all the offers from our partner Farnell France at fr.farnell.com. In this feature, Guillaume Poggiaspalla explains how artificial intelligence algorithms work. With images that are easy to picture, he walks us through the arrival of transformers. ❤️ Patreon

Michal Hubík Podcast
Budoucnost AI a ChatGPT? Terminátoři jsou sci-fi mediální masáž. | Tomáš Mikolov #41

Michal Hubík Podcast

Play Episode Listen Later Apr 6, 2023 111:18


Tomáš Mikolov is a Czech scientist and engineer working in machine learning, above all on methods for representing language with vectors, known as word embeddings. Mikolov is best known for his work at Google, where he developed natural language processing technologies, and for creating the Word2vec algorithm. With Tomáš we discussed current issues in AI, life in America, media disinformation, and the moves he has made in his career.
–
Tomáš's links: https://www.linkedin.com/in/tomas-mikolov-59831188/
–
You can support the Michal Hubík Podcast by shopping via this link: https://aktin.cz/mhp Under the Vilgain brand we release the best products at prices accessible to everyone. We want all people to be able to buy truly high-quality food whose use contributes to their mental and physical development.
–
Timestamps:
00:00:10 The AI explosion
00:01:27 Panic
00:04:01 Media scaremongering
00:07:37 GPT tools
00:09:02 Masters of disinformation
00:11:03 Fear of artificial intelligence
00:14:09 Halting AI development?
00:16:25 Elon Musk
00:19:38 The ChatGPT ban in Italy
00:21:23 Large language models
00:29:41 The "T.M." breakthrough
00:33:07 Sources + American society
00:38:08 San Francisco
00:40:38 Career
00:43:25 Working for Google Brain and life in America
00:50:08 Word2vec
00:52:31 Advertisement
00:55:28 Facebook/FastText
00:59:27 Zuckerberg persuading him to join Facebook
01:05:33 Working at Facebook / New York
01:10:59 "Not only" American culture
01:12:56 Morality in science
01:14:40 Greta Thunberg as a mere figurehead?
01:17:08 Consuming and following information
01:20:02 Does AI have consciousness?
01:25:09 Suppressing creativity
01:31:50 The goal of artificial intelligence
01:39:29 Uses of AI
01:45:45 Hope for small companies
01:50:35 Conclusion
–
Podcast channels:
TikTok https://www.tiktok.com/@michalhubikpodcast
Instagram https://www.instagram.com/michalhubikpodcast/
YouTube https://www.youtube.com/channel/UCyH2312UHGZVQ5q1c-lvHuQ
LinkedIn https://www.linkedin.com/company/michal-hub%C3%ADk-podcast/
My channels:
Instagram https://www.instagram.com/michalhubik
LinkedIn https://www.linkedin.com/in/michalhubik
You can also find all episodes of the podcast here:
–
Apple Podcasts https://podcasts.apple.com/cz/podcast/michal-hub%C3%ADk-podcast/id1603599256
Spotify https://open.spotify.com/show/0RJOV7fAbJYXQbHxgEfQxf?si=f9cb25025ca249d5
Google Podcasts https://podcasts.google.com/feed/aHR0cHM6Ly

Data Science Interview Prep

Word2Vec is a must-know if you're interested in Natural Language Processing (NLP) or preparing for any entry-level NLP roles. If you find our episodes helpful, we would really appreciate it if you would consider becoming a paid member of our channel to support our growth.

Software Engineering Radio - The Podcast for Professional Software Developers

Host Kanchan Shringi speaks with Venky Naganathan, Sr. Director of Engineering at Conga specializing in Artificial Intelligence and Chatbots, about the Conversational UI paradigm for Enterprise Apps as well as the enablers and business use cases suited...

Datacast
Episode 66: Monitoring Models in Production with Emeli Dral

Datacast

Play Episode Listen Later Jun 9, 2021 46:16


Show Notes:
(02:07) Emeli shared her educational background getting degrees in Applied Mathematics and Informatics from the Peoples' Friendship University of Russia in the early 2010s.
(04:24) Emeli went over her experience getting a Master's Degree at Yandex School of Data Analysis.
(07:06) Emeli reflected on lessons learned from her first job out of university working as a Software Developer at Rambler, one of the biggest Russian web portals.
(09:33) Emeli walked over her first year as a Data Scientist developing e-commerce recommendation systems at Yandex.
(13:38) Emeli discussed core projects accomplished as the Chief Data Scientist at Yandex Data Factory, Yandex's end-to-end data platform.
(17:52) Emeli shared her learnings transitioning from an IC to a manager role.
(19:21) Emeli mentioned key components of success for industrial AI, given her time as the co-founder and Chief Data Scientist at Mechanica AI.
(22:40) Emeli dissected the makings of her Coursera specializations — “Machine Learning and Data Analysis” and “Big Data Essentials.”
(26:14) Emeli discussed her teaching activities at Moscow Institute of Physics and Technology, Yandex School of Data Analysis, Harbour.Space, and Graduate School of Management — St. Petersburg State University.
(30:12) Emeli shared the story behind the founding of Evidently AI, which is building a human interface to machine learning, so that companies can trust, monitor, and improve the performance of their AI solutions.
(32:32) Emeli explained the concept of model monitoring and exposed the monitoring gap in the enterprise (read Part 1 and Part 2 of the Monitoring series).
(34:13) Emeli looked at possible data quality and integrity issues while proposing how to track them (read Part 3, Part 4, and Part 5 of the Monitoring series).
(36:47) Emeli revealed the pros and cons of building an open-source product.
(39:13) Emeli talked about prioritizing product roadmap for Evidently AI.
(41:24) Emeli described the data community in Moscow.
(42:03) Closing segment.
Emeli's Contact Info: LinkedIn, Twitter, Coursera, GitHub, Medium
Evidently AI's Resources: Website, Twitter, LinkedIn, GitHub, Documentation
Mentioned Content
Blog Posts:
ML Monitoring, Part 1: What Is It and How It Differs? (Aug 2020)
ML Monitoring, Part 2: Who Should Care and What We Are Missing? (Aug 2020)
ML Monitoring, Part 3: What Can Go Wrong With Your Data? (Sep 2020)
ML Monitoring, Part 4: How To Track Data Quality and Data Integrity? (Oct 2020)
ML Monitoring, Part 5: Why Should You Care About Data And Concept Drift? (Nov 2020)
ML Monitoring, Part 6: Can You Build a Machine Learning Model to Monitor Another Model? (April 2021)
Courses: “Machine Learning and Data Analysis”, “Big Data Essentials”
People: Yann LeCun (Professor at NYU, Chief AI Scientist at Facebook), Tomas Mikolov (the creator of Word2Vec, ex-scientist at Google and Facebook), Andrew Ng (Professor at Stanford, Co-Founder of Google Brain, Coursera, and Landing AI, Ex-Chief Scientist at Baidu)
Book: “The Elements of Statistical Learning” (by Trevor Hastie, Robert Tibshirani, and Jerome Friedman)
New Updates: Since the podcast was recorded, a lot has happened at Evidently! You can use this open-source tool (https://github.com/evidentlyai/evidently) to generate a variety of interactive reports on the ML model performance and integrate it into your pipelines using JSON profiles. This monitoring tutorial is a great showcase of what can go wrong with your models in production and how to keep an eye on them: https://evidentlyai.com/blog/tutorial-1-model-analytics-in-production
About The Show: Datacast features long-form conversations with practitioners and researchers in the data community to walk through their professional journey and unpack the lessons learned along the way. I invite guests coming from a wide range of career paths - from scientists and analysts to founders and investors — to analyze the case for using data in the real world and extract their mental models (“the WHY”) behind their pursuits. Hopefully, these conversations can serve as valuable tools for early-stage data professionals as they navigate their own careers in the exciting data universe.
Datacast is produced and edited by James Le. Get in touch with feedback or guest suggestions by emailing khanhle.1013@gmail.com.
Subscribe by searching for Datacast wherever you get podcasts, or click one of the links below: Listen on Spotify, Listen on Apple Podcasts, Listen on Google Podcasts.
If you're new, see the podcast homepage for the most recent episodes to listen to, or browse the full guest list.

DataCast
Episode 66: Monitoring Models in Production with Emeli Dral

DataCast

Play Episode Listen Later Jun 9, 2021 46:16


Show Notes:
(02:07) Emeli shared her educational background getting degrees in Applied Mathematics and Informatics from the Peoples' Friendship University of Russia in the early 2010s.
(04:24) Emeli went over her experience getting a Master's Degree at Yandex School of Data Analysis.
(07:06) Emeli reflected on lessons learned from her first job out of university working as a Software Developer at Rambler, one of the biggest Russian web portals.
(09:33) Emeli walked over her first year as a Data Scientist developing e-commerce recommendation systems at Yandex.
(13:38) Emeli discussed core projects accomplished as the Chief Data Scientist at Yandex Data Factory, Yandex's end-to-end data platform.
(17:52) Emeli shared her learnings transitioning from an IC to a manager role.
(19:21) Emeli mentioned key components of success for industrial AI, given her time as the co-founder and Chief Data Scientist at Mechanica AI.
(22:40) Emeli dissected the makings of her Coursera specializations — “Machine Learning and Data Analysis” and “Big Data Essentials.”
(26:14) Emeli discussed her teaching activities at Moscow Institute of Physics and Technology, Yandex School of Data Analysis, Harbour.Space, and Graduate School of Management — St. Petersburg State University.
(30:12) Emeli shared the story behind the founding of Evidently AI, which is building a human interface to machine learning, so that companies can trust, monitor, and improve the performance of their AI solutions.
(32:32) Emeli explained the concept of model monitoring and exposed the monitoring gap in the enterprise (read Part 1 and Part 2 of the Monitoring series).
(34:13) Emeli looked at possible data quality and integrity issues while proposing how to track them (read Part 3, Part 4, and Part 5 of the Monitoring series).
(36:47) Emeli revealed the pros and cons of building an open-source product.
(39:13) Emeli talked about prioritizing product roadmap for Evidently AI.
(41:24) Emeli described the data community in Moscow.
(42:03) Closing segment.
Emeli's Contact Info: LinkedIn, Twitter, Coursera, GitHub, Medium
Evidently AI's Resources: Website, Twitter, LinkedIn, GitHub, Documentation
Mentioned Content
Blog Posts:
ML Monitoring, Part 1: What Is It and How It Differs? (Aug 2020)
ML Monitoring, Part 2: Who Should Care and What We Are Missing? (Aug 2020)
ML Monitoring, Part 3: What Can Go Wrong With Your Data? (Sep 2020)
ML Monitoring, Part 4: How To Track Data Quality and Data Integrity? (Oct 2020)
ML Monitoring, Part 5: Why Should You Care About Data And Concept Drift? (Nov 2020)
ML Monitoring, Part 6: Can You Build a Machine Learning Model to Monitor Another Model? (April 2021)
Courses: “Machine Learning and Data Analysis”, “Big Data Essentials”
People: Yann LeCun (Professor at NYU, Chief AI Scientist at Facebook), Tomas Mikolov (the creator of Word2Vec, ex-scientist at Google and Facebook), Andrew Ng (Professor at Stanford, Co-Founder of Google Brain, Coursera, and Landing AI, Ex-Chief Scientist at Baidu)
Book: “The Elements of Statistical Learning” (by Trevor Hastie, Robert Tibshirani, and Jerome Friedman)
New Updates: Since the podcast was recorded, a lot has happened at Evidently! You can use this open-source tool (https://github.com/evidentlyai/evidently) to generate a variety of interactive reports on the ML model performance and integrate it into your pipelines using JSON profiles. This monitoring tutorial is a great showcase of what can go wrong with your models in production and how to keep an eye on them: https://evidentlyai.com/blog/tutorial-1-model-analytics-in-production
About The Show: Datacast features long-form conversations with practitioners and researchers in the data community to walk through their professional journey and unpack the lessons learned along the way. I invite guests coming from a wide range of career paths - from scientists and analysts to founders and investors — to analyze the case for using data in the real world and extract their mental models (“the WHY”) behind their pursuits. Hopefully, these conversations can serve as valuable tools for early-stage data professionals as they navigate their own careers in the exciting data universe.
Datacast is produced and edited by James Le. Get in touch with feedback or guest suggestions by emailing khanhle.1013@gmail.com.
Subscribe by searching for Datacast wherever you get podcasts, or click one of the links below: Listen on Spotify, Listen on Apple Podcasts, Listen on Google Podcasts.
If you're new, see the podcast homepage for the most recent episodes to listen to, or browse the full guest list.

Les Petites Histoires Du SEO
Le Natural Language Processing c'est quoi ? - Ep. 4 - LPHS

Les Petites Histoires Du SEO

Play Episode Listen Later May 30, 2021 7:20


What is Natural Language Processing? And what does it have to do with SEO? Natural Language Processing aims to enable machines to understand human language, and it is used in particular in information retrieval. It is a technology Google uses to process, understand, and rank the content of web pages, as well as users' search queries. Historically, a major theory was developed in the mid-1950s: the distributional hypothesis. It posits that words occurring in similar contexts tend to have similar meanings. Several decades later, in 1983, Gerard Salton proposed the vector space model, which represents text documents or lists of words as vectors, that is, as numerical values. In parallel, the same Gerard Salton proposed a statistical weighting method, called TF-IDF, to evaluate the importance of a term, now a numerical value, within a document. From the 2010s onward, artificial neural networks began to be used in NLP. In 2013, algorithms trained with neural networks and developed by Google's teams produced Word2Vec, a word embedding algorithm able to identify relationships between words by taking into account the context in which those words, converted into vectors, appear. Since 2013, Google has kept pushing the boundaries of natural language processing. One example is BERT, the algorithm it has used since 2019 to understand user queries even more precisely. In late 2020, Google announced that its "passage indexing" update lets it identify the specific passage of a piece of content that, in its view, best answers the user's query. In this way, Google can return an excerpt of a page in response to a search, even if the page as a whole is only loosely related to the user's request. Clearly, Google's understanding of your content is precise. Advances in natural language processing show that today it is completely counterproductive to stuff your content with the keyword you want to rank for. Likewise, long, diluted texts achieve nothing; quite the opposite. Google wants to surface precise texts that get to the point and are clear about the problem they set out to answer, both overall and in each of the sub-topics covered. Always keep in mind: what Google wants is to display the most relevant answers to the user's query. To optimize a piece of content, you must therefore first and foremost be clear about the problem your users face that you intend to answer. And rather than stuffing your page with the same keyword you want to rank for, ask yourself which terms and themes surround it and come up regularly when people discuss the subject you want to address. Structure your content accordingly. Each topic related to your main subject can become a sub-section or a dedicated paragraph. 
This way of structuring your content will please readers and the search engine alike, and it is the winning combination to move closer to the top positions in Google's results pages. Find the podcast "Qu'est-ce que le Natural Language Processing" on YouTube: https://www.youtube.com/watch?v=5eGX_aturVM
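
The TF-IDF weighting mentioned in the episode is easy to see in a few lines. Below is a minimal sketch using scikit-learn's TfidfVectorizer; the documents are made-up examples, not anything from the show.

```python
# A minimal TF-IDF sketch with scikit-learn; the documents are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "natural language processing helps search engines understand queries",
    "google uses natural language processing to rank web pages",
    "tf idf weighs how important a term is within a document",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)          # sparse matrix: documents x terms

# Show each document as its three highest-weighted (term, weight) pairs.
terms = vectorizer.get_feature_names_out()
for i, row in enumerate(tfidf.toarray()):
    weighted = sorted(zip(terms, row), key=lambda t: -t[1])[:3]
    print(f"doc {i}:", [(t, round(w, 2)) for t, w in weighted])
```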

The Thesis Review
[25] Tomas Mikolov - Statistical Language Models Based on Neural Networks

The Thesis Review

Play Episode Listen Later May 14, 2021 79:17


Tomas Mikolov is a Senior Researcher at the Czech Institute of Informatics, Robotics, and Cybernetics. His research has covered topics in natural language understanding and representation learning, including Word2Vec, as well as complexity. Tomas's PhD thesis is titled "Statistical Language Models Based on Neural Networks", which he completed in 2012 at the Brno University of Technology. We discuss compression and recurrent language models, the backstory behind Word2Vec, and his recent work on complexity & automata. Episode notes: https://cs.nyu.edu/~welleck/episode25.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at www.patreon.com/thesisreview or www.buymeacoffee.com/thesisreview

Machine Learning Podcast
#022 ML Татьяна Шаврина. Эволюция подходов к обработке естественного языка (NLP)

Machine Learning Podcast

Play Episode Listen Later Feb 5, 2021 62:20


Today's guest is Tatyana Shavrina, team lead of the AGI NLP team and chief technology expert at SberDevices, a PhD student at HSE University, and simply a very pleasant and interesting person to talk to. We discussed how approaches to natural language processing have changed over time, which of them turned out to be revolutionary for the field, and which were part of its natural evolution. Word2vec, Seq2seq, Transformer, GPT, BERT: if these names mean little to you but you want to learn more, this episode is for you. And even if you already know all of this, Tatyana is a pleasure to listen to! Episode links: A model evaluation methodology based on tests for strong AI - https://russiansuperglue.com/ The book "Introduction to Information Retrieval" by Christopher Manning - https://www.ozon.ru/product/vvedenie-v-informatsionnyy-poisk-168021950/?utm_source=google&utm_medium=cpc&utm_campaign=RF_Product_Shopping_Books_newclients_super&gclid=CjwKCAiA9vOABhBfEiwATCi7GOdEOcDm_r9sxEWggOaUhpGnDaflijxaYDEXAjIsGpCKD1pAubW2exoCrf8QAvD_BwE MIT course "Advanced Natural Language Processing" https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-864-advanced-natural-language-processing-fall-2005/ Cambridge NLP course - https://www.cl.cam.ac.uk/teaching/1718/NLP/ I would appreciate your feedback! Leave comments wherever you can, for example in Apple Podcasts. They will help make the podcast better! Write what was clear, what was not, which topics to cover, which guests to invite, and in general which direction this podcast should take :) Subscribe to the Telegram channel "Become a machine learning specialist" (https://t.me/toBeAnMLspecialist) The podcast author's Telegram (https://t.me/kmsint) You can also reach me by email: kms101@yandex.ru The podcast is now also available on YouTube (https://www.youtube.com/channel/UCzvfXLNpB2Bbf32dc7a8oDQ?) and Yandex.Music https://music.yandex.ru/album/9781458

Code Logic
Word Embeddings - A simple introduction to word2vec

Code Logic

Play Episode Listen Later Jan 13, 2021 4:02


Hey guys, welcome to another episode on word embeddings! In this episode we talk about another popularly used word embedding technique known as word2vec. We use word2vec to capture contextual meaning in our vector representations. I've found a useful reading on word2vec; do read it for an in-depth explanation. P.S. Sorry for always posting episodes after a significant delay; I'm learning various things myself, I have different blogs to handle and multiple projects under way, so my schedule is packed almost every day. I hope you all get some value from my podcasts and that they help you build an intuitive understanding of various topics. See you in the next podcast episode!
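
As a companion to the episode, here is a minimal sketch of training a word2vec model with the gensim library on a toy corpus; the sentences and hyperparameter values are placeholders, not recommendations.

```python
# Minimal word2vec training sketch with gensim; the toy corpus is illustrative only.
from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "pets"],
]

model = Word2Vec(
    sentences,
    vector_size=50,   # dimensionality of the word vectors
    window=2,         # context window on each side of the target word
    min_count=1,      # keep every word, even rare ones (fine for a toy corpus)
    sg=1,             # 1 = skip-gram; 0 = CBOW
)

print(model.wv["cat"][:5])                    # first few dimensions of the "cat" vector
print(model.wv.most_similar("cat", topn=3))   # nearest neighbours in vector space
```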

PaperPlayer biorxiv bioinformatics
Spec2Vec: Improved mass spectral similarity scoring through learning of structural relationships

PaperPlayer biorxiv bioinformatics

Play Episode Listen Later Aug 12, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.08.11.245928v1?rss=1 Authors: Huber, F., Ridder, L., Rogers, S., van der Hooft, J. J. Abstract: Spectral similarity is used as a proxy for structural similarity in many tandem mass spectrometry (MS/MS) based metabolomics analyses, such as library matching and molecular networking. This is based upon the assumption that spectral similarity is a good proxy for structural similarity. Although weaknesses in the relationship between common spectral similarity scores and the true structural similarities have been pointed out, little development of alternative scores has been undertaken. Here, we introduce Spec2Vec, a novel spectral similarity score inspired by a natural language processing algorithm -- Word2Vec. Where Word2Vec learns relationships between words in sentences, Spec2Vec does so for mass fragments and neutral losses in MS/MS spectra. The spectral similarity score is based on spectral embeddings learnt from the fragmental relationships within a large set of spectral data. Using a dataset derived from GNPS MS/MS libraries including spectra for nearly 13,000 unique molecules, we show how Spec2Vec scores are more proportional to structural similarity of molecules than the commonly used cosine score and its derivative, the modified cosine score. We also demonstrate the advantages of Spec2Vec in library searching for both exact matches and analogues as well as in molecular networking. Finally, Spec2Vec is also computationally more scalable allowing us to search for structural analogues in a large database within seconds. Copy rights belong to original authors. Visit the link for more info
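
To make the word2vec analogy concrete, here is a rough sketch of the idea (not the authors' actual implementation): treat each spectrum as a "sentence" of discretized peak tokens, learn embeddings for those tokens with gensim's Word2Vec, and compare spectra by the cosine similarity of their averaged token vectors. The token names and parameters below are invented for illustration.

```python
# Rough sketch of a Spec2Vec-style similarity; NOT the authors' code.
# Each spectrum is a "sentence" whose "words" are discretized peak / loss tokens.
import numpy as np
from gensim.models import Word2Vec

spectra_as_tokens = [
    ["peak@77.04", "peak@105.03", "loss@28.00"],   # invented tokens
    ["peak@77.04", "peak@91.05", "loss@28.00"],
    ["peak@149.02", "peak@167.03", "loss@18.01"],
]

model = Word2Vec(spectra_as_tokens, vector_size=32, window=10, min_count=1, sg=0)

def embed(spectrum):
    """Average the token vectors of one spectrum (the real method weights by intensity)."""
    return np.mean([model.wv[t] for t in spectrum], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embed(spectra_as_tokens[0]), embed(spectra_as_tokens[1])))
```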

The Data Life Podcast
22: Transfer Learning for NLP - With Paul Azunre

The Data Life Podcast

Play Episode Listen Later Apr 13, 2020 46:46


In this episode, we are talking with Paul Azunre. Paul is one of the world's experts in the area of Transfer Learning for NLP and is also an author of the upcoming book Transfer Learning for NLP published by Manning Publications. In this episode we talk about things such as: 1) Paul's background and how his background in maths and optimization as well as fake news detection got him started in transfer learning in NLP. 2) How Paul got started with the book, book writing process as well as tips to the listeners for writing a technical book. 3) High level summary of transfer learning in both computer vision and NLP and why this is the ImageNet moment of NLP. 4) Why ML and NLP practitioners today should be excited about transfer learning (such as how students in Ghana are able to build their own Google Translate using transfer learning) 5) How BERT, ELMo and ALBERT work at the high level and how they differ from traditional techniques like Word2Vec or FastText. 6) Differences between BERT, ELMo and ALBERT. 7) What makes Paul's new book a must-read for anyone interested in this field. ✨Paul's Info

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
NLP for Mapping Physics Research with Matteo Chinazzi - #353

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Mar 2, 2020 34:12


Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University focused on in his paper Mapping the Physics Research Space: a Machine Learning Approach, along with co-authors including former TWIML AI Podcast guest Bruno Gonçalves. In addition to predicting the trajectory of physics research, Matteo is also active in the computational epidemiology field. His work in that area involves building simulators that can model the spread of diseases like Zika or the seasonal flu at a global scale.  Check out our full article on this episode at twimlai.com/talk/353.

The Data Life Podcast
16: Getting Started with Natural Language Processing

The Data Life Podcast

Play Episode Listen Later Oct 5, 2019 19:31


So many tweets, news articles, and other pieces of unstructured text surround us. How do we make sense of it all? Natural language processing, or NLP, can help. NLP refers to algorithms that process, understand and generate aspects of natural language, either in text or in spoken voice. In this episode we cover some of the common techniques in NLP to help you get started in this exciting field! We cover several tasks in an NLP pipeline: 1. Tokenization and punctuation removal 2. Stemming and lemmatization 3. One-hot vectors 4. Word embeddings, including Word2Vec and GloVe 5. Recurrent Neural Networks and LSTMs 6. tf and tf-idf approaches - when to use word embeddings vs. tf / tf-idf approaches 7. Generating text using encoder-decoder or sequence-to-sequence models Some resources: 1. Sequence Models - course by Andrew Ng on Coursera - one of the best courses I have seen on this topic! https://www.coursera.org/learn/nlp-sequence-models 2. Awesome collection of resources for NLP for Python, C++, Scala etc. and popular resource: https://github.com/keon/awesome-nlp 3. Overview of Text Similarity Metrics (a blog written by me on Medium): https://towardsdatascience.com/overview-of-text-similarity-metrics-3397c4601f50 4. How to train custom word embeddings on a GPU https://towardsdatascience.com/how-to-train-custom-word-embeddings-using-gpu-on-aws-f62727a1e3f6 Thanks for listening, please support this podcast by following the link in the end. --- Send in a voice message: https://anchor.fm/the-data-life-podcast/message Support this podcast: https://anchor.fm/the-data-life-podcast/support
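
A few of the pipeline steps listed in the episode (tokenization, stop-word handling, and tf-idf features feeding a classifier) can be sketched in a handful of lines with scikit-learn; the tiny labelled dataset below is invented for illustration.

```python
# Minimal text-classification pipeline sketch: tokenization + tf-idf + linear model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product, works well", "terrible, broke after a day",
         "really happy with it", "waste of money"]        # invented examples
labels = [1, 0, 1, 0]                                     # 1 = positive, 0 = negative

clf = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),  # tokenize and weight terms
    LogisticRegression(),
)
clf.fit(texts, labels)
print(clf.predict(["works great", "broke immediately"]))
```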

Biznes Myśli
BM64: Przetwarzanie języka naturalnego w biznesie

Biznes Myśli

Play Episode Listen Later Sep 2, 2019 62:01


NLP, or natural language processing, can be very helpful from a practical point of view. It can improve product quality and customer satisfaction, and ultimately increase a company's value. NLP is flourishing right now and a lot is happening in this area. When you hear about NLP, you will certainly want to know about the Word2vec algorithm and understand how it works, because then you can better understand what kinds of problems it can solve. https://biznesmysli.pl/64

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Identifying New Materials with NLP with Anubhav Jain - TWIML Talk #291

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Aug 15, 2019 39:54


Today we are joined by Anubhav Jain, Staff Scientist & Chemist at Lawrence Berkeley National Lab. Anubhav leads the Hacking Materials Research Group, where his research focuses on applying computing to accelerate the process of finding new materials for functional applications. With the immense amount of published scientific research out there, it can be difficult to understand how that information can be applied to future studies, let alone find a way to read it all. In this episode we discuss: - His latest paper, 'Unsupervised word embeddings capture latent knowledge from materials science literature' - The design of a system that takes the literature and uses natural language processing to analyze, synthesize and then conceptualize complex material science concepts - How the method is shown to recommend materials for functional applications in the future - scientific literature mining at its best. Check out the complete show notes at twimlai.com/talk/291.

airhacks.fm podcast with adam bien
KISS Java EE, MicroProfile, AI, (Deep) Machine Learning

airhacks.fm podcast with adam bien

Play Episode Listen Later Aug 10, 2019 81:40


An airhacks.fm conversation with Pavel Pscheidl (@PavelPscheidl) about: a 75 MHz Pentium 1 at age 12, first "hello world" at 17, a Quake 3 friend as programming coach, starting with Java 1.6 at the University of Hradec Kralove, second "hello world" with Operation Flashpoint, the third "hello world" was a Swing Java application as an introduction to object-oriented programming, introduction to enterprise Java in the 3rd year at the university, first commercial banking Java EE 6 / WebLogic project in Prague with mobile devices, working full time during the studies, the first Java EE project was really successful, 2 months development time, one DTO, no superfluous layers, using enunciate to generate the REST API, CDI and JAX-RS are a strong foundation, fast JSF, CDI and JAX-RS deployments, the first beep, the War of Frameworks, pragmatic Java EE, a "no frameworks" project at a telco, reverse engineering Java EE, getting questions answered at airhacks.tv, working on a PhD and statistics, starting at h2o.ai, h2o is a Silicon Valley startup, h2o started as a distributed key-value store with involvement of Cliff Click, machine learning algorithms were introduced on top of the distributed cache - the advent of h2o, h2o is an open source company - see GitHub, Driverless AI is the commercial product, Driverless AI automates cumbersome tasks, all AI heavy lifting is written in Java, h2o provides a custom java.util.Map implementation as a distributed cache, random forest is great for outlier detection, the computer vision library OpenCV, Gradient Boosting Machine (GBM), the open source airlines dataset, monitoring Java EE request processing queues with GBM, Generalized Linear Model (GLM), GBM vs. GLM, GBM is more explainable with the decision tree as output, XGBoost, at h2o XGBoost is written in C and comes with a JNI Java interface, XGBoost works well on GPUs, XGBoost is like GBM but optimized for GPUs, Word2vec, Deep Learning (Neural Networks), h2o generates a directly usable archive with the trained model -- directly usable in Java, K-Means, k-means will try to find the answer without a teacher, AI is just predictive statistics on steroids, Isolation Random Forest, IRF was designed for outlier detection, and K-Means was not, Naïve Bayes Classifier is rarely used in practice - it assumes no relation between the features, Stacking is the combination of algorithms to improve the results, AutoML: Automatic Machine Learning, AutoML will try to find the right combination of algorithms to match the outcome, h2o provides a set of connectors: CSV, JDBC, Amazon S3, Google Cloud Storage, applying AI to Java EE logs, the amount of training data depends on the number of features, for each feature you will need approx. 30 observations, h2o world - the conference, cancer prediction with machine learning, preserving wildlife with AI, using AI for spider categorization Pavel Pscheidl on twitter: @PavelPscheidl, Pavel's blog: pavel.cool
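
As a rough illustration of the GBM workflow discussed in the episode, here is a hedged sketch using the open-source h2o Python package; the file name, target column, and predictor columns are placeholders, not the actual airlines dataset schema.

```python
# Hedged sketch of training a Gradient Boosting Machine with h2o.
# "flights.csv" and the "Delayed" target column are placeholders.
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()                                        # start or connect to a local H2O cluster
frame = h2o.import_file("flights.csv")            # placeholder dataset path

response = "Delayed"                              # placeholder binary target column
frame[response] = frame[response].asfactor()      # treat the target as categorical
predictors = [c for c in frame.columns if c != response]

gbm = H2OGradientBoostingEstimator(ntrees=50, max_depth=5, seed=42)
gbm.train(x=predictors, y=response, training_frame=frame)

print(gbm.model_performance(frame).auc())         # training AUC, just to show the API shape
```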

Riesgo Existencial
Cuanta Ciencia 04 - 10 de Julio de 2019

Riesgo Existencial

Play Episode Listen Later Jul 12, 2019 4:06


Welcome to Cuanta Ciencia, where we talk about the most curious discoveries in science. This program is brought to you thanks to the support we receive on Patreon from people like Jaime Rosales. You can also support us at Patreon.com/CuantoContenido, where we also have shows focused on film and comics. In this episode we talk about Thinking Tentacles, Predictive Artificial Intelligence, and Exercise and Synapses. This and more content is available at www.patreon.com/cuantocontenido, and give us a like at www.facebook.com/cuantocontenido, it costs you nothing. Cuanta Ciencia 04 - News of interest as of July 10, 2019. ____________________ Thinking Tentacles: Octopuses are among the most fascinating creatures on this planet, thanks to their intelligence and dexterity. It was recently discovered that not all of their intelligence comes from the brain, but from their whole body, since cephalopods evolved in a radically different way from the rest of the living beings on the planet. Why? Because instead of having a centralized nervous system like us vertebrates, two thirds of their neurons are distributed across their tentacles, which can make decisions on their own without receiving orders from the brain. As neuroscientist Dominic Sivitilli of the University of Washington puts it: "It's an alternative model for intelligence. It gives us an understanding as to the diversity of cognition in the world, and perhaps the universe." Most fascinating of all, although the tentacles do not communicate with the brain, they do communicate with other parts of the octopus's body and can carry out complex actions while the brain focuses on other activities. Dominic Sivitilli, Behavioural Neuroscientist. Source: https://www.sciencealert.com/here-s-how-octopus-arms-make-decisions-without-input-from-the-brain ___________________________________________ Will artificial intelligences replace us? Maybe yes, maybe no, but knowing how to use them can be a great help. Researchers at Lawrence Berkeley National Laboratory developed one that, through machine learning, has managed to identify things that we mere humans miss. How does it work? The researchers fed the algorithm known as Word2vec more than 3 million abstracts of materials-science papers published in more than a thousand journals between 1922 and 2018. More than 500,000 distinct words from these abstracts were needed to establish mathematical connections across the texts, from which predictions could be generated. The team's main focus was thermoelectric materials. Based on all the texts analyzed, the artificial intelligence was able to determine which material had the best thermoelectric properties, and most astonishing of all, when it had only seen materials published up to 2008, Word2Vec was able to predict which materials would appear in future studies. 
"I honestly didn't expect the algorithm to be so predictive of future results," says Anubhav Jain, who led the team at Berkeley. "I had thought maybe the algorithm could be descriptive of what people had done before but not come up with these different connections." He added that if this algorithm had been developed earlier, some materials could have been discovered years sooner. There you have it: another case where tools like machine learning can help us sift through data and get better results in combination with human coordination. Anubhav Jain, Berkeley Lab's Energy Storage & Distributed Resources Division. Source: https://www.iflscience.com/technology/artificial-intelligence-set-loose-on-old-scientific-papers-discovers-something-humans-missed/ ___________________________________________ Exercise and Synapses: Moving is healthy and helps not only the muscles but also the brain. That is what recent studies at Oregon Health & Science University show. While it is common knowledge that exercise is good for your health, analyzing short bouts of exercise in mice revealed that they increased the number of synapses in the rodents' hippocampus. A closer look showed that one gene in particular increased its activity: Mtss1L, which had never been considered in previous studies. This gene encodes a protein that promotes growth of dendritic spines, which is where synapses occur. "Previous studies of exercise almost all focus on sustained exercise. As neuroscientists, it's not that we don't care about the benefits on the heart and muscles but we wanted to know the brain-specific benefit of exercise," says Dr. Gary Westbrook, M.D., Senior Scientist at the OHSU Vollum Institute. If you feel tense from too much mental work, go out and get a bit of exercise. It will not only clear your head, it will also improve neuronal function and help your brain run more smoothly. Source: https://news.ohsu.edu/2019/07/02/study-reveals-a-short-bout-of-exercise-enhances-brain-function ___________________________________________ And with that we wrap up this episode of Cuanta Ciencia. Remember that this project can only continue with your support. If you liked this video, give it a like, leave your comments, and share, it costs you nothing. This and more episodes will be available on our Facebook page, facebook.com/CuantoContenido. Looking for other kinds of shows? Have you listened to the Filmsteria podcast? You will have a good time listening to Penny, Josué, and Alejandro talking about movies and showbiz gossip in what is something like the "Ventaneando" of film podcasts. You can find them on Dixo, Apple Podcasts, Spotify, and similar places. And with that I'll say goodbye, I'm Dan Campos, thanks for joining me. See you in another episode, on the small screen.

Podcasts – Weird Things
WT: Dr. Skynet

Podcasts – Weird Things

Play Episode Listen Later Jul 9, 2019 65:03


The Earth shook–twice!! What’s more dangerous: earthquakes or tornados? How dangerous would an extended solar flare be today? SpaceX updates: Raptor engine test and a caught fairing. How much is acceptable cell phone camera image processing? Another deepfakes Pandora’s Box is opened. Unsupervised machine learning (Word2vec) may find science discoveries just from reading other research […]

Linear Digressions
Revisiting Biased Word Embeddings

Linear Digressions

Play Episode Listen Later Jun 23, 2019 18:09


The topic of bias in word embeddings gets yet another pass this week. It all started a few years ago, when an analogy task performed on Word2Vec embeddings showed some indications of gender bias around professions (as well as other forms of social bias getting reproduced in the algorithm’s embeddings). We covered the topic again a while later, covering methods for de-biasing embeddings to counteract this effect. And now we’re back, with a second pass on the original Word2Vec analogy task, but where the researchers deconstructed the “rules” of the analogies themselves and came to an interesting discovery: the bias seems to be, at least in part, an artifact of the analogy construction method. Intrigued? So were we… Relevant link: https://arxiv.org/abs/1905.09866

Data Science at Home
Episode 64: Get the best shot at NLP sentiment analysis

Data Science at Home

Play Episode Listen Later Jun 14, 2019 12:58


The rapid diffusion of social media like Facebook and Twitter, and the massive use of different types of forums like Reddit, Quora, etc., is producing an impressive amount of text data every day. There is one specific activity that many business owners have been contemplating over the last five years: identifying the social sentiment of their brand by analysing their users' conversations. In this episode I explain how one can get the best shot at classifying sentences with deep learning and word embeddings. Additional material: Schematic representation of how to learn a word embedding matrix E by training a neural network that, given the previous M words, predicts the next word in a sentence. Word2Vec example source code: https://gist.github.com/rlangone/ded90673f65e932fd14ae53a26e89eee#file-word2vec_example-py References: [1] Mikolov, T. et al., "Distributed Representations of Words and Phrases and their Compositionality", Advances in Neural Information Processing Systems 26, pages 3111-3119, 2013. [2] The Best Embedding Method for Sentiment Classification, https://medium.com/@bramblexu/blog-md-34c5d082a8c5 [3] The state of sentiment analysis: word, sub-word and character embedding, https://amethix.com/state-of-sentiment-analysis-embedding/
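
In the same spirit as the schematic and the linked gist, here is a small sketch (on an invented toy dataset) of turning sentences into fixed-length vectors by averaging word2vec embeddings and training a simple classifier on top.

```python
# Sketch: sentence sentiment via averaged word vectors + logistic regression.
# The toy corpus and labels are invented; real work needs far more data.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

corpus = [("i love this movie", 1), ("what a wonderful film", 1),
          ("i hate this movie", 0), ("what a terrible film", 0)]

tokenized = [text.split() for text, _ in corpus]
w2v = Word2Vec(tokenized, vector_size=25, window=2, min_count=1, sg=1)

def sentence_vector(tokens):
    """Average the word vectors of a sentence into one fixed-length vector."""
    return np.mean([w2v.wv[t] for t in tokens if t in w2v.wv], axis=0)

X = np.vstack([sentence_vector(t) for t in tokenized])
y = [label for _, label in corpus]

clf = LogisticRegression().fit(X, y)
print(clf.predict([sentence_vector("i love this film".split())]))
```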

Elevate with the Ed Lab @ Catlin Gabel
Predicting the Stock Market: a senior’s project

Elevate with the Ed Lab @ Catlin Gabel

Play Episode Listen Later Jun 12, 2019 16:13


In this episode of Elevate, Rob sits down with senior Sasha Agapiev to explore the details of a year-long investigation he pursued in the Honors Computer Science class with Andrew Merrill.  Sasha's interest in the world of quantitative finance led him to design a project that explored ways to use natural language processing algorithms to analyze historical stock market prices, and to potentially find some new market patterns. For more details about Sasha's project check out his website: https://carpeventures.com/  For more information about Word2Vec, the neural network software he used, check out: https://skymind.ai/wiki/word2vec If you have questions about this episode, are curious about this podcast in general, or have people or topics you think we should cover in coming episodes, please email us at vannoodr@catlin.edu --- Send in a voice message: https://anchor.fm/elevatelearning/message

DataBuzzWord
#17. Charles Cohen - Bodyguard.ai - l'IA au secours du CyberHarcelement

DataBuzzWord

Play Episode Listen Later Mar 28, 2019 21:22


The community Slack: https://bigdatahebdo.slack.com/ Send an email to contact@bigdatahebdo.com with your email address in the body of the message. Bodyguard: https://bodyguard.ai https://twitter.com/Bodyguard_app iOS app: https://www.bodyguard.ai/ios.php Android app: https://play.google.com/store/apps/details?id=clash.charles.hgh Word2Vec: https://fr.wikipedia.org/wiki/Word2vec FastText: https://fasttext.cc/ Our Twitter accounts: https://twitter.com/charleschn7 https://twitter.com/JiliJeanlouis DataBuzzWord on social media: Facebook: https://www.facebook.com/Databuzzword Twitter: https://twitter.com/Databuzzword Instagram: https://www.instagram.com/databuzzword/ YouTube: http://bit.ly/YT-DBW

Data Skeptic
word2vec

Data Skeptic

Play Episode Listen Later Feb 1, 2019 31:27


Word2vec is an unsupervised machine learning model which is able to capture semantic information from the text it is trained on. The model is based on neural networks. Several large organizations like Google and Facebook have trained word embeddings (the result of word2vec) on large corpora and shared them for others to use. One of the key algorithmic ideas involved in word2vec is the continuous bag-of-words model (CBOW). In this episode, Kyle uses excerpts from the 1983 cinematic masterpiece War Games and challenges Linhda to guess a word Kyle leaves out of the transcript. This is similar to how word2vec is trained: it trains a neural network to predict a hidden word based on the words that appear before and after the missing location.
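
The fill-in-the-blank game Kyle plays maps directly onto how CBOW training pairs are built: the surrounding words are the input and the hidden word is the target. Here is a tiny sketch of constructing those pairs from a sentence (plain Python, illustrative only).

```python
# Sketch: building (context, target) pairs the way CBOW training data is generated.
def cbow_pairs(tokens, window=2):
    """For each position, pair the surrounding words with the hidden centre word."""
    pairs = []
    for i, target in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        pairs.append((context, target))
    return pairs

sentence = "shall we play a game".split()   # a nod to War Games
for context, target in cbow_pairs(sentence):
    print(context, "->", target)
```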

Linear Digressions
Re-release: Word2Vec

Linear Digressions

Play Episode Listen Later Dec 30, 2018 17:59


Bringing you another old classic this week, as we gear up for 2019! See you next week with new content. Word2Vec is probably the go-to algorithm for vectorizing text data these days.  Which makes sense, because it is wicked cool.  Word2Vec has it all: neural networks, skip-grams and bag-of-words implementations, a multiclass classifier that gets swapped out for a binary classifier, made-up dummy words, and a model that isn't actually used to predict anything (usually).  And all that's before we get to the part about how Word2Vec allows you to do algebra with text.  Seriously, this stuff is cool.
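
The "algebra with text" the episode mentions is the familiar analogy arithmetic on word vectors. With pretrained vectors loaded through gensim it looks roughly like this; "glove-wiki-gigaword-50" is assumed to be one of the sets exposed by gensim's downloader.

```python
# Sketch of "algebra with text": king - man + woman ≈ queen.
# "glove-wiki-gigaword-50" is assumed to be available via gensim's downloader.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")     # downloads pretrained word vectors

result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3)
print(result)   # typically puts "queen" at or near the top
```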

Machine Learning – Software Engineering Daily
Word2Vec with Adrian Colyer Holiday Repeat

Machine Learning – Software Engineering Daily

Play Episode Listen Later Dec 28, 2018 61:47


Originally posted on 13 September 2017. Machines understand the world through mathematical representations. In order to train a machine learning model, we need to describe everything in terms of numbers.  Images, words, and sounds are too abstract for a computer. But a series of numbers is a representation that we can all agree on, whether The post Word2Vec with Adrian Colyer Holiday Repeat appeared first on Software Engineering Daily.

Real Chatbot
Word2Vec Introduction for Chatbots

Real Chatbot

Play Episode Listen Later Aug 28, 2018 4:42


Learn what Word2Vec is and the different algorithms and approaches within it. Get a quick glimpse of why it is popular and a quick overview how it works.

Linear Digressions
Debiasing Word Embeddings

Linear Digressions

Play Episode Listen Later Dec 17, 2017 18:20


When we covered the Word2Vec algorithm for embedding words, we mentioned parenthetically that the word embeddings it produces can sometimes be a little bit less than ideal--in particular, gender bias from our society can creep into the embeddings and give results that are sexist. For example, occupational words like "doctor" and "nurse" are more highly aligned with "man" or "woman," which can create problems because these word embeddings are used in algorithms that help people find information or make decisions. However, a group of researchers has released a new paper detailing ways to de-bias the embeddings, so we retain gender info that's not particularly problematic (for example, "king" vs. "queen") while correcting bias.
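
One way the de-biasing idea can be sketched (loosely following the hard-debiasing recipe, not reproducing the paper's exact method) is to estimate a gender direction from definitional pairs like "he"/"she" and project it out of occupation words; everything below uses made-up vectors purely to show the arithmetic.

```python
# Simplified sketch of removing a gender direction from word vectors (numpy only).
# Loosely inspired by hard-debiasing; not the paper's full procedure.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def debias(word_vector, gender_direction):
    """Remove the component of a vector that lies along the gender direction."""
    g = unit(gender_direction)
    return word_vector - np.dot(word_vector, g) * g

# Example with made-up 4-d vectors, purely illustrative:
he, she = np.array([1.0, 0.2, 0.0, 0.1]), np.array([-1.0, 0.2, 0.1, 0.1])
doctor = np.array([0.6, 0.5, 0.3, 0.2])

gender_direction = he - she
print(debias(doctor, gender_direction))   # "doctor" with the he-she component removed
```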

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Word2Vec & Friends with Bruno Gonçalves - TWiML Talk #48

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Sep 18, 2017 33:41


This week I'm bringing you an interview with Bruno Goncalves, a Moore-Sloan Data Science Fellow at NYU. As you'll hear in the interview, Bruno is a longtime listener of the podcast. We were able to connect at the NY AI conference back in June after I noted on a previous show that I was interested in learning more about word2vec. Bruno graciously agreed to come on the show and walk us through an overview of word embeddings, word2vec and related ideas. He provides a great overview of not only word2vec but also related NLP concepts such as skip-gram, continuous bag-of-words, Node2Vec, and TF-IDF. Notes for this show can be found at twimlai.com/talk/48.

Machine Learning – Software Engineering Daily
Word2Vec with Adrian Colyer

Machine Learning – Software Engineering Daily

Play Episode Listen Later Sep 13, 2017 61:47


Machines understand the world through mathematical representations. In order to train a machine learning model, we need to describe everything in terms of numbers.  Images, words, and sounds are too abstract for a computer. But a series of numbers is a representation that we can all agree on, whether we are a computer or a The post Word2Vec with Adrian Colyer appeared first on Software Engineering Daily.

Greatest Hits – Software Engineering Daily
Word2Vec with Adrian Colyer

Greatest Hits – Software Engineering Daily

Play Episode Listen Later Sep 13, 2017 61:47


Machines understand the world through mathematical representations. In order to train a machine learning model, we need to describe everything in terms of numbers.  Images, words, and sounds are too abstract for a computer. But a series of numbers is a representation that we can all agree on, whether we are a computer or a The post Word2Vec with Adrian Colyer appeared first on Software Engineering Daily.

ZADevChat Podcast
71 - Is Intelligence an Algorithm? With Jade Abbott

ZADevChat Podcast

Play Episode Listen Later Aug 29, 2017 70:49


We chat to Jade Abbott from Retro Rabbit about artificial intelligence, broadly and more specifically about NLP and what that means for us. Chantal, Kenneth & Len talk to Jade about natural language processing, commonly referred to as NLP. What does it take to get a machine to understand what we're saying as people? Jade has always had a fascination with smart machines, from trying to build robots in school and now teaching machines to understand what we're saying. Jade takes a fairly complex topic and helps us come to terms with it. We question whether people, or intelligence, is algorithmic and what that means. Processing natural language is not without challenges and Jade walks us through the maze of terminology and some tools to get started with, and we have several resources below to help as well. What would happen if AI tries to write a movie? What happens if the movie is made? Importantly, neural nets are not the whole of AI. We wander around expert systems, random forests, and other great statistical models that are very useful and predictable. Is intelligence just an algorithm? What do you think? Let us know! Find and follow Jade online: * https://twitter.com/alienelf * http://github.com/jaderabbit * https://twitter.com/fmfyband * https://fmfy.bandcamp.com/ Jade has some repos with sample projects on GitHub: * https://github.com/jaderabbit/botcon2016 * https://github.com/jaderabbit/deepdreamsofelectricsheep * https://www.kaggle.com/jaderabbit/training-an-lstm-to-write-songs Jade offers some great resources not specifically covered in the show. For people looking to get into AI: * https://www.coursera.org/learn/machine-learning - Andrew Ng's Coursera Machine Learning course * Kaggle - http://www.kaggle.com/ * https://medium.freecodecamp.org/the-best-data-science-courses-on-the-internet-ranked-by-your-reviews-6dc5b910ea40 How Neural Networks Really Work: * https://www.youtube.com/watch?v=EInQoVLg_UY&t=78s How Neural Networks Really Work by Geoffrey Hinton Using Natural Language for AI * The key blog post on using deep learning for Natural Language Processing: http://karpathy.github.io/2015/05/21/rnn-effectiveness/ * Beautiful tutorial on Word2Vec http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/ * My kaggle notebook for training a neural network to generate songs: https://www.kaggle.com/jaderabbit/training-an-lstm-to-write-songs Here are some resources mentioned in the show: * Ex Machina - https://en.wikipedia.org/wiki/Ex_Machina_(film) * Alien Covenant - https://en.wikipedia.org/wiki/Alien:_Covenant * Marvin from Hitchhikers Guide - https://en.wikipedia.org/wiki/Marvin_(character) * Sherlock - https://en.wikipedia.org/wiki/Sherlock_(TV_series) * Natural Language Processing - https://en.wikipedia.org/wiki/Natural_language_processing * word2vec - https://en.wikipedia.org/wiki/Word2vec * Convolutional neural network - https://en.wikipedia.org/wiki/Convolutional_neural_network * Recurrent neural network - https://en.wikipedia.org/wiki/Recurrent_neural_network * Kaggle - https://www.kaggle.com/ * Sunspring | A Sci-Fi Short Film - https://www.youtube.com/watch?v=LY7x2Ihqjmc * Rabbiteer - https://rabbiteer.io/ And finally our picks Jade: * Kaggle - https://www.kaggle.com * Sunspring | A Sci-Fi Short Film - https://www.youtube.com/watch?v=LY7x2Ihqjmc * Creativity: how is AI impacting this human skill? 
- http://bit.ly/2wjumoD Chantal: * For Computers, Too, It's Hard to Learn to Speak Chinese - http://bit.ly/2graOJr Kenneth: * Westworld - https://en.wikipedia.org/wiki/Westworld_(TV_series) Len: * Instaparse - https://github.com/Engelberg/instaparse Thanks for listening! Stay in touch: * Website & newsletter - https://zadevchat.io * Socialize - https://twitter.com/zadevchat & http://facebook.com/ZADevChat/ * Suggestions and feedback - https://github.com/zadevchat/ping * Subscribe and rate in iTunes - http://bit.ly/zadevchat-itunes

Machine Learning Guide
022 Deep NLP 1

Machine Learning Guide

Play Episode Listen Later Jul 28, 2017 49:21


Recurrent Neural Networks (RNNs) and Word2Vec. ocdevel.com/mlg/22 for notes and resources

Data Driven
*DataScienceDaily* for June 16, 2017 – Facebook Gets Bots to Negotiate, Word2Vec, and 25 Billion Simulated Galaxies

Data Driven

Play Episode Listen Later Jun 16, 2017 2:45


Data Science Daily Show Notes @CNNnews18 (https://twitter.com/CNNnews18) @gcosma1 Word2Vec (skip-gram neural network model): PART 1 – Intuition by @ManishChablani https://medium.com/towards-data-science/word2vec-skip-gram-model-part-1-intuition-78614e4d6e0b (https://medium.com/towards-data-science/word2vec-skip-gram-model-part-1-intuition-78614e4d6e0b) … #DataScience #MachineLearning #NLP @Rbloggers   Using Partial Least Squares to conduct relative importance analysis in Displayr https://wp.me/pMm6L-Dwv (https://wp.me/pMm6L-Dwv)  #rstats #DataScience @gp_pulipaka   Revolutionary Supercomputer Code Simulates Entire Cosmos: “25 Billion Virtual Galaxies.” #BigData #DataScience #HPC  http://buff.ly/2suwKrI (http://buff.ly/2suwKrI)

Linear Digressions
Word2Vec

Linear Digressions

Play Episode Listen Later Apr 30, 2017 17:59


Word2Vec is probably the go-to algorithm for vectorizing text data these days.  Which makes sense, because it is wicked cool.  Word2Vec has it all: neural networks, skip-grams and bag-of-words implementations, a multiclass classifier that gets swapped out for a binary classifier, made-up dummy words, and a model that isn't actually used to predict anything (usually).  And all that's before we get to the part about how Word2Vec allows you to do algebra with text.  Seriously, this stuff is cool.

RARE PERSPECTIVES: The AI and Machine Learning Podcast
RRP #1: Tomáš Mikolov on word2vec and AI research at Microsoft, Google, Facebook

RARE PERSPECTIVES: The AI and Machine Learning Podcast

Play Episode Listen Later Feb 9, 2017 62:21


Episode Summary: Today I sat down with Tomáš Mikolov, my fellow Czech countryman whom most of you will know through his work on word2vec. But Tomáš has many more interesting things to say besides word2vec (although we cover word2vec too!): his beginnings with 8-bit graphics and games, living in NY compared to California, AI research at Microsoft vs Google vs ... Read More

SEO Radio
Search Geeks Speak: RankBrain Panel

SEO Radio

Play Episode Listen Later Apr 27, 2016 72:45


  RankBrain Panel Discussion: Blog Posts on RankBrain SEL Coverage on RankBrain RankBrain coverage on SEM Post Bill's coverage on RankBrain More on SEM Post Gary Illyes comments Christine S on RankBrain Bloomberg article Stone Temple on RB myths Word2Vec David's coverage on the Word2Vec patent Word2Vec Project Insights Parameter learning explained (PDF) HummingBird HummingBird post on the DOjo Bill on Hummingbird Other Reading Google Brain project Google Deep Mind purchase Efficient estimation of word representations in vector space Distributed Representations of Words and Phrases and their Compositionality (PDF) “Efficient Estimation of Word Representations in Vector Space” (PDF) Hosts: Terry Van Horne Dave Harry iTunes and the Dojo Radio iPhone App!

Center for Mind, Brain and Culture
The Large-Scale Structure of the Mental Dictionary: A Data Mining Approach Using Word2Vec, t-SNE, and GMeans

Center for Mind, Brain and Culture

Play Episode Listen Later Nov 17, 2015 56:30


Advancements in machine learning and data mining have already led to amazing breakthroughs in the natural sciences, including the unlocking of the human genome and the detection of subatomic particles. Such techniques promise to wield a similar impact on the study of mind. In my talk I will discuss how the large-scale structure of the human mental lexicon, roughly 50,000 words, can be recovered from billions of words at a level of resolution that includes the differentiation of word senses. Central to this effort are several machine learning and dimensionality reduction techniques, including deep learning, t-Distributed Stochastic Neighbor Embedding (t-SNE), and the clustering technique called GMeans. In addition to the extraction of the mental lexicon, I will discuss how an approach to topic modeling, based on neural networks, might be used to partially automate the process of theory generation. I also raise implications for research on physical and mental wellbeing. NEUROSCIENCE WORKSHOP: Dimensionality Reduction Friday, October 30, 2015 Saturday, October 31, 2015
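
The processing chain described in the talk (word embeddings, then dimensionality reduction with t-SNE, then clustering) can be sketched at toy scale as follows; GMeans is not available in scikit-learn, so KMeans stands in for it here, and the vocabulary and vectors are randomly generated stand-ins.

```python
# Toy sketch of the embed -> reduce (t-SNE) -> cluster pipeline from the talk.
# KMeans stands in for GMeans (not provided by scikit-learn); the data are invented.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vocab = [f"word_{i}" for i in range(60)]
embeddings = rng.normal(size=(60, 50))          # stand-in for word2vec vectors

coords = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(embeddings)
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)

for word, xy, c in list(zip(vocab, coords, clusters))[:5]:
    print(word, xy.round(2), "cluster", c)
```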

Center for Mind, Brain, and Culture
Neuroscience Workshop/Lecture (4 of 5) | Phil Wolff | The Large-Scale Structure of the Mental Dictionary: A Data Mining Approach Using Word2Vec, t-SNE, and GMeans

Center for Mind, Brain, and Culture

Play Episode Listen Later Oct 31, 2015 56:31


Advancements in machine learning and data mining have already led to amazing breakthroughs in the natural sciences, including the unlocking of the human genome and the detection of subatomic particles. Such techniques promise to wield a similar impact on the study of mind. In my talk I will discuss how the large-scale structure of the human mental lexicon, roughly 50,000 words, can be recovered from billions of words at a level of resolution that includes the differentiation of word senses. Central to this effort are several machine learning and dimensionality reduction techniques, including deep learning, t-Distributed Stochastic Neighbor Embedding (t-SNE), and the clustering technique called GMeans. In addition to the extraction of the mental lexicon, I will discuss how an approach to topic modeling, based on neural networks, might be used to partially automate the process of theory generation. I also raise implications for research on physical and mental wellbeing. NEUROSCIENCE WORKSHOP: Dimensionality Reduction Friday, October 30, 2015 Saturday, October 31, 2015