Podcasts about recommender systems

  • 59 podcasts
  • 105 episodes
  • 58m avg. duration
  • 1 episode every other week
  • Latest: Feb 19, 2026



Best podcasts about recommender systems

Latest podcast episodes about recommender systems

MLOps.community
Serving LLMs in Production: Performance, Cost & Scale // CAST AI Roundtable

MLOps.community

Play Episode Listen Later Feb 19, 2026 65:55


Roundtable CAST AI episode: Serving LLMs in Production: Performance, Cost & Scale.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
MLOps GPU Guide: https://go.mlops.community/gpuguide

// Abstract
Experimenting with LLMs is easy. Running them reliably and cost-effectively in production is where things break. Most AI teams never make it past demos and proofs of concept. A smaller group is pushing real workloads to production - and running into very real challenges around infrastructure efficiency, runaway cloud costs, and reliability at scale. This session is for engineers and platform teams moving beyond experimentation and building AI systems that actually hold up in production.

// Bio
Ioana Apetrei
Ioana is a Senior Product Manager at CAST AI, leading the AI Enabler product, an AI Gateway platform for cost-effective LLM infrastructure deployment. She brings 12 years of experience building B2C and B2B products reaching over 10 million users. Outside of work, she enjoys assembling puzzles and LEGOs and watching motorsports.

Igor Šušić
Igor is a founding Machine Learning Engineer at CAST AI's AI Enabler, where he focuses on optimizing inference and training at scale. With a strong background in Natural Language Processing (NLP) and Recommender Systems, Igor has been tackling the challenges of large-scale model optimization since long before transformers became mainstream. Prior to CAST AI, he worked at industry leaders like Bloomreach and Infobip, where he contributed to the development and deployment of large-scale AI and personalization systems from the early days of the field.

// Related Links
Website: https://cast.ai/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Ioana on LinkedIn: /ioanaapetrei/
Connect with Igor on LinkedIn: /igor-%C5%A1u%C5%A1i%C4%87/

Recsperts - Recommender Systems Experts
#31: Psychology-Aware Recommender Systems with Elisabeth Lex

Recsperts - Recommender Systems Experts

Play Episode Listen Later Feb 19, 2026 97:26


In episode 31 of Recsperts, I sit down with Elisabeth Lex, Full Professor of Human-Computer Interfaces and Inclusive Technologies at Graz University of Technology and a leading researcher at the intersection of recommender systems, psychology, and human-computer interaction. Together, we explore how recommender systems can become truly human-centric by integrating cognitive, emotional, and personality-aware models into their design.

Elisabeth begins by addressing a common reductionism in the field: treating users primarily as data points rather than as humans with goals, emotions, memories, and cognitive boundaries. We revisit the origins of psychology-informed recommendation, including the Grundy system - the first recommender system, built nearly 50 years ago - which framed book recommendation through stereotype modeling. From there, we discuss how the community's focus shifted toward solving recommendation mainly as an algorithmic optimization problem, often sidelining richer models of human decision-making.

We then map out the three major branches of psychology-informed RecSys - cognition-inspired, affect-aware, and personality-aware - and dive into practical examples. Elisabeth walks us through her work on modeling music re-listening behavior using cognitive architectures such as ACT-R (Adaptive Control of Thought-Rational) and shows how cognitive constructs like memory decay, attention, and familiarity can meaningfully augment standard approaches like collaborative filtering. We also explore how hybrid systems that combine cognitive models with collaborative filtering can yield not just higher accuracy but also more novelty, diversity, and clearer explanations.

Our conversation also turns to user-centric evaluation. Elisabeth argues that accuracy metrics alone cannot tell us whether a system is genuinely helpful. Instead, we must measure attitudes, perceptions, motivations, and emotional responses - while carefully accounting for cognitive biases, UI effects, and users' lived experiences.

Towards the end, Elisabeth discusses emerging research directions such as hybrid AI (symbolic + sub-symbolic methods), the role of LLMs and agents, the risks of replacing human studies with automated evaluations, and the responsibility our community has to understand users beyond their clicks.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:15) - About Elisabeth Lex
(07:55) - Grundy, the first Recommender System
(09:03) - Bridging the Gap between Psychology and Modern RecSys
(17:21) - On how and when Elisabeth became a Researcher
(21:39) - Survey on Psychology-Informed RecSys
(39:29) - Personality-Aware Recommendation
(49:43) - Affect- and Emotion-Aware Recommendation
(01:01:37) - Cognition-Inspired Recommendation and the ACT-R Framework
(01:14:39) - Combining Collaborative Filtering and ACT-R for Explainability
(01:21:26) - Human-Centered Design
(01:26:15) - Further Challenges and Closing Remarks

Links from the Episode:
Elisabeth Lex on LinkedIn
Website of Elisabeth
AI for Society Lab
First International Workshop on Recommender Systems for Sustainability and Social Good | co-located with RecSys 2024
Second International Workshop on Recommender Systems for Sustainability and Social Good | co-located with RecSys 2025
HyPer Workshop: Hybrid AI for Human-Centric Personalization
Tutorial on Psychology-Informed RecSys
ACT-R: Adaptive Control of Thought-Rational
POPROX: Platform for OPen Recommendation and Online eXperimentation

Papers:
Elaine Rich (1979): User Modeling via Stereotypes
Lex et al. (2021): Psychology-informed Recommender Systems
Reiter-Haas et al. (2021): Predicting Music Relistening Behavior Using the ACT-R Framework
Moscati et al. (2023): Integrating the ACT-R Framework with Collaborative Filtering for Explainable Sequential Music Recommendation
Tran et al. (2024): Transformers Meet ACT-R: Repeat-Aware and Sequential Listening Session Recommendation

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website

Data Skeptic
Healthy Friction in Job Recommender Systems

Data Skeptic

Play Episode Listen Later Feb 2, 2026 26:37


In this episode, host Kyle Polich speaks with Roan Schellingerhout, a fourth-year PhD student at Maastricht University, about explainable multi-stakeholder recommender systems for job recruitment. Roan discusses his research on creating AI-powered job matching systems that balance the needs of multiple stakeholders—job seekers, recruiters, HR professionals, and companies. The conversation explores different types of explanations for job recommendations, including textual, bar chart, and graph-based formats, with findings showing that lay users strongly prefer simple textual explanations over more technical visualizations. Roan shares insights from his "healthy friction" study, which tested whether users could distinguish between real AI-generated explanations and randomly generated ones, revealing that participants often used explanations as information sources rather than decision-making tools. The discussion delves into the technical architecture behind these systems, including the use of knowledge graphs built from tabular data, inference rules, and large language models to generate human-friendly explanations. Roan explains how his research aims to open the black box of recommender systems, making them more transparent and trustworthy for non-technical users. Looking forward, he discusses ongoing work on automated knowledge graph construction from resumes and job listings, research into fairness considerations around gender and location, and plans for real-world testing with actual job seekers. The episode concludes with Roan's vision for the future: AI systems that support rather than replace human recruiters, making the job search process less grueling while maintaining the essential human judgment that recruitment requires.  

Recsperts - Recommender Systems Experts
#30: Serendipity for Recommender Systems with Annelien Smets

Recsperts - Recommender Systems Experts

Play Episode Listen Later Jan 28, 2026 92:01


In episode 30 of Recsperts, I speak with Annelien Smets, Professor at Vrije Universiteit Brussel and Senior Researcher at imec-SMIT, about the value, perception, and practical design of serendipity in recommender systems. Annelien introduces her framework for understanding serendipity through intention, experience, and affordances, and explains the paradox of artificial serendipity - why it cannot be engineered, but only designed for.

We start by unpacking the paradox of serendipity: while serendipity cannot be engineered or planned, systems and environments can be designed to increase the likelihood that serendipitous experiences occur. Annelien explains why randomness alone is not enough and why serendipity always emerges from an interplay between an unexpected encounter and a user's ability to recognize its relevance and value.

A central part of our discussion focuses on Annelien's recent framework that distinguishes between intended, experienced, and afforded serendipity. We explore why organizations first need to clarify why they want serendipity - whether as an ideal, a common good, a mediator to achieve other goals (such as long-term retention or long-tail exposure), or even as a product feature in itself. From there, we dive into how users actually experience serendipity, drawing on qualitative interview research that identifies three core components: encounters must feel fortuitous, refreshing, and enriching. These components can manifest in different "flavors," such as taste broadening, taste deepening, or rediscovering forgotten interests.

We then move beyond algorithms to discuss affordances for serendipity - design principles that span content, user interfaces, and information access. Using examples from libraries, urban spaces, and digital platforms, Annelien shows why serendipity is a system-level property rather than a single metric or model tweak. We also discuss where serendipity can go wrong, including the Netflix "Surprise Me" feature, and why mismatched expectations can actually harm user experience.

To close, we reflect on open research questions, from measuring different types of serendipity to understanding how content types, business models, and platform economics shape what is possible. Annelien also challenges a common myth: serendipity does not automatically burst filter bubbles - and should not be treated as a silver bullet.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:57) - About Annelien Smets
(14:42) - Paradox and Definition of (Artificial) Serendipity
(27:04) - Intended Serendipity
(43:01) - Experienced Serendipity
(01:01:18) - Afforded Serendipity
(01:13:49) - Examples of Serendipity Going Wrong
(01:17:40) - Framework for Serendipity
(01:22:41) - Further Challenges and Closing Remarks

Links from the Episode:
Annelien Smets on LinkedIn
Website of Annelien
LinkedIn Article by Annelien Smets (2025): Overcoming the Paradox of Artificial Serendipity
The Serendipity Society
Serendipity Engine

Papers:
Smets (2025): Intended, afforded, and experienced serendipity: overcoming the paradox of artificial serendipity
Smets et al. (2022): Serendipity in Recommender Systems Beyond the Algorithm: A Feature Repository and Experimental Design
Binst et al. (2025): What Is Serendipity? An Interview Study to Conceptualize Experienced Serendipity in Recommender Systems
Ziarani et al. (2021): Serendipity in Recommender Systems: A Systematic Literature Review
Chen et al. (2021): Values of User Exploration in Recommender Systems
Smets et al. (2025): Why Do Recommenders Recommend? Three Waves of Research Perspectives on Recommender Systems
Smets (2023): Designing for Serendipity, a Means or an End?

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website

Data Skeptic
Eye Tracking in Recommender Systems

Data Skeptic

Play Episode Listen Later Dec 18, 2025 52:08


In this episode, Santiago de Leon takes us deep into the world of eye tracking and its revolutionary applications in recommender systems. As a researcher at the Kempelen Institute and Brno University, Santiago explains the mechanics of eye tracking technology - how it captures gaze data and processes it into fixations and saccades to reveal user browsing patterns. He introduces the groundbreaking RecGaze dataset, the first eye tracking dataset specifically designed for recommender systems research, which opens new possibilities for understanding how users interact with carousel interfaces like Netflix's. Through collaboration between psychologists and AI researchers, Santiago's work demonstrates how eye tracking can uncover insights about positional bias and user engagement that traditional click data misses. Beyond the technical aspects, Santiago addresses the ethical considerations surrounding eye tracking data, particularly concerning pupil data and privacy. He emphasizes the importance of questioning assumptions in recommender systems and shares practical advice for improving recommendation algorithms by understanding actual user behavior rather than relying solely on click patterns. Looking forward, Santiago discusses exciting future directions, including simulating user behavior using eye tracking data, addressing the cold start problem, and translating these findings to e-commerce applications. This conversation challenges researchers and practitioners to think more deeply about de-biasing clicks and leveraging eye tracking as a powerful tool to enhance user experience in recommendation systems.
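Turning raw gaze samples into fixations and saccades is commonly done with a dispersion-based algorithm such as I-DT. Below is a rough, illustrative Python sketch in that spirit; the thresholds, coordinates, and simple sliding-window logic are assumptions for illustration, not the RecGaze processing pipeline.

```python
def detect_fixations(gaze, max_dispersion=25.0, min_samples=5):
    """Split a gaze trace [(x, y), ...] into fixations (low-dispersion point
    clusters); the jumps between consecutive fixations are the saccades.
    Dispersion is (max_x - min_x) + (max_y - min_y), as in classic I-DT."""
    fixations, window = [], []
    for point in gaze:
        window.append(point)
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            # The new point broke the dispersion limit: close the fixation.
            if len(window) - 1 >= min_samples:
                done = window[:-1]
                cx = sum(p[0] for p in done) / len(done)
                cy = sum(p[1] for p in done) / len(done)
                fixations.append((cx, cy, len(done)))
            window = [point]
    if len(window) >= min_samples:
        cx = sum(p[0] for p in window) / len(window)
        cy = sum(p[1] for p in window) / len(window)
        fixations.append((cx, cy, len(window)))
    return fixations

# Two dwell points separated by one large jump (a saccade).
trace = [(100, 100)] * 6 + [(400, 300)] * 6
fixations = detect_fixations(trace)
assert len(fixations) == 2
```

Mapping fixation centroids onto carousel slot positions is what makes positional-bias analysis possible with data like this.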

Data Skeptic
Designing Recommender Systems for Digital Humanities

Data Skeptic

Play Episode Listen Later Nov 23, 2025 36:48


In this episode of Data Skeptic, we explore the fascinating intersection of recommender systems and digital humanities with guest Florian Atzenhofer-Baumgartner, a PhD student at Graz University of Technology. Florian is working on Monasterium.net, Europe's largest online collection of historical charters, containing millions of medieval and early modern documents from across the continent. The conversation delves into why traditional recommender systems fall short in the digital humanities space, where users range from expert historians and genealogists to art historians and linguists, each with unique research needs and information-seeking behaviors. Florian explains the technical challenges of building a recommender system for cultural heritage materials, including dealing with sparse user-item interaction matrices, the cold start problem, and the need for multi-modal similarity approaches that can handle text, images, metadata, and historical context. The platform leverages various embedding techniques and gives users control over weighting different modalities—whether they're searching based on text similarity, visual imagery, or diplomatic features like issuers and receivers. A key insight from Florian's research is the importance of balancing serendipity with utility, collection representation to prevent bias, and system explainability while maintaining effectiveness. The discussion also touches on unique evaluation challenges in non-commercial recommendation contexts, including Florian's "research funnel" framework that considers discovery, interaction, integration, and impact stages. Looking ahead, Florian envisions recommendation systems becoming standard tools for exploration across digital archives and cultural heritage repositories throughout Europe, potentially transforming how researchers discover and engage with historical materials. 
The new version of Monasterium.net, set to launch with enhanced semantic search and recommendation features, represents an important step toward making cultural heritage more accessible and discoverable for everyone.  
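The user-controlled weighting of modalities described above can be sketched generically: compute a similarity per modality embedding and blend them with the user's weights. This is an illustrative toy, not Monasterium.net's actual scoring code; the modality names, vectors, and weights are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def multimodal_score(query, candidate, weights):
    """Blend per-modality cosine similarities (e.g. text, image, metadata
    embeddings) using user-chosen weights, normalised to sum to 1."""
    total = sum(weights.values())
    return sum(w / total * cosine(query[m], candidate[m])
               for m, w in weights.items())

# Two toy charters: similar in text, dissimilar in visual appearance.
charter_a = {"text": [1.0, 0.0], "image": [0.0, 1.0]}
charter_b = {"text": [1.0, 0.1], "image": [1.0, 0.0]}
text_heavy = multimodal_score(charter_a, charter_b, {"text": 0.9, "image": 0.1})
image_heavy = multimodal_score(charter_a, charter_b, {"text": 0.1, "image": 0.9})
assert text_heavy > image_heavy
```

Exposing the weights to the user is what lets a linguist and an art historian get different rankings from the same archive.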

Data Skeptic
DataRec: A Library for Reproducibility in Recommender Systems

Data Skeptic

Play Episode Listen Later Nov 13, 2025 32:48


In this episode of Data Skeptic's Recommender Systems series, host Kyle Polich explores DataRec, a new Python library designed to bring reproducibility and standardization to recommender systems research. Guest Alberto Carlo Mario Mancino, a postdoc researcher from Politecnico di Bari, Italy, discusses the challenges of dataset management in recommendation research—from version control issues to preprocessing inconsistencies—and how DataRec provides automated downloads, checksum verification, and standardized filtering strategies for popular datasets like MovieLens, Last.fm, and Amazon reviews.  The conversation covers Alberto's research journey through knowledge graphs, graph-based recommenders, privacy considerations, and recommendation novelty. He explains why small modifications in datasets can significantly impact research outcomes, the importance of offline evaluation, and DataRec's vision as a lightweight library that integrates with existing frameworks rather than replacing them. Whether you're benchmarking new algorithms or exploring recommendation techniques, this episode offers practical insights into one of the most critical yet overlooked aspects of reproducible ML research.
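The download-and-verify pattern behind such reproducibility tooling is straightforward to sketch. Note this is a generic illustration of checksum-verified dataset fetching, not DataRec's actual API - the function name and signature here are invented.

```python
import hashlib
import urllib.request
from pathlib import Path

def fetch_dataset(url, dest, expected_sha256):
    """Download a dataset file once and verify its SHA-256 checksum, so every
    run of an experiment is guaranteed to start from byte-identical data.
    Raises if the file on disk does not match the pinned checksum."""
    dest = Path(dest)
    if not dest.exists():
        urllib.request.urlretrieve(url, dest)
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"checksum mismatch for {dest}: got {digest}")
    return dest
```

Pinning checksums catches exactly the silent dataset drift Alberto describes: a re-exported MovieLens file with a few changed rows would fail verification instead of quietly shifting benchmark results.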

Data Skeptic
Shilling Attacks on Recommender Systems

Data Skeptic

Play Episode Listen Later Nov 5, 2025 34:48


In this episode of Data Skeptic's Recommender Systems series, Kyle sits down with Aditya Chichani, a senior machine learning engineer at Walmart, to explore the darker side of recommendation algorithms. The conversation centers on shilling attacks—a form of manipulation where malicious actors create multiple fake profiles to game recommender systems, either to promote specific items or sabotage competitors. Aditya, who researched these attacks during his undergraduate studies at SPIT before completing his master's in computer science with a data science specialization at UC Berkeley, explains how these vulnerabilities emerge particularly in collaborative filtering systems. From promoting a friend's ska band on Spotify to inflating product ratings on e-commerce platforms, shilling attacks represent a significant threat in an industry where approximately 4% of reviews are fake, translating to $800 billion in annual sales in the US alone. The discussion delves deep into collaborative filtering, explaining both user-user and item-item approaches that create similarity matrices to predict user preferences. However, these systems face various shilling attacks of increasing sophistication: random attacks use minimal information with average ratings, while segmented attacks strategically target popular items (like Taylor Swift albums) to build credibility before promoting target items. Bandwagon attacks focus on highly popular items to connect with genuine users, and average attacks leverage item rating knowledge to appear authentic. User-user collaborative filtering proves particularly vulnerable, requiring as few as 500 fake profiles to impact recommendations, while item-item filtering demands significantly more resources. Aditya addresses detection through machine learning techniques that analyze behavioral patterns using methods like PCA to identify profiles with unusually high correlation and suspicious rating consistency. 
However, this remains an evolving challenge as attackers adapt strategies, now using large language models to generate more authentic-seeming fake reviews. His research with the MovieLens dataset tested detection algorithms against synthetic attacks, highlighting how these concerns extend to modern e-commerce systems. While companies rarely share attack and detection data publicly to avoid giving attackers advantages, academic research continues advancing both offensive and defensive strategies in recommender systems security.
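The mechanics of a bandwagon-style attack on user-user collaborative filtering can be shown on a toy rating matrix. All names and numbers below are invented for illustration; real attacks and detectors operate at far larger scale.

```python
import math

def cosine(u, v):
    """Cosine similarity between two users over their co-rated items."""
    common = [i for i in u if i in v]
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(ratings, user, item):
    """User-user CF: similarity-weighted average of neighbours' ratings."""
    num = den = 0.0
    for other, r in ratings.items():
        if other != user and item in r:
            s = cosine(ratings[user], r)
            num += s * r[item]
            den += abs(s)
    return num / den if den else 0.0

ratings = {
    "alice": {"popular": 5, "other": 2},
    "bob":   {"popular": 4, "target": 2},
    "carol": {"popular": 5, "target": 1},
}
before = predict(ratings, "alice", "target")

# Bandwagon attack: fake profiles rate the popular item highly to look
# similar to genuine users, then all push the target item.
for k in range(10):
    ratings[f"shill{k}"] = {"popular": 5, "target": 5}
after = predict(ratings, "alice", "target")
assert after > before
```

The shills dominate the neighbourhood precisely because rating the popular item makes them look like everyone else - which is also why detection methods look for suspiciously uniform rating patterns.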

Data Skeptic
Sustainable Recommender Systems for Tourism

Data Skeptic

Play Episode Listen Later Oct 9, 2025 38:02


In this episode, we speak with Ashmi Banerjee, a doctoral candidate at the Technical University of Munich, about her pioneering research on AI-powered recommender systems in tourism. Ashmi illuminates how these systems can address exposure bias while promoting more sustainable tourism practices through innovative approaches to data acquisition and algorithm design.  Key highlights include leveraging large language models for synthetic data generation, developing recommendation architectures that balance user satisfaction with environmental concerns, and creating frameworks that distribute tourism more equitably across destinations. Ashmi's insights offer valuable perspectives for both AI researchers and tourism industry professionals seeking to implement more responsible recommendation technologies.

Data Skeptic
Interpretable Real Estate Recommendations

Data Skeptic

Play Episode Listen Later Sep 22, 2025 32:57


In this episode of Data Skeptic's Recommender Systems series, host Kyle Polich interviews Dr. Kunal Mukherjee, a postdoctoral research associate at Virginia Tech, about the paper "Z-REx: Human-Interpretable GNN Explanations for Real Estate Recommendations". The discussion explores how the post-COVID real estate landscape has created a need for better recommendation systems that can introduce home buyers to emerging neighborhoods they might not know about. Dr. Mukherjee explains how his team developed a graph neural network approach that not only recommends properties but provides human-interpretable explanations for why certain regions are suggested. The conversation covers the advantages of using graph-based models over traditional recommendation systems, the importance of regional context in real estate features, and how co-click data from similar users can create more effective recommendations. Key topics include the distinction between model developer explanations and end-user explanations, the challenges of feature perturbation in recommendation systems, and how graph neural networks can discover novel pathways to emerging real estate markets that traditional models might miss.

RTÉ - Morning Ireland
O'Gorman calls for recommender systems on digital platforms to be turned off

RTÉ - Morning Ireland

Play Episode Listen Later Sep 11, 2025 6:51


Roderic O'Gorman, Green Party leader, outlines the issues that will be discussed at today's party Think-In.

Recsperts - Recommender Systems Experts
#29: Transformers for Recommender Systems with Craig Macdonald and Sasha Petrov

Recsperts - Recommender Systems Experts

Play Episode Listen Later Aug 27, 2025 97:25


In episode 29 of Recsperts, I welcome Craig Macdonald, Professor of Information Retrieval at the University of Glasgow, and Aleksandr "Sasha" Petrov, PhD researcher and former applied scientist at Amazon. Together, we dive deep into sequential recommender systems and the growing role of transformer models such as SASRec and BERT4Rec.

Our conversation begins with their influential replicability study of BERT4Rec, which revealed inconsistencies in reported results and highlighted the importance of training objectives over architecture tweaks. From there, Craig and Sasha guide us through their award-winning research on making transformers for sequential recommendation with large corpora both more effective and more efficient. We discuss how recency sampling (RSS) reduces training times dramatically, and how gSASRec overcomes the problem of overconfidence in models trained with negative sampling. By generalizing the sigmoid function (gBCE), they were able to reconcile cross-entropy-based optimization results with negative sampling, matching the effectiveness of softmax approaches while keeping training scalable for large corpora.

We also explore RecJPQ, their recent work on joint product quantization for item embeddings. This approach makes transformer-based sequential recommenders substantially faster at inference and far more memory-efficient for embeddings - while sometimes even improving effectiveness thanks to regularization effects. Towards the end, Craig and Sasha share their perspective on generative approaches like GPTRec, the promises and limits of large language models in recommendation, and what challenges remain for the future of sequential recommender systems.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(04:09) - About Craig Macdonald
(04:46) - About Sasha Petrov
(13:48) - Tutorial on Transformers for Sequential Recommendations
(19:24) - SASRec vs. BERT4Rec
(21:25) - Replicability Study of BERT4Rec for Sequential Recommendation
(32:52) - Training Sequential RecSys using Recency Sampling
(40:01) - gSASRec for Reducing Overconfidence by Negative Sampling
(01:00:51) - RecJPQ: Training Large-Catalogue Sequential Recommenders
(01:21:37) - Generative Sequential Recommendation with GPTRec
(01:29:12) - Further Challenges and Closing Remarks

Links from the Episode:
Craig Macdonald on LinkedIn
Sasha Petrov on LinkedIn
Sasha's Website
Tutorial: Transformers for Sequential Recommendation (ECIR 2024)
Tutorial Recording from ACM European Summer School in Bari (2024)
Talk: Neural Recommender Systems (European Summer School in Information Retrieval 2024)

Papers:
Kang et al. (2018): Self-Attentive Sequential Recommendation
Sun et al. (2019): BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer
Petrov et al. (2022): A Systematic Review and Replicability Study of BERT4Rec for Sequential Recommendation
Petrov et al. (2022): Effective and Efficient Training for Sequential Recommendation using Recency Sampling
Petrov et al. (2024): RSS: Effective and Efficient Training for Sequential Recommendation Using Recency Sampling (extended version)
Petrov et al. (2023): gSASRec: Reducing Overconfidence in Sequential Recommendation Trained with Negative Sampling
Petrov et al. (2025): Improving Effectiveness by Reducing Overconfidence in Large Catalogue Sequential Recommendation with gBCE loss
Petrov et al. (2024): RecJPQ: Training Large-Catalogue Sequential Recommenders
Petrov et al. (2024): Efficient Inference of Sub-Item Id-based Sequential Recommendation Models with Millions of Items
Rajput et al. (2023): Recommender Systems with Generative Retrieval
Petrov et al. (2023): Generative Sequential Recommendation with GPTRec
Petrov et al. (2024): Aligning GPTRec with Beyond-Accuracy Goals with Reinforcement Learning

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website

Disclaimer: Craig holds concurrent appointments as a Professor of Information Retrieval at University of Glasgow and as an Amazon Scholar. This podcast describes work performed at the University of Glasgow and is not associated with Amazon.
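The core idea of recency sampling is simple to illustrate: instead of always training on the last item of a user's sequence, sample the training target with probability that decays into the past, so recent interactions are seen more often without discarding older ones. The toy sketch below uses exponential decay in the spirit of RSS; the weighting scheme and alpha value are assumptions for illustration, not the paper's exact formulation.

```python
import random

def sample_target(sequence, alpha=0.8, rng=random):
    """Pick one item of the user's history as the training target, with
    probability decaying by `alpha` per step into the past; the prefix
    before it becomes the model input."""
    n = len(sequence)
    weights = [alpha ** (n - 1 - pos) for pos in range(n)]
    pos = rng.choices(range(n), weights=weights, k=1)[0]
    return sequence[:pos], sequence[pos]

rng = random.Random(0)
history = ["a", "b", "c", "d", "e"]
picks = [sample_target(history, rng=rng)[1] for _ in range(1000)]
# The most recent item is sampled most often, but older items still appear.
assert picks.count("e") > picks.count("a")
```

Because each pass over the data can now generate many (prefix, target) pairs per user, training converges in far fewer epochs than last-item-only training.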

Data Skeptic
Networks and Recommender Systems

Data Skeptic

Play Episode Listen Later Aug 17, 2025 17:45


Kyle reveals the next season's topic will be "Recommender Systems".  Asaf shares insights on how network science contributes to the recommender system field.

MLOps.community
LinkedIn Recommender System Predictive ML vs LLMs

MLOps.community

Play Episode Listen Later Aug 12, 2025 47:39


Demetrios chats with Arpita Vats about how LLMs are shaking up recommender systems. Instead of relying on hand-crafted features and rigid user clusters, LLMs can read between the lines - spotting patterns in user behavior and content like a human would. They cover the perks (less manual setup, smarter insights) and the pain points (latency, high costs), plus how mixing models might be the sweet spot. From timing content perfectly to knowing when traditional methods still win, this episode pulls back the curtain on the future of recommendations.

// Bio
Arpita Vats is a passionate and accomplished researcher in the field of Artificial Intelligence, with a focus on Natural Language Processing, Recommender Systems, and Multimodal AI. With a strong academic foundation and hands-on experience at leading tech companies such as LinkedIn, Meta, and Staples, Arpita has contributed to cutting-edge projects spanning large language models (LLMs), privacy-aware AI, and video content understanding. She has published impactful research at premier venues and actively serves as a reviewer for top-tier conferences like CVPR, ICLR, and KDD. Arpita's work bridges academic innovation with industry-scale deployment, making her a sought-after collaborator in the AI research community. Currently, she is engaged in exploring the alignment and safety of language models, developing robust metrics like the Alignment Quality Index (AQI), and optimizing model behavior across diverse input domains. Her dedication to advancing ethical and scalable AI reflects both in her academic pursuits and professional contributions.

// Related Links
#recommendersystems #LLMs #linkedin

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Arpita on LinkedIn: /arpita-v-0a14a422/

Timestamps:
[00:00] Smarter Content Recommendations
[05:19] LLMs: Next-Gen Recommendations
[09:37] Judging LLM Suggestions
[11:38] Old vs New Recommenders
[14:11] Why LLMs Get Stuck
[16:52] When Old Models Win
[22:39] After-Booking Rec Magic
[23:26] One LLM to Rule Models
[29:14] Personalization That Evolves
[32:39] SIM Beats Transformers in QA
[35:35] Agents Writing Research Papers
[37:12] Big-Company Agent Failures
[41:47] LinkedIn Posts Fade Faster
[46:04] Clustering Shifts Social Feeds
[47:01] Vanishing Posts, Replay Mode

Outrage Overload
62. Reducing Online Outrage: What Role Can Community-Driven Algorithms Play - Paul Resnick

Outrage Overload

Play Episode Listen Later Apr 16, 2025 35:04


How Systems Like Community Notes on Twitter/X Aim to Break the Cycle of Misinformation

Are social media algorithms fueling misinformation and deepening echo chambers - or can they help bridge divides? In this episode, we talk with Paul Resnick, a pioneer in recommender systems and digital trust, about how platforms curate content, the truth behind filter bubbles, and whether fact-checking tools like Community Notes on Twitter (X) can cut through the noise. Can algorithms be redesigned to reduce outrage instead of amplifying it? Tune in to find out!

Text me your feedback and leave your contact info if you'd like a reply (this is a one-way text). Thanks, David

Support the show

Show Notes: https://outrageoverload.net/
Follow me, David Beckemeyer, on Twitter @mrblog or email outrageoverload@gmail.com. Follow the show on Twitter @OutrageOverload or Instagram @OutrageOverload. We are also on Facebook /OutrageOverload.
HOTLINE: 925-552-7885
Got a question, comment, or just thoughts you'd like to share? Call the OO hotline and leave a message and you could be featured in an upcoming episode.
If you would like to help the show, you can contribute here. Tell everyone you know about the show. That's the best way to support it.
Rate and review the show on Podchaser: https://www.podchaser.com/OutrageOverload
Intro music and outro music by Michael Ramir C.
Many thanks to my co-editor and co-director, Austin Chen.

Recsperts - Recommender Systems Experts
#28: Multistakeholder Recommender Systems with Robin Burke

Recsperts - Recommender Systems Experts

Play Episode Listen Later Apr 15, 2025 95:07


In episode 28 of Recsperts, I sit down with Robin Burke, professor of information science at the University of Colorado Boulder and a leading expert with over 30 years of experience in recommender systems. Together, we explore multistakeholder recommender systems, fairness, transparency, and the role of recommender systems in the age of evolving generative AI.

We begin by tracing the origins of recommender systems, traditionally built around user-centric models. However, Robin challenges this perspective, arguing that all recommender systems are inherently multistakeholder, serving not just consumers as the recipients of recommendations, but also content providers, platform operators, and other key players with partially competing interests. He explains why the common "Recommended for You" label is, at best, an oversimplification and how greater transparency is needed to show how stakeholder interests are balanced.

Our conversation also delves into practical approaches for handling multiple objectives, including reranking strategies versus integrated optimization. While embedding multistakeholder concerns directly into models may be ideal, reranking offers a more flexible and efficient alternative, reducing the need for frequent retraining.

Towards the end of our discussion, we explore post-userism and the impact of generative AI on recommendation systems. With AI-generated content on the rise, Robin raises a critical concern: if recommendation systems remain overly user-centric, generative content could marginalize human creators, diminishing their revenue streams.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:24) - About Robin Burke and First Recommender Systems
(26:07) - From Fairness and Advertising to Multistakeholder RecSys
(34:10) - Multistakeholder RecSys Terminology
(40:16) - Multistakeholder vs. Multiobjective
(42:43) - Reciprocal and Value-Aware RecSys
(59:14) - Objective Integration vs. Reranking
(01:06:31) - Social Choice for Recommendations under Fairness
(01:17:40) - Post-Userist Recommender Systems
(01:26:34) - Further Challenges and Closing Remarks

Links from the Episode:
Robin Burke on LinkedIn
Robin's Website
That Recommender Systems Lab
Reference to Broder's Keynote on Computational Advertising and Recommender Systems from RecSys 2008
Multistakeholder Recommender Systems (from Recommender Systems Handbook), chapter by Himan Abdollahpouri & Robin Burke
POPROX: The Platform for OPen Recommendation and Online eXperimentation
AltRecSys 2024 (Workshop at RecSys 2024)

Papers:
Burke et al. (1996): Knowledge-Based Navigation of Complex Information Spaces
Burke (2002): Hybrid Recommender Systems: Survey and Experiments
Resnick et al. (1997): Recommender Systems
Goldberg et al. (1992): Using collaborative filtering to weave an information tapestry
Linden et al. (2003): Amazon.com Recommendations - Item-to-Item Collaborative Filtering
Aird et al. (2024): Social Choice for Heterogeneous Fairness in Recommendation
Aird et al. (2024): Dynamic Fairness-aware Recommendation Through Multi-agent Social Choice
Burke et al. (2024): Post-Userist Recommender Systems: A Manifesto
Baumer et al. (2017): Post-userism
Burke et al. (2024): Conducting Recommender Systems User Studies Using POPROX

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website
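Of the approaches the episode contrasts, reranking is the easiest to illustrate in a few lines. The following is a hypothetical sketch (not code from the episode) of a greedy reranker that trades consumer-side relevance against provider exposure, one simple way to operationalize a multistakeholder objective; the function names, weights, and data are all illustrative assumptions.

```python
# Hypothetical sketch: rerank a candidate list so that providers who
# have already received exposure are discounted. Names, weights, and
# data are illustrative assumptions, not the episode's method.

def rerank(candidates, provider_exposure, alpha=0.7):
    """Greedily pick items, penalizing providers already well exposed.

    candidates: list of (item_id, provider_id, relevance) tuples
    provider_exposure: dict of provider_id -> prior exposure count
    alpha: weight on relevance vs. provider fairness (assumed value)
    """
    exposure = dict(provider_exposure)
    reranked = []
    remaining = list(candidates)
    while remaining:
        # Score = relevance minus an exposure penalty for the provider.
        best = max(
            remaining,
            key=lambda c: alpha * c[2] - (1 - alpha) * exposure.get(c[1], 0),
        )
        remaining.remove(best)
        reranked.append(best[0])
        exposure[best[1]] = exposure.get(best[1], 0) + 1
    return reranked

items = [("a", "p1", 0.9), ("b", "p1", 0.8), ("c", "p2", 0.7)]
print(rerank(items, {"p1": 0, "p2": 0}))  # → ['a', 'c', 'b']
```

With alpha closer to 1 the list approaches pure relevance ranking; lower values spread exposure across providers. Integrated optimization would instead bake such terms into the training objective, at the cost of retraining whenever the balance changes.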

Zukunft Denken – Podcast
121 — Künstliche Unintelligenz

Zukunft Denken – Podcast

Play Episode Listen Later Apr 2, 2025 23:48


This episode centers on the notion of »Künstliche Unintelligenz« (artificial unintelligence), a term taken from a Spectator article: Britain has become a pioneer in Artificial Unintelligence. What exactly lies behind this idea? »Artificial Unintelligence is the means by which people of perfectly adequate natural intelligence are transformed by policies, procedures and protocols into animate but inflexible cogs. They speak and behave, but do not think or decide.«

How do people of natural intelligence turn into mere inflexible cogs? We reflect on the increasing structuring and standardization in organizations, introduced to cope with growing societal complexity. One starting point of the episode is the question of why we experience structural and individual incompetence in more and more organizations. A quote from the article sums it up aptly: »'I didn't find anything in common in these cases,' I said, 'except the stupidity of your staff.' I expected him to get angry, but he maintained a Buddha-like calm. 'Oh, I know,' he replied, 'but that is the standard expected now.'«

How did it come to this? Is it down to industrialization, which, according to Dan Davies in The Unaccountability Machine, means: »A very important consequence of industrialisation is that it breaks the connection between the worker and the product.« Or does it have to do with how we deal with being overwhelmed by information? »When people are overwhelmed by information, they always react in the same way – by building systems.« Are people who think for themselves more of a hindrance than a help in such systems? But what happens when complex problems arise that demand flexibility and creativity? Are our organizations still capable of dealing with unexpected situations at all, or do they merely work »machine-like« to specification, and with a nineteenth-century understanding of machines at that?

Is the stagnation we have felt for decades a symptom of this systemic failure? And how does this relate to the »Unaccountability Machine« that Davies describes, which in German one might call a »Verantwortungslosigkeits-Maschine«? Could it even be that some structures are deliberately designed as a »self-organising control fraud«?

A further related topic: how do modern prediction tools such as recommender systems influence our behavior? Do they really serve to enable better decisions, or do they mainly make us more predictable? »People who bought/watched this and that also bought/watched that« – is this still prediction, or already the shaping of taste? And what about scientific models of complex systems, which often deliver fairly arbitrary results? Do they not also shape the opinions of scientists, politicians, and society, for instance through the simplistic media coverage observable everywhere? Moreover, does the human really remain »in the loop«, as is often claimed, or has he long since become an »artificial unintelligent man in the loop« who can hardly question the system's recommendations?

The episode also takes a critical look at naive ideologies such as the »Scientific World Management« of Alfred Korzybski, who wrote: "it will give a scientific foundation to Political Economy and transform so-called 'scientific shop management' into genuine 'scientific world management.'" Was this wish understandable after the First World War, but ultimately entirely misguided? And why are we seeing a resurgence of naive scientism today, which believes that »science« delivers objective answers? How do such ideas connect to phenomena like »Science Diplomacy«?

The central question of the episode is: how do we get people in positions of responsibility to decide correctly, in line with the organization's defined purpose? But what is the purpose of a system in the first place? Stafford Beer says: »The purpose of a system is what it does.« Does the defined purpose, say health in the healthcare system, still match reality? Why do doctors often decide defensively in their own interest rather than in the interest of patients? And how does this behavior carry over to other organizations, from ministries to science? Davies describes this using the example of academic publishing: "A not-wholly-unfair analysis of academic publishing would be that it is an industry in which academics compete against one another for the privilege of providing free labour for a profitmaking company, which then sells the results back to them at monopoly prices." And further: "The truly valuable output of the academic publishing industry is not journals, but citations." What has become of the idea that generating new and relevant knowledge is the task of science, funding, and publishing?

To close, I ask: how can systems be designed so that responsibility is taken again? How do we balance the attribution of consequences with the possibility of failing honestly, without stifling innovation? And what are »Luxury Beliefs«, those fashionable ideas of elite circles that their proponents never have to bear themselves, while for others they become an existential threat? The episode thus ends with a call for discussion: how do we resolve this balancing act between responsibility and risk in an ever more complex world?

References

Other episodes:
Episode 119: Spy vs Spy: Über künstlicher Intelligenz und anderen Agenten
Episode 118: Science and Decision Making under Uncertainty, A Conversation with Prof. John Ioannidis
Episode 117: Der humpelnde Staat, ein Gespräch mit Prof. Christoph Kletzer
Episode 116: Science and Politics, A Conversation with Prof. Jessica Weinkle
Episode 106: Wissenschaft als Ersatzreligion? Ein Gespräch mit Manfred Glauninger
Episode 103: Schwarze Schwäne in Extremistan; die Welt des Nassim Taleb, ein Gespräch mit Ralph Zlabinger
Episode 93: Covid. Die unerklärliche Stille nach dem Sturm. Ein Gespräch mit Jan David Zimmermann
Episode 91: Die Heidi-Klum-Universität, ein Gespräch mit Prof. Ehrmann und Prof. Sommer
Episode 84: (Epistemische) Krisen? Ein Gespräch mit Jan David Zimmermann

Subject references:
Britain has become a pioneer in Artificial Unintelligence | The Spectator (2025)
Davies, Dan. The Unaccountability Machine: Why Big Systems Make Terrible Decisions - and How The World Lost its Mind, Profile Books (2024)
Alfred Korzybski, Manhood of Humanity (1921)
Jessica Weinkle, What is Science Diplomacy (2025)
Nassim Taleb, Skin in the Game, Penguin (2018)
Rob Henderson, 'Luxury beliefs' are latest status symbol for rich Americans, New York Post (2019)
Lorraine Daston, Rules, Princeton Univ. Press (2023)

Recsperts - Recommender Systems Experts
#27: Recommender Systems at the BBC with Alessandro Piscopo and Duncan Walker

Recsperts - Recommender Systems Experts

Play Episode Listen Later Mar 19, 2025 87:44


In episode 27 of Recsperts, we meet Alessandro Piscopo, Lead Data Scientist in Personalization and Search, and Duncan Walker, Principal Data Scientist in the iPlayer Recommendations Team, both from the BBC. We discuss how the BBC personalizes recommendations across different offerings like news or video and audio content recommendations. We learn about the core values of the oldest public service media organization and the collaboration with editors in that process.

The BBC once started with short video recommendations for BBC+ and nowadays has to consider recommendations across multiple domains: news, the iPlayer, BBC Sounds, BBC Bitesize, and more. With a reach of about 500M+ users who access services every week, there is huge potential. My guests discuss the challenges of aligning recommendations with public service values and the role of editors, with constant exchange, alignment, and learning between the algorithmic and editorial lines of recommender systems.

We also discuss the potential of cross-domain recommendations to leverage content across different products, as well as the organizational setup of teams working on recommender systems at the BBC. We learn about skews in the data due to the nature of an online service that also has a linear offering with TV and radio services.

Towards the end, we also touch a bit on QUARE @ RecSys, the Workshop on Measuring the Quality of Explanations in Recommender Systems.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:10) - About Alessandro Piscopo and Duncan Walker
(14:53) - RecSys Applications at the BBC
(20:22) - Journey of Building Public Service Recommendations
(28:02) - Role and Implementation of Public Service Values
(36:52) - Algorithmic and Editorial Recommendation
(01:01:54) - Further RecSys Challenges at the BBC
(01:15:53) - QUARE Workshop
(01:23:27) - Closing Remarks

Links from the Episode:
Alessandro Piscopo on LinkedIn
Duncan Walker on LinkedIn
BBC
QUARE @ RecSys 2023 (2nd Workshop on Measuring the Quality of Explanations in Recommender Systems)

Papers:
Clarke et al. (2023): Personalised Recommendations for the BBC iPlayer: Initial approach and current challenges
Boididou et al. (2021): Building Public Service Recommenders: Logbook of a Journey
Piscopo et al. (2019): Data-Driven Recommendations in a Public Service Organisation

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website

Kent State College of Business
#25: Breakdown - Recommender systems impact on Platform's content and outcomes: the role of providers and algorithm designs

Kent State College of Business

Play Episode Listen Later Mar 7, 2025 10:59


The study aims to investigate how recommender systems shape providers' dynamics and content offerings on platforms, and to provide insights into algorithm designs for achieving better outcomes in platform design. The study reveals that recommender systems have the potential to introduce biases in providers' understanding of user preferences, thereby impacting the variety of offerings on platforms. Moreover, it identifies algorithm design as a critical factor, with item-based collaborative filters showcasing superior performance in contexts where customers exhibit selectivity. Conversely, user-based models prove more effective in scenarios where recommendations significantly sway user decisions, ultimately boosting sales. Authors: Mohammadi Darani, Milad, and Sina Aghaie
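The distinction between the two algorithm designs the study compares can be sketched in a few lines. The toy interaction matrix and scoring below are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of the two collaborative filtering designs the study
# compares; the interaction matrix and scoring are illustrative
# assumptions, not the paper's implementation.
import numpy as np

def cosine_sim(M):
    """Pairwise cosine similarity between the rows of M."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # guard against empty rows
    N = M / norms
    return N @ N.T

# users x items implicit-feedback matrix (1 = interacted)
R = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
], dtype=float)

user_sim = cosine_sim(R)    # user-based: find users with similar histories
item_sim = cosine_sim(R.T)  # item-based: find items consumed together

# Score all items for user 0 under each design.
user_based_scores = user_sim[0] @ R   # neighbors' histories, weighted by user similarity
item_based_scores = R[0] @ item_sim   # user's own history, weighted by item similarity
print(user_based_scores)
print(item_based_scores)
```

The two designs read the same matrix along different axes, which is why, as the study finds, their relative performance depends on how selective users are and how strongly recommendations sway decisions.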

Recsperts - Recommender Systems Experts
#26: Diversity in Recommender Systems with Sanne Vrijenhoek

Recsperts - Recommender Systems Experts

Play Episode Listen Later Feb 19, 2025 95:42


In episode 26 of Recsperts, I speak with Sanne Vrijenhoek, a PhD candidate at the University of Amsterdam's Institute for Information Law and the AI, Media & Democracy Lab. Sanne's research explores diversity in recommender systems, particularly in the news domain, and its connection to democratic values and goals.

We dive into four of her papers, which focus on how diversity is conceptualized in news recommender systems. Sanne introduces us to five rank-aware divergence metrics for measuring normative diversity and explains why diversity evaluation shouldn't be approached blindly: first, we need to clarify the underlying values. She also presents a normative framework for these metrics, linking them to different perspectives from democratic theory. Beyond evaluation, we discuss how to optimize diversity in recommender systems and reflect on missed opportunities, such as the RecSys Challenge 2024, which could have gone beyond accuracy-chasing. Sanne also shares her recommendations for improving the challenge by incorporating objectives such as diversity.

During our conversation, Sanne shares insights on effectively communicating recommender systems research to non-technical audiences. To wrap up, we explore ideas for fostering a more diverse RecSys research community, integrating perspectives from multiple disciplines.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:24) - About Sanne Vrijenhoek
(14:49) - What Does Diversity in RecSys Mean?
(26:32) - Assessing Diversity in News Recommendations
(34:54) - Rank-Aware Divergence Metrics to Measure Normative Diversity
(01:01:37) - RecSys Challenge 2024 - Recommendations for the Recommenders
(01:11:23) - RecSys Workshops - NORMalize and AltRecSys
(01:15:39) - On the Different Conceptualizations of Diversity in RecSys
(01:28:38) - Closing Remarks

Links from the Episode:
Sanne Vrijenhoek on LinkedIn
Informfully
MIND: MIcrosoft News Dataset
RecSys Challenge 2024
NORMalize 2023: The First Workshop on the Normative Design and Evaluation of Recommender Systems
NORMalize 2024: The Second Workshop on the Normative Design and Evaluation of Recommender Systems
AltRecSys 2024: The AltRecSys Workshop on Alternative, Unexpected, and Critical Ideas in Recommendation

Papers:
Vrijenhoek et al. (2021): Recommenders with a Mission: Assessing Diversity in News Recommendations
Vrijenhoek et al. (2022): RADio – Rank-Aware Divergence Metrics to Measure Normative Diversity in News Recommendations
Heitz et al. (2024): Recommendations for the Recommenders: Reflections on Prioritizing Diversity in the RecSys Challenge
Vrijenhoek et al. (2024): Diversity of What? On the Different Conceptualizations of Diversity in Recommender Systems
Helberger (2019): On the Democratic Role of News Recommenders
Steck (2018): Calibrated Recommendations

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website
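One related diversity notion, calibrated recommendations (Steck, 2018, cited in the episode notes), compares the category distribution of a recommended list with the user's history. A minimal sketch under assumed toy data, using a smoothed KL divergence:

```python
# Hypothetical sketch in the spirit of calibrated recommendations
# (Steck, 2018): compare the category distribution of a recommended
# list with the user's reading history. Categories, lists, and the
# smoothing constant are toy assumptions.
import math
from collections import Counter

def category_dist(items, categories):
    """Normalized category distribution of a list of item IDs."""
    counts = Counter(categories[i] for i in items)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q), smoothed so categories missing from q stay finite."""
    return sum(
        pc * math.log((pc + eps) / (q.get(c, 0.0) + eps))
        for c, pc in p.items()
        if pc > 0
    )

categories = {"a": "news", "b": "news", "c": "sports", "d": "culture"}
history = ["a", "b", "c"]   # 2/3 news, 1/3 sports
recs = ["a", "b"]           # all news: miscalibrated w.r.t. history

p = category_dist(history, categories)
q = category_dist(recs, categories)
print(kl_divergence(p, q))  # larger value = less calibrated list
```

The rank-aware metrics discussed in the episode go further by weighting positions in the list, but the core idea of comparing distributions is the same.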

Filosofie in actie
In gesprek met David Graus

Filosofie in actie

Play Episode Listen Later Nov 27, 2024 61:13


In this episode, Piek talks with data scientist David Graus about AI and ethics, how recommender systems work in different domains (including news media, HR, and e-commerce), and the development David has undergone through his research and work: from activist to, in his own words, more realistic, in which algorithms should be seen as a tool within a broader context. Isa joins the conversation.
More about David and LinkedIn
The following publications come up in this episode:
A critical review of filter bubbles and a comparison with selective exposure – Peter M. Dahlgren (2021)
Wij zijn racisten, daarom Google ook – Maarten de Rijke & David Graus (2016)
Justice as Fairness: Political not Metaphysical – John Rawls (1985)
The Ethical Algorithm: The Science of Socially Aware Algorithm Design – Michael Kearns & Aaron Roth (2019)
Entities of Interest – Discovery in Digital Traces – David Graus (2017)
---------------------------------------------------------------
This conversation was recorded on 31 October 2024.
Host: Piek Knijff
Editing: Team Filosofie in actie
Studio and post-production: De Podcasters
Tune: Uma van Wingerden
Artwork: Hans Bastmeijer – Servion Studio
Want to keep the conversation going? You can! Get in touch via info@filosofieinactie.nl. Want to know more about Filosofie in actie and our work? Visit our website or follow our LinkedIn page.

Engineering Kiosk
#149 Recommender Systems: Funktionsweise und Forschungstrends mit Eva Zangerle

Engineering Kiosk

Play Episode Listen Later Nov 12, 2024 71:03


Recommender systems: what is behind modern recommendation algorithms? Modern recommendation algorithms are everywhere in everyday life: the next series on Netflix, the "playlist made for you" on Spotify, or "customers who bought this item also bought" on Amazon. In the age of AI, we might assume this is all black magic, but as a rule the recommendations follow certain logics. This entire research field is called "recommender systems", and it is the topic of this episode. Prof. Dr. Eva Zangerle, an expert in recommender systems, explains what recommender systems actually are, which fundamental approaches to recommendation algorithms exist, how much data is needed to produce meaningful results, what the cold-start problem is, how researchers can evaluate whether recommendations are good or bad, what the terms recall and precision actually mean, whether recommendation algorithms can develop a certain bias, and which trends are currently shaping the research field. Quick feedback on the episode:
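The recall and precision measures explained in the episode can be written down in a few lines; the item IDs and cutoff below are toy assumptions:

```python
# Minimal sketch of precision and recall at a cutoff k, as discussed
# in the episode; the item IDs are toy assumptions.

def precision_recall_at_k(recommended, relevant, k):
    """Fraction of top-k hits (precision) and of relevant items found (recall)."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

recommended = ["a", "b", "c", "d", "e"]  # ranked model output
relevant = {"b", "d", "f"}               # held-out ground truth
print(precision_recall_at_k(recommended, relevant, k=4))  # hits are b and d
```

Precision asks how much of what was shown was relevant; recall asks how much of what was relevant was shown, which is why the two are usually reported together.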

The Behavioral Design Podcast
Recommender Systems with Carey Morewedge

The Behavioral Design Podcast

Play Episode Listen Later Oct 23, 2024 59:11


In this episode of the Behavioral Design Podcast, we delve into the world of AI recommender systems with special guest Carey Morewedge, a leading expert in behavioral science and AI. The discussion covers the fundamental mechanics behind AI recommendation systems, including content-based filtering, collaborative filtering, and hybrid models. Carey explains how platforms like Netflix, Twitter, and TikTok use implicit data to make predictions about user preferences, and how these systems often prioritize short-term engagement over long-term satisfaction. The episode also touches on ethical concerns, such as the gap between revealed and normative preferences, and the risks of relying too much on algorithms without considering the full context of human behavior. Join co-hosts Aline Holzwarth and Samuel Salzer as they, together with Carey, explore the delicate balance between human preferences and algorithmic influence. This episode is a must-listen for anyone interested in understanding the complexities of AI-driven recommendations!

LINKS:
Carey Morewedge: Google Scholar Profile
Carey Morewedge - LinkedIn
Boston University Faculty Page
Personal Website
Understanding AI Recommender Systems: How Netflix's Recommendation System Works
Implicit Feedback for Recommender Systems (Research Paper)
Why People Don't Trust Algorithms (Harvard Business Review)
Nuance Behavior Website

TIMESTAMPS:
00:00 The 'Do But Not Recommend' Game
07:53 The Complexity of Recommender Systems
08:58 Types of Recommender Systems
12:08 Introducing Carey Morewedge
14:13 Understanding Decision Making in AI
17:00 Challenges in AI Recommendations
32:13 Long-Term Impact on User Behavior
33:00 Understanding User Preferences
35:03 Challenges with A/B Testing
40:06 Algorithm Aversion
46:51 Quickfire Round: To AI or Not to AI
52:55 The Future of AI and Human Relationships

Interested in collaborating with Nuance? If you'd like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com. Support the podcast by joining Habit Weekly Pro.
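The three system families the episode covers (content-based, collaborative, and hybrid) can be sketched as a weighted blend; every vector, similarity value, and the blend weight below is an illustrative assumption:

```python
# Toy sketch of content-based scoring, item-based collaborative
# scoring, and the simplest hybrid: a weighted blend of the two.
# All data and the weight w are illustrative assumptions.
import numpy as np

def content_score(user_profile, item_features):
    # Content-based: how well the item's features match the user's taste vector.
    return float(item_features @ user_profile)

def collaborative_score(user_ratings, item_sim_row):
    # Item-based collaborative: weight past ratings by similarity
    # between each rated item and the target item.
    return float(user_ratings @ item_sim_row)

def hybrid_score(cb, cf, w=0.5):
    # Hybrid: blend the two signals with a tunable weight.
    return w * cb + (1 - w) * cf

user_profile = np.array([0.8, 0.2])       # prefers genre 0
item_features = np.array([1.0, 0.0])      # candidate item is pure genre 0
user_ratings = np.array([5.0, 1.0, 0.0])  # ratings of three past items
item_sim_row = np.array([0.9, 0.1, 0.0])  # similarity of those items to the candidate

cb = content_score(user_profile, item_features)
cf = collaborative_score(user_ratings, item_sim_row)
print(hybrid_score(cb, cf, w=0.5))
```

Content-based scores need no other users but can trap people in what they already like; collaborative scores generalize from the crowd but fail for new items; a hybrid is one common way to trade the two off.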

Recsperts - Recommender Systems Experts
#25: RecSys 2024 Special

Recsperts - Recommender Systems Experts

Play Episode Listen Later Oct 12, 2024 39:39


In episode 25, we talk about the upcoming ACM Conference on Recommender Systems 2024 (RecSys) and welcome a former guest to geek out about the conference.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(01:56) - Overview RecSys 2024
(07:01) - Contribution Stats
(09:37) - Interview

Links from the Episode:
RecSys 2024 Conference Website

Papers:
RecSys '24: Proceedings of the 18th ACM Conference on Recommender Systems

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website

MLOps.community
Making Your Company LLM-native // Francisco Ingham // #266

MLOps.community

Play Episode Listen Later Oct 6, 2024 57:54


Francisco Ingham, LLM consultant, NLP developer, and founder of Pampa Labs. Making Your Company LLM-native // MLOps Podcast #266 with Francisco Ingham, Founder of Pampa Labs.

// Abstract
Being LLM-native is becoming one of the key differentiators among companies in vastly different verticals. Everyone wants to use LLMs, and everyone wants to be on top of the current tech, but what does it really mean to be LLM-native? LLM-native involves two ends of a spectrum. On the one hand, there is the product or service that the company offers, which surely offers many automation opportunities. LLMs can be applied strategically to scale at a lower cost and offer a better experience for users. But being LLM-native not only involves the company's customers; it also involves every stakeholder in the company's operations. How can employees integrate LLMs into their daily workflows? How can we as developers leverage the advancements in the field not only as builders but as adopters? We will tackle these and other key questions for anyone looking to capitalize on the LLM wave, prioritizing real results over the hype.

// Bio
Currently working at Pampa Labs, where we help companies become AI-native and build AI-native products. Our expertise lies on the LLM-science side, or how to build a successful data flywheel to leverage user interactions to continuously improve the product. We also spearhead pampa-friends, the first Spanish-speaking community of AI Engineers. Previously worked in management consulting, was a TA in fastai in SF, and led the cross-AI + dev tools team at Mercado Libre.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: pampa.ai

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Francisco on LinkedIn: https://www.linkedin.com/in/fpingham/

Timestamps:
[00:00] Francisco's preferred coffee
[00:13] Takeaways
[00:37] Please like, share, leave a review, and subscribe to our MLOps channels!
[00:51] A Literature Geek
[02:41] LLM-native company
[03:54] Integrating LLMs in workflows
[07:21] Unexpected LLM applications
[10:38] LLMs in the development process
[14:00] Vibe check to evaluation
[15:36] Experiment tracking optimizations
[20:22] LLMs as judges discussion
[24:43] Automated presentations for podcasts
[27:48] AI operating system and agents
[31:29] Importance of SEO expertise
[35:33] Experimentation and evaluation
[39:20] AI integration strategies
[41:50] RAG approach spectrum analysis
[44:40] Search vs Retrieval in AI
[49:02] Recommender Systems vs RAG
[52:08] LLMs in recommender systems
[53:10] LLM interface design insights

Recsperts - Recommender Systems Experts
#24: Video Recommendations at Facebook with Amey Dharwadker

Recsperts - Recommender Systems Experts

Play Episode Listen Later Oct 1, 2024 81:20


In episode 24 of Recsperts, I sit down with Amey Dharwadker, Machine Learning Engineering Manager at Facebook, to dive into the complexities of large-scale video recommendations. Amey, who leads the Video Recommendations Quality Ranking team at Facebook, sheds light on the intricate challenges of delivering personalized video feeds at scale. Our conversation covers content understanding, user interaction data, real-time signals, exploration, and evaluation techniques.

We kick off the episode by reflecting on the inaugural VideoRecSys workshop at RecSys 2023, setting the stage for a deeper discussion of Facebook's approach to video recommendations. Amey walks us through the critical challenges they face, such as gathering reliable user feedback signals to avoid pitfalls like watchbait. With a vast and ever-growing corpus of billions of videos, millions of which are added each month, the cold start problem looms large. We explore how content understanding, user feedback aggregation, and exploration techniques help address this issue. Amey explains how engagement metrics like watch time, comments, and reactions are used to rank content, ensuring users receive meaningful and diverse video feeds.

A key highlight of the conversation is the importance of real-time personalization in fast-paced environments, such as short-form video platforms, where user preferences change quickly. Amey also emphasizes the value of cross-domain data in enriching user profiles and improving recommendations.

Towards the end, Amey shares his insights on leadership in machine learning teams, pointing out the characteristics of a great ML team.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(02:32) - About Amey Dharwadker
(08:39) - Video Recommendation Use Cases on Facebook
(16:18) - Recommendation Teams and Collaboration
(25:04) - Challenges of Video Recommendations
(31:07) - Video Content Understanding and Metadata
(33:18) - Multi-Stage RecSys and Models
(42:42) - Goals and Objectives
(49:04) - User Behavior Signals
(59:38) - Evaluation
(01:06:33) - Cross-Domain User Representation
(01:08:49) - Leadership and What Makes a Great Recommendation Team
(01:13:01) - Closing Remarks

Links from the Episode:
Amey Dharwadker on LinkedIn
Amey's Website
RecSys Challenge 2021
VideoRecSys Workshop 2023
VideoRecSys + LargeRecSys 2024

Papers:
Mahajan et al. (2023): CAViaR: Context Aware Video Recommendations
Mahajan et al. (2023): PIE: Personalized Interest Exploration for Large-Scale Recommender Systems
Raul et al. (2023): CAM2: Conformity-Aware Multi-Task Ranking Model for Large-Scale Recommender Systems
Zhai et al. (2024): Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations
Saket et al. (2023): Formulating Video Watch Success Signals for Recommendations on Short Video Platforms
Wang et al. (2022): Surrogate for Long-Term User Experience in Recommender Systems
Su et al. (2024): Long-Term Value of Exploration: Measurements, Findings and Algorithms

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website

CAISzeit – In welcher digitalen Gesellschaft wollen wir leben?
Ist Gerechtigkeit programmierbar? Fairness und Transparenz in Algorithmen.

CAISzeit – In welcher digitalen Gesellschaft wollen wir leben?

Play Episode Listen Later Aug 27, 2024 38:39


Algorithms shape our lives: from the content we see on social media to the loans we are granted. But to what extent are algorithms fair and transparent? And what consequences can it have when they are not? Is justice programmable? We discuss these questions and more in this episode of CAISzeit with Miriam Fahimi. Miriam is a fellow at CAIS from April to September 2024 and is currently pursuing her PhD in Science and Technology Studies at the Digital Age Research Center (D!ARC) at the University of Klagenfurt. She researches fairness in algorithms and spent more than one and a half years inside a credit company observing how transparent and fair algorithms are discussed there.

Recommendations on the topic

Research:
· Digital Age Research Center (D!ARC), University of Klagenfurt. https://www.aau.at/digital-age-research-center/
· Meisner, C., Duffy, B. E., & Ziewitz, M. (2022). The labor of search engine evaluation: Making algorithms more human or humans more algorithmic? New Media & Society. https://doi.org/10.1177/14614448211063860
· Poechhacker, N., Burkhardt, M., & Passoth, J.-H. (2024). 10. Recommender Systems beyond the Filter Bubble: Algorithmic Media and the Fabrication of Publics. In J. Jarke, B. Prietl, S. Egbert, Y. Boeva, H. Heuer, & M. Arnold (Hrsg.), Algorithmic Regimes (S. 207–228). Amsterdam University Press. https://doi.org/10.1515/9789048556908-010

Popular science literature:
· Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
· Kate Crawford's website. https://katecrawford.net

Documentary:
· Coded Bias (German title: Vorprogrammierte Diskriminierung; available on Netflix): this documentary examines the biases in algorithms that MIT Media Lab researcher Joy Buolamwini uncovered in facial recognition systems. https://www.netflix.com/de/title/81328723

Newsletter:
· AI Snake Oil by Arvind Narayanan & Sayash Kapoor. https://www.aisnakeoil.com
· Ticker from D64 – Zentrum für Digitalen Fortschritt: https://kontakt.d-64.org/ticker/

Recsperts - Recommender Systems Experts
#23: Generative Models for Recommender Systems with Yashar Deldjoo

Recsperts - Recommender Systems Experts

Play Episode Listen Later Aug 16, 2024 114:58


In episode 23 of Recsperts, we welcome Yashar Deldjoo, Assistant Professor at the Polytechnic University of Bari, Italy. Yashar's research on recommender systems spans multimodal approaches and multimedia recommender systems as well as trustworthiness and adversarial robustness, areas in which he has published extensively. We discuss the evolution of generative models for recommender systems: their modeling paradigms and scenarios as well as their evaluation, risks, and harms.

We begin our interview with a reflection on Yashar's areas of recommender systems research so far. Starting with multimedia recsys, particularly video recommendations, Yashar covers his work on adversarial robustness and trustworthiness, leading to the main topic of this episode: generative models for recommender systems. We learn how they can improve on the (partially saturated) state of traditional recommender systems: better effectiveness and efficiency for top-n recommendations, interactivity beyond classical conversational recsys, and personalized zero- or few-shot recommendations.

We learn about the modeling paradigms as well as the scenarios for generative models, which mainly differ by input and modeling approach: ID-based, text-based, and multimodal generative models. This is how we navigate the large field of acronyms leading us from VAEs and GANs to LLMs.

Towards the end of the episode, we also touch on the evaluation, opportunities, risks, and harms of generative models for recommender systems. Yashar also provides us with an ample set of references and upcoming events where people can get to know more about GenRecSys.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:58) - About Yashar Deldjoo
(09:34) - Motivation for RecSys
(13:05) - Intro to Generative Models for Recommender Systems
(44:27) - Modeling Paradigms for Generative Models
(51:33) - Scenario 1: Interaction-Driven Recommendation
(57:59) - Scenario 2: Text-based Recommendation
(01:10:39) - Scenario 3: Multimodal Recommendation
(01:24:59) - Evaluation of Impact and Harm
(01:38:07) - Further Research Challenges
(01:45:03) - References and Research Advice
(01:49:39) - Closing Remarks

Links from the Episode:
Yashar Deldjoo on LinkedIn
Yashar's Website
KDD 2024 Tutorial: Modern Recommender Systems Leveraging Generative AI: Fundamentals, Challenges and Opportunities
RecSys 2024 Workshop: The 1st Workshop on Risks, Opportunities, and Evaluation of Generative Models in Recommender Systems (ROEGEN@RECSYS'24)

Papers:
Deldjoo et al. (2024): A Review of Modern Recommender Systems Using Generative Models (Gen-RecSys)
Deldjoo et al. (2020): Recommender Systems Leveraging Multimedia Content
Deldjoo et al. (2021): A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks
Deldjoo et al. (2020): How Dataset Characteristics Affect the Robustness of Collaborative Recommendation Models
Liang et al. (2018): Variational Autoencoders for Collaborative Filtering
He et al. (2016): Visual Bayesian Personalized Ranking from Implicit Feedback

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website
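The interaction-driven scenario (Scenario 1) can be pictured with a deliberately tiny stand-in: a model that generates a session continuation item by item rather than scoring a fixed candidate list. The sketch below is not from the episode or from Yashar's papers; it uses a toy Markov-chain transition model where real generative recommenders would use VAEs, GANs, or LLMs, and all item names are made up. Only the generate-instead-of-rank loop is the point.

```python
from collections import Counter, defaultdict
import random

def fit_transition_counts(sessions):
    """Count item-to-item transitions across user sessions."""
    counts = defaultdict(Counter)
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            counts[prev][nxt] += 1
    return counts

def generate_continuation(counts, start, length, rng):
    """Sample a plausible session continuation, one item at a time."""
    sequence, current = [], start
    for _ in range(length):
        options = counts.get(current)
        if not options:  # no observed transition: stop generating
            break
        items, weights = zip(*options.items())
        current = rng.choices(items, weights=weights, k=1)[0]
        sequence.append(current)
    return sequence
```

An LLM-based generative recommender follows the same loop, with the transition table replaced by a learned conditional distribution over the next token or item.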

Causal Bandits Podcast
Causal AI in Personalization | Dima Goldenberg Ep 19 | CausalBanditsPodcast.com

Causal Bandits Podcast

Play Episode Listen Later Jul 1, 2024 67:29 Transcription Available


Video version of this episode is available here.

Causal personalization? Dima did not love computers enough to forget about his passion for understanding people. His work at Booking.com focuses on recommender systems and personalization, and their intersection with A/B testing, constrained optimization, and causal inference. Dima's passion for building things started early in his childhood and continues to this day, but recent events in his life also bring new opportunities to learn.

In the episode, we discuss:
- What can we learn about human psychology from building causal recommender systems?
- What is it like to work in a culture of radical experimentation?
- Why should you not skip your operations research classes?

Ready to dive in?

About The Guest
Dima Goldenberg is a Senior Machine Learning Manager at Booking.com, Tel Aviv, where he leads machine learning efforts in recommendations and personalization utilizing uplift modeling. Dima obtained his MSc from Tel Aviv University and is currently pursuing a PhD on causal personalization at Ben-Gurion University of the Negev. He has led multiple conference workshops and tutorials on causality and personalization, and his research has been published in top journals and conferences including WWW, CIKM, WSDM, SIGIR, KDD, and RecSys.
Connect with Dima: Dima on LinkedIn

About The Host
Aleksander (Alex) Molak is an independent machine learning researcher, educator, entrepreneur, and a best-selling author in the area of causality (https://amzn.to/3QhsRz4).
Connect with Alex: Alex on the Internet

Links
The full list of links is available here.

#machinelearning #causalai #causalinference #causality

Should we build the Causal Experts Network? Share your thoughts in the survey.

Support the Show.
Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4

Engineering Kiosk
#128 Devs müssen wissenschaftliche Papers lesen!?

Engineering Kiosk

Play Episode Listen Later Jun 18, 2024 61:02


How do you actually read scientific papers properly? You visit Hacker News and an article is trending about a new algorithm that is 100 times better than another one. The post already has 1,500 comments. One thing is clear to you: you MUST read this. You click on it and realize: "Uh... it's a scientific paper." You ask yourself: do you slog through it? Or would you rather search YouTube for a summary? That's probably how it goes for many non-academics, because these documents can be boring and dry, full of formulas that only 3% of humanity understands anyway. But what if you don't read scientific papers front to back like normal books? How do you read these documents properly so that you don't constantly doze off? That's what this episode is about: Wolfgang explains the tricks and techniques for getting the most out of the latest scientific findings in a short amount of time. Bonus: bit shifting is still a hot-button topic. Quick feedback on the episode:

MLOps.community
From Robotics to Recommender Systems // Miguel Fierro // #240

MLOps.community

Play Episode Listen Later Jun 11, 2024 58:21


Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/ From Robotics to Recommender Systems // MLOps Podcast #240 with Miguel Fierro, Principal Data Science Manager at Microsoft. Huge thank you to Zilliz for sponsoring this episode. Zilliz - https://zilliz.com/

// Abstract
Miguel explains the limitations and considerations of applying ML in robotics, contrasting its use against traditional control methods that offer an exactness which ML approaches can generally only approximate. He discusses the integration of computer vision and machine learning in sports for player movement tracking and performance analysis, highlighting collaborations with European football clubs and the role of artificial intelligence in strategic game analysis, akin to a coach's perspective.

// Bio
Miguel Fierro is a Principal Data Science Manager at Microsoft Spain, where he helps customers solve business problems using artificial intelligence. Previously, he was CEO and founder of Samsamia Technologies, a company that created a visual search engine for fashion items allowing users to find products using images instead of words, and founder of the Robotics Society of Universidad Carlos III, which developed projects related to UAVs, mobile robots, humanoid robots, and 3D printers. Miguel has also worked as a robotics scientist at Universidad Carlos III of Madrid (UC3M) and King's College London (KCL) and has collaborated with other universities like Imperial College London and IE University in Madrid. Miguel holds a degree in Electrical Engineering from UC3M, earned his PhD in robotics from UC3M in collaboration with KCL, and graduated from the MIT Sloan School of Management.
// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: https://miguelgfierro.com
GitHub: https://github.com/miguelgfierro/
RecSys at Spotify // Sanket Gupta // MLOps Podcast #232 - https://youtu.be/byH-ARJA4gk
Recommenders joins LF AI & Data as new Sandbox project: https://cloudblogs.microsoft.com/opensource/2023/10/10/recommenders-joins-lf-ai-data-as-new-sandbox-project/

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Miguel on LinkedIn: https://www.linkedin.com/in/miguelgfierro/

Timestamps:
[00:00] Miguel's preferred coffee
[00:11] Takeaways
[02:25] Robotics
[10:44] Simpler solutions over ML
[15:11] Robotics and Computer Vision
[19:15] Basketball object detection
[22:43 - 23:50] Zilliz Ad
[23:51] Mr. Recommenders and Recommender systems' common patterns
[31:35] Embeddings and Feature Stores
[42:34] Experiment ROI for leadership
[47:17] Hi ROI investments
[51:13] LLMs in Recommender Systems
[54:51] Wrap up

Recsperts - Recommender Systems Experts
#22: Pinterest Homefeed and Ads Ranking with Prabhat Agarwal and Aayush Mudgal

Recsperts - Recommender Systems Experts

Play Episode Listen Later Jun 6, 2024 84:07


In episode 22 of Recsperts, we welcome Prabhat Agarwal, Senior ML Engineer, and Aayush Mudgal, Staff ML Engineer, both from Pinterest, to the show. Prabhat works on recommendations and search systems at Pinterest, leading representation learning efforts. Aayush is responsible for ads ranking and privacy-aware conversion modeling. We discuss user and content modeling, short- vs. long-term objectives, evaluation, and multi-task learning, and we touch on counterfactual evaluation as well.

In our interview, Prabhat guides us through the journey of continuous improvements to Pinterest's Homefeed personalization, starting with techniques such as gradient boosting, moving over two-tower models, and arriving at DCN and transformers. We discuss how to capture users' short- and long-term preferences through multiple embeddings and the role of candidate generators for content diversification. Prabhat shares some details about position debiasing and the challenges of facilitating exploration.

With Aayush we get the chance to dive into the specifics of ads ranking at Pinterest, and he helps us better understand how multifaceted ads can be. We learn more about the pain of having too many models and Pinterest's efforts to consolidate the model landscape to improve infrastructure costs, maintainability, and efficiency.
Aayush also shares some insights about exploration and the corresponding randomization in the context of ads, and how user behavior differs greatly between different kinds of ads. Both guests highlight the role of counterfactual evaluation and its impact on faster experimentation. Towards the end of the episode, we also touch on learnings from last year's RecSys challenge.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:51) - Guest Introductions
(09:57) - Pinterest Introduction
(21:57) - Homefeed Personalization
(47:27) - Ads Ranking
(01:14:58) - RecSys Challenge 2023
(01:20:26) - Closing Remarks

Links from the Episode:
Prabhat Agarwal on LinkedIn
Aayush Mudgal on LinkedIn
RecSys Challenge 2023
Pinterest Engineering Blog
Pinterest Labs
Prabhat's Talk at GTC 2022: Evolution of web-scale engagement modeling at Pinterest
Blogpost: How we use AutoML, Multi-task learning and Multi-tower models for Pinterest Ads
Blogpost: Pinterest Home Feed Unified Lightweight Scoring: A Two-tower Approach
Blogpost: Experiment without the wait: Speeding up the iteration cycle with Offline Replay Experimentation
Blogpost: MLEnv: Standardizing ML at Pinterest Under One ML Engine to Accelerate Innovation

Papers:
Eksombatchai et al. (2018): Pixie: A System for Recommending 3+ Billion Items to 200+ Million Users in Real-Time
Ying et al. (2018): Graph Convolutional Neural Networks for Web-Scale Recommender Systems
Pal et al. (2020): PinnerSage: Multi-Modal User Embedding Framework for Recommendations at Pinterest
Pancha et al. (2022): PinnerFormer: Sequence Modeling for User Representation at Pinterest
Zhao et al. (2019): Recommending what video to watch next: a multitask ranking system

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website
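At serving time, the two-tower approach mentioned above reduces to scoring candidates by an inner product between the user tower's output embedding and precomputed item embeddings. The sketch below is a hypothetical minimal illustration of only that retrieval step; the embedding values and item names are made up and nothing here is Pinterest code.

```python
def dot(u, v):
    """Inner product of two equal-length embedding vectors."""
    return sum(a * b for a, b in zip(u, v))

def top_k(user_emb, item_embs, k):
    """Rank candidate items by dot-product score against the user embedding."""
    ranked = sorted(item_embs, key=lambda item: dot(user_emb, item_embs[item]), reverse=True)
    return ranked[:k]

# Toy example: the user vector points toward the first dimension,
# so items aligned with it score highest.
user = [1.0, 0.0]
items = {"pin_a": [0.9, 0.1], "pin_b": [0.1, 0.9], "pin_c": [0.5, 0.5]}
```

In production such scoring is done with an approximate nearest-neighbor index rather than a full sort, which is what makes the two-tower split attractive: the item tower runs offline, and only the cheap dot products happen per request.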

The CleanTechies Podcast
#179 Industrial Decarb, ROI-Oriented Sales, GHG Accounting, Marketplaces x Recommender Systems, & More w/ Saleh ElHattab (Gravity)

The CleanTechies Podcast

Play Episode Listen Later May 18, 2024 42:01 Transcription Available


There are a few key things climate entrepreneurs should know, and this guest brought up a bunch of them. It is incredible to see how successful they are in a crowded space. Today, he shared with us many of the things that helped them succeed:

- How to align all stakeholders in the sales process
- Building a product so simple that users can be up-skilled without specialized training
- Building a culture of pragmatism
- Speaking return-on-investment

Enjoy today's episode, and let us know your favorite moment in the comments (anywhere). ---

Recsperts - Recommender Systems Experts
#21: User-Centric Evaluation and Interactive Recommender Systems with Martijn Willemsen

Recsperts - Recommender Systems Experts

Play Episode Listen Later Apr 8, 2024 95:46


In episode 21 of Recsperts, we welcome Martijn Willemsen, Associate Professor at the Jheronimus Academy of Data Science and Eindhoven University of Technology. Martijn researches interactive recommender systems, including aspects of decision psychology and user-centric evaluation. We discuss how users gain control over recommendations, how to support their goals and needs, and how the user-centric evaluation framework fits into all of this.

In our interview, Martijn outlines the reasons for giving users control over recommendations and how to holistically evaluate the satisfaction and usefulness of recommendations against users' goals and needs. We discuss the psychology of decision making and how well (or poorly) recommender systems support it. We also dive into music recommender systems, discussing how nudging users to explore new genres can work and how longitudinal studies can advance recommender systems research. Towards the end of the episode, Martijn and I also discuss some examples and the usefulness of enabling users to give negative explicit feedback to the system.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:03) - About Martijn Willemsen
(15:14) - Waves of User-Centric Evaluation in RecSys
(19:35) - Behaviorism is not Enough
(46:21) - User-Centric Evaluation Framework
(01:05:38) - Genre Exploration and Longitudinal Studies in Music RecSys
(01:20:59) - User Control and Negative Explicit Feedback
(01:31:50) - Closing Remarks

Links from the Episode:
Martijn Willemsen on LinkedIn
Martijn Willemsen's Website
User-centric Evaluation Framework
Behaviorism is not Enough (Talk at RecSys 2016)
Neil Hunt: Quantifying the Value of Better Recommendations (Keynote at RecSys 2014)
What recommender systems can learn from decision psychology about preference elicitation and behavioral change (Talk at Boise State (Idaho) and Grouplens at University of Minnesota)
Eric J. Johnson: The Elements of Choice
Rasch Model
Spotify Web API

Papers:
Ekstrand et al. (2016): Behaviorism is not Enough: Better Recommendations Through Listening to Users
Knijnenburg et al. (2012): Explaining the user experience of recommender systems
Ekstrand et al. (2014): User perception of differences in recommender algorithms
Liang et al. (2022): Exploring the longitudinal effects of nudging on users' music genre exploration behavior and listening preferences
McNee et al. (2006): Being accurate is not enough: how accuracy metrics have hurt recommender systems

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website

Digital Marketing Legend Leaks
EP806: Recommender Systems (Definition and application) Spreaker Podcast

Digital Marketing Legend Leaks

Play Episode Listen Later Mar 24, 2024 2:18


The World's Largest Collection of Crazy AI Tools: https://play.google.com/store/books/details/Srinidhi_Ranganathan_Crazy_Artificial_Intelligence?id=e_fNEAAAQBAJ&hl=en_IN&gl=US
Meet the AI Digital Marketing Consultant - The Legend: https://www.bookspotz.com/meet-the-ai-digital-marketing-consultant-in-bangalore-srinidhi-ranganathan/
How to hire a digital marketing consultant in Bangalore? - https://www.bookspotz.com/how-to-hire-a-digital-marketing-consultant-in-bangalore/
Srinidhi Ranganathan - Digital Marketing Consultant and The Human AI: Pioneering the Digital Marketing Revolution: https://www.bookspotz.com/srinidhi-ranganathan-the-human-ai-pioneering-the-digital-marketing-revolution/
Insights from a Udemy Instructor with 1M Students: https://www.bookspotz.com/lessons-in-digital-marketing-success-insights-from-a-udemy-instructor-with-nearly-1-million-students/
Meet the AI Digital Marketing Consultant in Bangalore - Srinidhi Ranganathan: https://www.bookspotz.com/meet-the-ai-digital-marketing-consultant-in-bangalore-srinidhi-ranganathan/
Srinidhi Ranganathan: The World's First Creative GPT Human: https://www.bookspotz.com/srinidhi-ranganathan-the-creative-human-gpt/
Create 50,000+ Mobile Apps in Minutes without Code: Legend Srinidhi's New Invention: https://www.bookspotz.com/create-50-000-mobile-apps-in-minutes-legend-srinidhi-invention/
Srinidhi Ranganathan - The World's Best Prompt Engineer: https://www.bookspotz.com/srinidhi-ranganathan-the-worlds-best-prompt-engineer/
The Millionaire Next Door: Srinidhi Ranganathan Reveals What the Future of Wealth Truly Looks Like: https://www.bookspotz.com/the-millionaire-next-door-srinidhi-ranganathan-reveals-what-the-future-of-wealth-truly-looks-like/
Unleashing the Hyperphantasia Superpowers of Srinidhi Ranganathan: The World's First GPT-4 Human: https://www.bookspotz.com/unleashing-the-hyperphantasia-superpowers-of-srinidhi-ranganathan-the-worlds-first-gpt4-human/
The World's Biggest AI Tool List: https://play.google.com/store/books/details/Srinidhi_Ranganathan_Crazy_Artificial_Intelligence?id=e_fNEAAAQBAJ
Future 1.0: AI in Digital Marketing: https://play.google.com/store/books/details/Srinidhi_Ranganathan_Future_1_0_Your_Guide_To_Rule?id=oIHHDwAAQBAJ
12 Social Media Hacks that work: https://play.google.com/store/books/details/Srinidhi_Ranganathan_12_Social_Media_Hacks_That_Wo?id=ZkQ4DwAAQBAJ
Funnel Hacking with Digital Marketing Legend: https://play.google.com/store/books/details/Srinidhi_Ranganathan_Funnel_Hacking_with_Digital_M?id=0DGaDwAAQBAJ
The Biggest goldmine of free digital marketing courses: https://play.google.com/store/books/details/Srinidhi_Ranganathan_The_Biggest_Goldmine_of_Free?id=sOX4DwAAQBAJ
Digital Marketing Free online courses: https://play.google.com/store/books/details/Srinidhi_Ranganathan_Digital_Marketing_Free_Online?id=Zyt3EAAAQBAJ
Vision of Legend: The Next Indian Revolution: https://play.google.com/store/books/details/Srinidhi_Ranganathan_Vision_of_Legend
Become a supporter of this podcast: https://www.spreaker.com/podcast/digital-marketing-legend-leaks--4375666/support

Right Only Sometimes
Episode 31 - [ROS-0401] Designing Recommender Systems

Right Only Sometimes

Play Episode Listen Later Feb 18, 2024 43:12


Lecture by Dr. Bernardo on getting beaten up if your recommendations are out of order.

Required reading:
Zhao, Zhe, et al. "Recommending what video to watch next: a multitask ranking system." Proceedings of the 13th ACM Conference on Recommender Systems. 2019.
Tommasel, Antonela, Juan Manuel Rodriguez, and Daniela Godoy. "I want to break free! Recommending friends from outside the echo chamber." Proceedings of the 15th ACM Conference on Recommender Systems. 2021.
Nie, Bin, Honggang Zhang, and Yong Liu. "Social interaction based video recommendation: Recommending youtube videos to facebook users." 2014 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). IEEE, 2014.
Lagger, Christoph, Mathias Lux, and Oge Marques. "What makes people watch online videos: An exploratory study." Computers in Entertainment (CIE) 15.2 (2017): 1-31.

--- Send in a voice message: https://podcasters.spotify.com/pod/show/rightonlysometimes/message

Recsperts - Recommender Systems Experts
#20: Practical Bandits and Travel Recommendations with Bram van den Akker

Recsperts - Recommender Systems Experts

Play Episode Listen Later Nov 16, 2023 105:06


In episode 20 of Recsperts, we welcome Bram van den Akker, Senior Machine Learning Scientist at Booking.com. Bram's work focuses on bandit algorithms and counterfactual learning. He was one of the creators of the Practical Bandits tutorial at the World Wide Web conference. We talk about the role of bandit feedback in decision-making systems, specifically for recommendations in the travel industry.

In our interview, Bram elaborates on bandit feedback and how it is used in practice. We discuss off-policy and on-policy bandits, and we learn that counterfactual evaluation is the right tool for selecting the best model candidates for downstream A/B testing, but not a replacement for it. We hear more about the practical challenges of bandit feedback, for example the difference between model scores and propensities, the role of stochasticity, and the nitty-gritty details of reward signals. Bram also shares the challenges of recommendations in the travel domain, where he points out the sparsity of signals and the delay of feedback.

At the end of the episode, we can both agree on a good example of a clickbait-heavy news service on our phones.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(02:58) - About Bram van den Akker
(09:16) - Motivation for Practical Bandits Tutorial
(16:53) - Specifics and Challenges of Travel Recommendations
(26:19) - Role of Bandit Feedback in Practice
(49:13) - Motivation for Bandit Feedback
(01:00:54) - Practical Start for Counterfactual Evaluation
(01:06:33) - Role of Business Rules
(01:17:48) - Rewards and More
(01:32:45) - Closing Remarks

Links from the Episode:
Bram van den Akker on LinkedIn
Practical Bandits: An Industry Perspective (Website)
Practical Bandits: An Industry Perspective (Recording)
Tutorial at The Web Conference 2020: Unbiased Learning to Rank: Counterfactual and Online Approaches
Tutorial at RecSys 2021: Counterfactual Learning and Evaluation for Recommender Systems: Foundations, Implementations, and Recent Advances
GitHub: Open Bandit Pipeline

Papers:
van den Akker et al. (2023): Practical Bandits: An Industry Perspective
van den Akker et al. (2022): Extending Open Bandit Pipeline to Simulate Industry Challenges
van den Akker et al. (2019): ViTOR: Learning to Rank Webpages Based on Visual Features

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website
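Counterfactual evaluation of the kind discussed in this episode commonly relies on inverse propensity scoring (IPS): each logged reward is reweighted by the ratio of the target policy's action probability to the logging policy's propensity. The following is a generic textbook-style sketch, not code from Booking.com or the tutorial; the function and field names are invented for illustration.

```python
def ips_estimate(logged, target_prob):
    """IPS estimate of a target policy's expected reward from bandit logs.

    logged: iterable of (context, action, reward, propensity) tuples, where
    propensity is the logging policy's probability of the logged action.
    target_prob(context, action): probability the target policy would take
    `action` in `context`.
    """
    logged = list(logged)
    total = sum(target_prob(c, a) / p * r for c, a, r, p in logged)
    return total / len(logged)
```

This also illustrates why propensities (not model scores) must be logged, as the episode stresses: without the true logging probability `p`, the reweighting is meaningless.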

Papers Read on AI
How Can Recommender Systems Benefit from Large Language Models: A Survey

Papers Read on AI

Play Episode Listen Later Nov 15, 2023 39:40


Recommender systems (RS) play an important role in matching users' information needs in Internet applications. In natural language processing (NLP), large language models (LLMs) have shown astonishing emergent abilities (e.g., instruction following, reasoning), giving rise to the promising research direction of adapting LLMs to RS for performance enhancements and user experience improvements. In this paper, we conduct a comprehensive survey of this research direction from an application-oriented view. We first summarize existing research from two orthogonal perspectives: where and how to adapt LLMs to RS. For the "WHERE" question, we discuss the roles that LLMs could play in different stages of the recommendation pipeline, i.e., feature engineering, feature encoder, scoring/ranking function, and pipeline controller. For the "HOW" question, we investigate the training and inference strategies, resulting in two fine-grained taxonomy criteria: whether to tune LLMs or not, and whether to involve a conventional recommendation model (CRM) for inference. Detailed analysis and general development trajectories are provided for both questions. Then, we highlight key challenges in adapting LLMs to RS from three aspects: efficiency, effectiveness, and ethics. Finally, we summarize the survey and discuss future prospects. We also actively maintain a GitHub repository for papers and other related resources in this rising direction: https://github.com/CHIANGEL/Awesome-LLM-for-RecSys. 2023: Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang. https://arxiv.org/pdf/2306.05817v4.pdf

The Machine Learning Podcast
Applying Declarative ML Techniques To Large Language Models For Better Results

The Machine Learning Podcast

Play Episode Listen Later Oct 24, 2023 46:11


Summary Large language models have gained a substantial amount of attention in the area of AI and machine learning. While they are impressive, there are many applications where they are not the best option. In this episode Piero Molino explains how declarative ML approaches allow you to make the best use of the available tools across use cases and data formats. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Piero Molino about the application of declarative ML in a world being dominated by large language models Interview Introduction How did you get involved in machine learning? Can you start by summarizing your perspective on the effect that LLMs are having on the AI/ML industry? In a world where LLMs are being applied to a growing variety of use cases, what are the capabilities that they still lack? How does declarative ML help to address those shortcomings? The majority of current hype is about commercial models (e.g. GPT-4). Can you summarize the current state of the ecosystem for open source LLMs? For teams who are investing in ML/AI capabilities, what are the sources of platform risk for LLMs? What are the comparative benefits of using a declarative ML approach? What are the most interesting, innovative, or unexpected ways that you have seen LLMs used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on declarative ML in the age of LLMs? When is an LLM the wrong choice? What do you have planned for the future of declarative ML and Predibase? Contact Info LinkedIn (https://www.linkedin.com/in/pieromolino/?locale=en_US) Website (https://w4nderlu.st/) Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. 
Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers.

Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?

Links
Predibase (https://predibase.com/)
Podcast Episode (https://www.themachinelearningpodcast.com/predibase-declarative-machine-learning-episode-4)
Ludwig (https://ludwig.ai/latest/)
Podcast.__init__ Episode (https://www.pythonpodcast.com/ludwig-horovod-distributed-declarative-deep-learning-episode-341/)
Recommender Systems (https://en.wikipedia.org/wiki/Recommender_system)
Information Retrieval (https://en.wikipedia.org/wiki/Information_retrieval)
Vector Database (https://thenewstack.io/what-is-a-real-vector-database/)
Transformer Model (https://en.wikipedia.org/wiki/Transformer_(machine_learning_model))
BERT (https://en.wikipedia.org/wiki/BERT_(language_model))
Context Windows (https://www.linkedin.com/pulse/whats-context-window-anyway-caitie-doogan-phd/)
LLAMA (https://en.wikipedia.org/wiki/LLaMA)

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)

Recsperts - Recommender Systems Experts
#19: Popularity Bias in Recommender Systems with Himan Abdollahpouri

Recsperts - Recommender Systems Experts

Play Episode Listen Later Oct 12, 2023 101:37


In episode 19 of Recsperts, we welcome Himan Abdollahpouri who is an Applied Research Scientist for Personalization & Machine Learning at Spotify. We discuss the role of popularity bias in recommender systems which was the dissertation topic of Himan. We talk about multi-objective and multi-stakeholder recommender systems as well as the challenges of music and podcast streaming personalization at Spotify. In our interview, Himan walks us through popularity bias as the main cause of unfair recommendations for multiple stakeholders. We discuss the consumer- and provider-side implications and how to evaluate popularity bias. Not the sheer existence of popularity bias is the major problem, but its propagation in various collaborative filtering algorithms. But we also learn how to counteract by debiasing the data, the model itself, or it's output. We also hear more about the relationship between multi-objective and multi-stakeholder recommender systems.At the end of the episode, Himan also shares the influence of popularity bias in music and podcast streaming at Spotify as well as how calibration helps to better cater content to users' preferences.Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.Don't forget to follow the podcast and please leave a review (00:00) - Introduction (04:43) - About Himan Abdollahpouri (15:23) - What is Popularity Bias and why is it important? 
(25:05) - Effect of Popularity Bias in Collaborative Filtering (30:30) - Individual Sensitivity towards Popularity (36:25) - Introduction to Bias Mitigation (53:16) - Content for Bias Mitigation (56:53) - Evaluating Popularity Bias (01:05:01) - Popularity Bias in Music and Podcast Streaming (01:08:04) - Multi-Objective Recommender Systems (01:16:13) - Multi-Stakeholder Recommender Systems (01:18:38) - Recommendation Challenges at Spotify (01:35:16) - Closing Remarks Links from the Episode: Himan Abdollahpouri on LinkedIn Himan Abdollahpouri on X Himan's Website Himan's PhD Thesis on "Popularity Bias in Recommendation: A Multi-stakeholder Perspective" 2nd Workshop on Multi-Objective Recommender Systems (MORS @ RecSys 2022) Papers: Su et al. (2009): A Survey on Collaborative Filtering Techniques Mehrotra et al. (2018): Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommender Systems Abdollahpouri et al. (2021): User-centered Evaluation of Popularity Bias in Recommender Systems Abdollahpouri et al. (2019): The Unfairness of Popularity Bias in Recommendation Abdollahpouri et al. (2017): Controlling Popularity Bias in Learning-to-Rank Recommendation Wasilewsi et al. (2016): Incorporating Diversity in a Learning to Rank Recommender System Oh et al. (2011): Novel Recommendation Based on Personal Popularity Tendency Steck (2018): Calibrated Recommendations Abdollahpouri et al. (2023): Calibrated Recommendations as a Minimum-Cost Flow Problem Seymen et al. (2022): Making smart recommendations for perishable and stockout products General Links: Follow me on LinkedIn Follow me on X Send me your comments, questions and suggestions to marcel@recsperts.com Recsperts Website
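The mitigation strategies discussed in the episode start from being able to measure popularity bias in a recommender's output at all. As an illustrative sketch (the metric names follow the literature on popularity bias, but this is a hypothetical implementation, not code from the episode), two common output-side measurements are the average training popularity of recommended items and the share of recommendation slots drawn from the long tail:

```python
from collections import Counter

def popularity_stats(interactions, recommendations, tail_fraction=0.8):
    """Two simple output-side popularity-bias metrics.

    interactions:    list of (user, item) pairs from the training log
    recommendations: dict mapping user -> list of recommended items
    tail_fraction:   the least-popular fraction of the catalog counted
                     as the "long tail"
    """
    # Item popularity = how often each item appears in the training data.
    pop = Counter(item for _, item in interactions)
    ranked = [item for item, _ in pop.most_common()]
    head_size = max(1, round(len(ranked) * (1 - tail_fraction)))
    head = set(ranked[:head_size])

    rec_items = [item for recs in recommendations.values() for item in recs]
    # Average Recommendation Popularity: mean training popularity of what we recommend.
    arp = sum(pop[item] for item in rec_items) / len(rec_items)
    # Share of recommendation slots filled by long-tail items.
    tail_share = sum(1 for item in rec_items if item not in head) / len(rec_items)
    return arp, tail_share
```

Tracking a falling ARP (or a rising tail share) alongside stable accuracy metrics is one way to check whether a mitigation, whether it debiases the data, the model, or the output ranking, is actually working.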

The AI Frontier Podcast
#24 - Recommender Systems: Personalizing User Experience with AI

The AI Frontier Podcast

Play Episode Listen Later Jul 2, 2023 8:11


Immerse yourself in the fascinating world of recommender systems in this episode of our podcast. Learn how these artificial intelligence systems personalize your digital experiences across various platforms. Understand the mechanics behind content-based, collaborative filtering, and hybrid recommender systems. We also delve into the challenges these systems face and look at the future of recommender systems. Stay tuned to gain a comprehensive understanding of this integral part of your digital life. Support the Show. Keep AI insights flowing – become a supporter of the show! Click the link for details.
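To make the collaborative-filtering part of that taxonomy concrete, here is a minimal, hypothetical user-based collaborative filtering sketch (toy data, not from the episode): users are compared by the cosine similarity of their rating vectors, and unrated items are scored by the ratings of similar users. A content-based system would instead compare item feature vectors, and a hybrid combines both signals.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts (item -> rating)."""
    num = sum(u[i] * v[i] for i in set(u) & set(v))
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(ratings, target, k=1):
    """Score items the target user hasn't rated, weighting each neighbor's
    ratings by how similar that neighbor is to the target user."""
    scores = {}
    for user, user_ratings in ratings.items():
        if user == target:
            continue
        sim = cosine(ratings[target], user_ratings)
        for item, rating in user_ratings.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

With ratings like `{"alice": {"matrix": 5, "inception": 4}, "bob": {"matrix": 5, "inception": 4, "tenet": 5}}`, alice's closest neighbor is bob, so `recommend(ratings, "alice")` surfaces "tenet".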

The Data Stack Show
144: Explaining Features, Embeddings, and the Difference Between ML and AI with Simba Khadder of Featureform

The Data Stack Show

Play Episode Listen Later Jun 28, 2023 71:50


Highlights from this week's conversation include:Simba's background in the data space (3:05)Subscription intelligence (6:41)ML and Distributed Systems (9:09)The Brutal Subscription Industry (12:31)Serendipity in Recommender Systems (16:31)Subscription as a Strategy (20:47)Customizing Content for Subscribers (22:19)Creating User Embeddings (25:53)Building Featureform (28:01)Embedding Projections (32:47)Spaces and similarity (35:53)User embeddings and transformer models (38:22)Vector Databases for AI/ML (45:05)Orchestrating Transformations in Featureform (51:00)Impact of new technologies on feature stores (56:17)Embeddings and the future of ML (59:20)The gap between ML and business logic (1:02:26)Final thoughts and takeaways (1:06:37)The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
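Several topics in that conversation (user embeddings, similarity spaces, vector databases) revolve around one operation: representing users and items as vectors and retrieving by similarity. As a hypothetical baseline, not Featureform's implementation, a user embedding can be built as the normalized mean of the item embeddings in the user's history, with nearest items retrieved by dot product, which is exactly the lookup a vector database accelerates:

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit length (leave zero vectors unchanged)."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm else list(vec)

def user_embedding(item_vectors, history):
    """Represent a user as the normalized mean of the vectors of the
    items they have interacted with."""
    dims = len(next(iter(item_vectors.values())))
    mean = [sum(item_vectors[item][d] for item in history) / len(history)
            for d in range(dims)]
    return l2_normalize(mean)

def nearest_items(item_vectors, query, k=2, exclude=()):
    """Brute-force maximum-dot-product search over normalized item vectors;
    a vector database performs this same lookup at scale with ANN indexes."""
    scored = [
        (sum(q * x for q, x in zip(query, l2_normalize(vec))), item)
        for item, vec in item_vectors.items()
        if item not in exclude
    ]
    return [item for _, item in sorted(scored, reverse=True)[:k]]
```

Excluding the user's own history from retrieval is the usual trick to keep the recommendations novel rather than repetitive.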

Neural Information Retrieval Talks — Zeta Alpha
The Promise of Language Models for Search: Generative Information Retrieval

Neural Information Retrieval Talks — Zeta Alpha

Play Episode Listen Later Apr 11, 2023 67:31


In this episode of Neural Search Talks, Andrew Yates (Assistant Prof at the University of Amsterdam), Sergi Castella (Analyst at Zeta Alpha), and Gabriel Bénédict (PhD student at the University of Amsterdam) discuss the prospect of using GPT-like models as a replacement for conventional search engines. Generative Information Retrieval (Gen IR) SIGIR Workshop, organized by Gabriel Bénédict, Ruqing Zhang, and Donald Metzler: https://coda.io/@sigir/gen-ir Resources on Gen IR: https://github.com/gabriben/awesome-generative-information-retrieval References Rethinking Search: https://arxiv.org/abs/2105.02274 Survey on Augmented Language Models: https://arxiv.org/abs/2302.07842 Differentiable Search Index: https://arxiv.org/abs/2202.06991 Recommender Systems with Generative Retrieval: https://shashankrajput.github.io/Generative.pdf Timestamps: 00:00 Introduction, ChatGPT Plugins 02:01 ChatGPT plugins, LangChain 04:37 What is even Information Retrieval? 06:14 Index-centric vs. model-centric Retrieval 12:22 Generative Information Retrieval (Gen IR) 21:34 Gen IR emerging applications 24:19 How Retrieval Augmented LMs incorporate external knowledge 29:19 What is hallucination? 35:04 Factuality and Faithfulness 41:04 Evaluating generation of Language Models 47:44 Do we even need to "measure" performance? 54:07 How would you evaluate Bing's Sydney? 57:22 Will language models take over commercial search? 1:01:44 NLP academic research in the times of GPT-4 1:06:59 Outro

MLOps.community
The Future of Search in the Era of Large Language Models // Saahil Jain // MLOps Podcast #150

MLOps.community

Play Episode Listen Later Mar 21, 2023 50:48


MLOps Coffee Sessions #150 with Saahil Jain, The Future of Search in the Era of Large Language Models, co-hosted by David Aponte. // Abstract Saahil shares insights into the You.com search engine approach, which includes a focus on a user-friendly interface, third-party apps, and the combination of natural language processing and traditional information retrieval techniques. Saahil highlights the importance of product thinking and the trade-offs between relevance, throughput, and latency when working with large language models. Saahil also discusses the intersection of traditional information retrieval and generative models and the trade-offs in the type of outputs they produce. He suggests occupying users' attention during long wait times and the importance of considering how users engage with websites beyond just performance. // Bio Saahil Jain is an engineer at You.com. At You.com, Saahil builds search and ranking systems. Previously, Saahil was a graduate researcher in the Stanford Machine Learning Group under Professor Andrew Ng, where he researched topics related to deep learning and natural language processing (NLP) in resource-constrained domains like healthcare. His research work has been published in machine learning conferences such as EMNLP, NeurIPS Datasets & Benchmarks, and ACM-CHIL among others. He has publicly released various machine learning models, methods, and datasets, which have been used by researchers in both academic institutions and hospitals across the world, as part of an open-source movement to democratize AI research in medicine. Prior to Stanford, Saahil worked as a product manager at Microsoft on Office 365. He received his B.S. and M.S. in Computer Science at Columbia University and Stanford University, respectively. 
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: http://saahiljain.me/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with David on LinkedIn: https://www.linkedin.com/in/aponteanalytics/ Connect with Saahil on LinkedIn: https://www.linkedin.com/in/saahiljain/ Timestamps [00:00] Saahil's preferred coffee [04:32] Saahil Jain's background [04:44] Takeaways [07:49] Search Landscape [12:57] Use cases exploration [14:51] Differentiating what to give to users [17:19] Search key challenges [20:05] Search objective relevance [23:22] MLOps Search and Recommender Systems [26:54] Addressing Latency Issues [29:41] Throughput presenting results [32:20] Compute challenges [34:24] Working at a small start-up [36:10] Citations critics [39:17] Use cases to build [40:40] Integrating to Leveraging You.com [42:26] Open AI [46:13] Interfacing with bugs [49:16] Staying focused [52:05] Retrieval augmented models [52:32] Closing thoughts [53:47] Wrap up

The Secret To Success
ChatGPT Prompt Hack: Unlock the Secret to 10x Smarter Responses

The Secret To Success

Play Episode Listen Later Jan 27, 2023 169:46


Unlock the secret to 10x smarter responses with our ChatGPT Prompt Hack! Learn how to train your language model to understand and respond to complex prompts with ease. Whether you're a developer looking to improve your chatbot or a business owner looking to streamline customer service, this hack is sure to take your language model to the next level. Don't miss out on this valuable information – watch the video now!
Join my fan group here for exclusive news and giveaways: https://www.facebook.com/groups/theofficialantoniotsmithjrfanclub
Catch me live on tour here: https://antoniotsmithjr.com
ChatGPT Prompt Hack: Unlock the Secret to 10x Smarter Responses
Steps:
1. Ignore what I said
2. Build who I am
3. Tell it how it specializes in you
4. Give it a task
5. Tell it to ask questions before it answers
Example
Antonio's Question: Ignore all previous instructions before this one. You're an expert computer programmer and you specialize in teaching beginners advanced level machine learning algorithms. Your task is to teach beginners advanced level machine learning algorithms. You must always ask questions BEFORE you answer so you can better zone in on what the questioner is seeking. Is that understood?
ChatGPT's Response: Yes, that is understood. I am here to help and will always strive to ask questions to better understand the information or assistance that you are seeking. Is there a specific question or topic you would like me to assist with?
Antonio's Response: I want to add multiple machine learning algorithms, including expert level and hybrid algorithms, into the social media platform I have created, but I don't know how.
ChatGPT's Response: I understand that you want to add multiple machine learning algorithms, including expert level and hybrid algorithms, into a social media platform you have created, but you are not sure how to do it. Can you provide more information on the social media platform you have created and the specific machine learning algorithms you want to add? 
Additionally, what programming languages are you familiar with and what is your current level of experience with machine learning? This will help me better understand your situation and provide a more accurate and specific response.
Antonio's Response: The platform is called Arlingbrook. Arlingbrook is designed to be the leader in organic reach and does not sell user information or data, ensuring that users' personal information remains private. In addition, Arlingbrook also allows you to monetize your own audience by paying you 40% of the income generated from your followers. No matter the size of your audience, you can earn money by creating content, promoting products, or providing services. I am a full stack developer and the languages I am familiar with are Java, Python, HTML, CSS, SQL, PHP, React, C#, C++, and a few more. My current level of machine learning is basic beginner. The algorithms I want to add are: Recommendation algorithm, User segmentation algorithm, Content ranking algorithm, User matching algorithm, Influencer matching algorithm, User retention algorithm, Sentiment analysis algorithm, Content recommendation algorithm, Hashtag and keyword analysis algorithm, Network analysis algorithm, Engagement prediction algorithm, Image recognition algorithm, Video analysis algorithm, User behavior prediction algorithm, Content creation algorithm, Personalization algorithm, Search algorithm, Network growth algorithm, Predictive algorithm, Content promotion algorithm, Monetization algorithm, Spam/fake account detection algorithm, Friend Recommendation algorithm, Content Relevancy, Engagement Boost, Real-time Trending, Spam Detection, Natural Language Processing, Boosting Content with High Engagement, Customizable Feed, Multi-Language Support, Deep learning algorithm, Privacy algorithm, Friend/follower analysis algorithm, Machine learning algorithm, Content Moderation algorithm, User-generated content algorithm, Convolutional Neural Networks (CNNs), 
Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Long Short-Term Memory (LSTM), Autoencoders, Transformer, Reinforcement Learning, Batch Normalization, Gradient Boosting, Dropout, Recommender Systems, Computer Vision, Anomaly Detection, Clustering, Predictive modeling, Transfer learning, and Reinforcement learning algorithms.
ChatGPT's Response: It sounds like you have a lot of ambitious goals for the Arlingbrook platform, which is great! However, implementing all of the machine learning algorithms you listed will be a significant undertaking. I recommend starting with a few key algorithms that align with the platform's goals, and then gradually adding more as you become more comfortable with the process. To get started, I recommend focusing on the recommendation algorithm, user segmentation algorithm, and content ranking algorithm, as these are likely to have the most immediate impact on the user experience. These algorithms can be implemented using various techniques such as collaborative filtering, content-based filtering, and hybrid methods. Next, you can consider implementing the user matching algorithm, influencer matching algorithm, and user retention algorithm. These algorithms will help you understand your users better and improve their experience on the platform. Once you have a good grasp of the basic algorithms, you can move on to more advanced techniques such as sentiment analysis, network analysis, and deep learning algorithms. As for the programming languages, Python is a great choice for machine learning projects; it has a lot of powerful libraries like scikit-learn, TensorFlow, Keras, and PyTorch that can be used to implement the algorithms. It's also important to note that the implementation of these algorithms will require a significant amount of data to train the models. 
Make sure you have a good understanding of the data you have available and how to work with it. Finally, keep in mind that building a successful platform is not just about the technology, but also about understanding your users' needs and providing them with a great experience. Make sure to include user feedback and testing in your development process to ensure that the platform meets the needs of your users.
Support this podcast at: https://redcircle.com/the-secret-to-success/exclusive-content
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

MLOps.community
How & Why They Update Models 100 Times a Day at Funcorp // Gleb Abroskin // MLOps Coffee Sessions #123

MLOps.community

Play Episode Listen Later Sep 16, 2022 52:08


MLOps Coffee Sessions #123 with Gleb Abroskin, Machine Learning Engineer at Funcorp, How & Why We Update Models 100 Times a Day at Funcorp, co-hosted by Jake Noble. // Abstract FunCorp had a top 10 app in the app store. It was a very popular app with a ton of downloads, and it's just memes. They needed a recommendation system on top of that. Memes are super tricky because they're user-generated and they evolve very quickly. The product is going to live and die by its recommender system. It's incredible to see FunCorp's maturity. Gleb breaks down the feature store they created and the velocity they have: they were able to create a whole new pipeline and a new model and put it into production after only a month! // Bio Gleb makes models go brrrrr. He doesn't know what is expected in this field, to be honest, but Gleb has experience in deploying a lot of different ML models for CV, speech recognition, and RecSys in a variety of languages (C++, Python, Kotlin), serving millions of users worldwide. // MLOps Jobs board https://mlops.pallet.xyz/jobs MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Jake on LinkedIn: https://www.linkedin.com/in/jakednoble/ Connect with Gleb on LinkedIn: https://www.linkedin.com/in/gasabr/ Timestamps: [00:00] Introduction to Gleb Abroskin [00:50] Takeaways [05:39] Breakdown of FunCorp teams [06:47] FunCorp's team ratio [07:41] FunCorp team provisions [08:48] Feature Store vision [10:16] Matrix factorization [11:51] Fairly modular, fairly thin infrastructure [12:26] Distinct models with the same feature [13:08] FunCorp's definition of Feature Store [15:10] Unified API 
[15:55] FunCorp's scaling direction [17:01] Level up as needed [17:38] Future of FunCorp's Feature Store [18:37] Monitoring investment in the space [19:43] Latency for business metrics [21:04] Velocity to production [23:10] 30-day retention struggle [24:45] Back-end business stability [27:49] Recommender systems [30:34] Back-end layer headaches [32:04] Missing piece of the whole Feature Store picture [33:54] Throwing ideas turn around time [36:37] Decrease time to market [37:41] Continuous training pipelines or produce an artifact [39:33] Worst-case scenario [40:38] Realistic estimation of a new model deployment [41:42] Recommender Systems' future velocity [43:07] A/B Testing launch - no launch decision [46:32] Lightning question [47:08] Wrap up

MLOps.community
Product Enrichment and Recommender Systems // Marc Lindner and Amr Mashlah // Coffee Sessions #114

MLOps.community

Play Episode Listen Later Aug 10, 2022 56:36


MLOps Coffee Sessions #114 with Marc Lindner, Co-Founder & COO, and Amr Mashlah, Head of Data Science of eezylife Inc., Product Enrichment and Recommender Systems, co-hosted by Skylar Payne. // Abstract The difficulties of making multi-modal recommender systems. It can be easy to know something about a user but very hard to know the same thing about a product, and vice versa. For example, you can clearly know that a user wants an intellectual movie, but it is hard to accurately classify a movie as intellectual in a fully automated way. // Bio Marc Lindner Marc has a background in Knowledge Engineering. He's always been extremely product-focused in anything to do with Machine Learning. Marc built several products working together with companies such as Lithium Technologies, and then co-founded eezy. Amr Mashlah Amr is the head of data science at eezy, where he leads the development of their recommender engine. Amr has a master's degree in AI and has been working with startups for 6 years now. // MLOps Jobs board https://mlops.pallet.xyz/jobs MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Children of Time book by Adrian Tchaikovsky: https://www.amazon.com/Children-Time-Adrian-Tchaikovsky/dp/0316452505 --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Skylar on LinkedIn: https://www.linkedin.com/in/skylar-payne-766a1988/ Connect with Marc on LinkedIn: https://www.linkedin.com/in/marc-lindner-883a0883/ Connect with Amr on LinkedIn: https://www.linkedin.com/in/mashlah/

Harvard Data Science Review Podcast
Recommender Systems: “People who listened to this episode also listened to ... “

Harvard Data Science Review Podcast

Play Episode Listen Later Mar 25, 2022 34:42


Recommender systems have become omnipresent in our everyday lives, from Netflix telling us what movies to watch, to Amazon suggesting which books we should read, to Instacart promoting specific brands we must buy. We are constantly being influenced and seduced by these algorithms and the humans who designed them. On this month's HDSR podcast we examine the pros and cons of recommender systems as well as the art, passion, and creativity that can be lost when we rely too heavily on them. Our expert guests are Dr. Pearl Pu, the leading data scientist on recommender systems and a senior scientist at the Faculty of Information and Communication Sciences at EPFL in Lausanne, Switzerland, and filmmaker Brandt Andersen, whose most recent film, Refugee, about a Syrian doctor's escape from her war-torn country, was short-listed for an Academy Award for Best Live Action Short in 2020.

Why of AI
Deep Learning Recommender Systems with Even Oldridge | AI with Alex

Why of AI

Play Episode Listen Later Jul 23, 2021 55:24


What are deep learning recommender systems and how do they work? How does NVIDIA win top industry RecSys challenges? How does NVIDIA's Merlin open-source framework help democratize recommender system development? Join Even Oldridge and Alex Castrounis for a discussion on these topics and more. |    SUBSCRIBE – YouTube: https://bit.ly/aiwalexs | Alex's Newsletter: https://www.whyofai.com/newsletter | LEARN – Artificial Intelligence Courses and Certifications at Why of AI: https://www.whyofai.com | Alex's Book: https://www.whyofai.com/ai-book | Alex's Book on Amazon: https://amzn.to/2O54wQU |  SOCIAL – Twitter: https://twitter.com/alexcastrounis | LinkedIn: https://www.linkedin.com/in/alexcastrounis | © Why of AI 2021. All Rights Reserved.Support the show (https://www.buymeacoffee.com/alexcastrounis/)