Podcasts about Deep learning

Branch of machine learning

  • 1,712 PODCASTS
  • 4,924 EPISODES
  • 41m AVG DURATION
  • 1 DAILY NEW EPISODE
  • Sep 16, 2025 LATEST
Deep learning

POPULARITY

Popularity trend chart, 2017–2024


Best podcasts about Deep learning

Show all podcasts related to deep learning

Latest podcast episodes about Deep learning

Le rendez-vous Tech
Apple announces a lineup more solid than dazzling – RDV Tech

Le rendez-vous Tech

Play Episode Listen Later Sep 16, 2025 63:16


On the program: Apple's best announcement is a square camera. An AI becomes a government minister in Albania (this is not a joke). In Switzerland, the government proposes to kill privacy. Plus the rest of the news.
Info: Hosted by Patrick Beja (Bluesky, Instagram, Twitter, TikTok). Co-hosted by Marion Doumeingts (Instagram, Bluesky, Twitter). Co-hosted by Jeff Clavier (Instagram, Twitter). Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn). Royalty-free music by Daniel Beja.
Le Rendez-vous Tech episode 632 – Apple announces a lineup more solid than dazzling
---
Links:

In The Money Players' Podcast
Players' Podcast: Turf Champions Day Recap + Jacob West Reports In

In The Money Players' Podcast

Play Episode Listen Later Sep 15, 2025 48:04


Nick Tammaro and PTF lead off with stakes analysis from Woodbine and Churchill Downs. Where does Notable Speech fit in the BC Mile picture? Is Bentornato a good bet in the Sprint? Where will we see Deep Learning and Teddy's Rocket next? They answer these questions and many more.
Next up, Goffs USA agent Jacob West joins PTF. They talk about the generally positive health of the industry from a breeding and sales side, and look ahead to the Goffs Orby sale and the myriad opportunities afforded to the buyers who shop there. We also get an update on some of the impressive two-year-olds and BC Classic contenders (Fierceness, Mindframe) Jacob is associated with through his role on the Repole bloodstock team.


Practical AI
Cracking the code of failed AI pilots

Practical AI

Play Episode Listen Later Sep 11, 2025 46:44 Transcription Available


In this Fully Connected episode, we dig into the recent MIT report revealing that 95% of AI pilots fail before reaching production and explore what it actually takes to succeed with AI solutions. We dive into the importance of AI model integration, asking the right questions when adopting new technologies, and why simply accessing a powerful model isn't enough. We explore the latest AI trends, from GPT-5 to open source models, and their impact on jobs, machine learning, and enterprise strategy.
Featuring: Chris Benson – Website, LinkedIn, Bluesky, GitHub, X; Daniel Whitenack – Website, GitHub, X
Links: The GenAI Divide: State of AI in Business 2025; MIT Report: 95% of generative AI pilots at companies are failing
Sponsors:
Miro – The innovation workspace for the age of AI. Built for modern teams, Miro helps you turn unstructured ideas into structured outcomes, fast. Diagramming, product design, and AI-powered collaboration, all in one shared space. Start building at miro.com
Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business, no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalai
Upcoming Events: Join us at the Midwest AI Summit on November 13 in Indianapolis to hear world-class speakers share how they've scaled AI solutions. Don't miss the AI Engineering Lounge, where you can sit down with experts for hands-on guidance. Reserve your spot today! Register for upcoming webinars here!

Le rendez-vous Tech
Google escapes the worst in its trial with the US – RDV Tech

Le rendez-vous Tech

Play Episode Listen Later Sep 9, 2025 88:21


On the program: Monopoly trial: Google avoids the worst. Anthropic will pay $1.5 billion to the authors it pirated. Mini predictions for the Apple conference. Plus the rest of the news.
And thanks to Freelance Informatique, this episode's sponsor! Full details at http://freelance-informatique.fr/.
Info: Hosted by Patrick Beja (Bluesky, Instagram, Twitter, TikTok). Co-hosted by Cédric Ingrand (Twitter and Bluesky). Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn). Royalty-free music by Daniel Beja.
Le Rendez-vous Tech episode 631 – Google escapes the worst in its trial with the US – Google vs US, Anthropic pays the authors, Musk pilots Gork, digital euro
---
Links:

The Art of Teaching
Jamie Gerlach: Deep learning, human agency and the power of play.

The Art of Teaching

Play Episode Listen Later Sep 9, 2025 42:31


Today I'm joined by Jamie Gerlach, an educator and leader who believes in the power of deep learning to build human agency and sees access to rigorous play as a basic right for all. For over a decade, he has designed and led professional learning in embodied literacy, 4C learning and school transformation, working with teachers and leaders across Australia and internationally. Jamie has also lectured at the University of Sydney and spoken at national conferences. In this conversation, we explore his philosophy, passion and practical insights for reimagining education.

The Yellow Block
Deep Learning

The Yellow Block

Play Episode Listen Later Sep 6, 2025 65:51


The Yellow Block is back, you lucky people. Hosted by Dan, Rob and Nick. On the talkSPORT fan network Sponsored by QCS (Quince Contracting Services – Your end-to-end solution for Facilities Management, Compliance, Project Management and Installation). Hosted on Acast. See acast.com/privacy for more information.

Perfect English Podcast
The Story of AI | The Human Odyssey Series

Perfect English Podcast

Play Episode Listen Later Sep 5, 2025 24:05


This is the story of a dream, perhaps one of humanity's oldest and most audacious: the dream of a thinking machine. It's a tale that begins not with silicon and code, but with myths of bronze giants and legends of clay golems. We'll journey from the smoke-filled parlors of Victorian England, where the first computers were imagined, to a pivotal summer conference in 1956 where a handful of brilliant, tweed-clad optimists officially christened a new field: Artificial Intelligence. But this is no simple tale of progress. It's a story of dizzying highs and crushing lows, of a dream that was promised, then deferred, left to freeze in the long "AI Winter." We'll uncover how it survived in obscurity, fueled by niche expert systems and a quiet, stubborn belief in its potential. Then, we'll witness its spectacular rebirth, a renaissance powered by two unlikely forces: the explosion of the internet and the graphical demands of video games. This is the story of Deep Learning, of machines that could finally see, and of the revolution that followed. We'll arrive in our present moment, a strange new world where we converse daily with Large Language Models—our new, slightly unhinged, and endlessly fascinating artificial companions. This isn't just a history of technology; it's the biography of an idea, and a look at how it's finally, complicatedly, come of age. To unlock full access to all our episodes, consider becoming a premium subscriber on Apple Podcasts or Patreon. And don't forget to visit englishpluspodcast.com for even more content, including articles, in-depth studies, and our brand-new audio series and courses now available in our Patreon Shop!

Product Guru's
GTM Engineer: The New Tech Profession Revolutionizing Marketing and Sales!

Product Guru's

Play Episode Listen Later Sep 3, 2025 47:28


In this episode of Product Guru's, we explore the role of the GTM Engineer, a hybrid function that unites technology, automation, marketing, and sales in a cohesive growth strategy. Guest Natalia Gaal shares her hands-on experience with the tool Clay, which is transforming how companies orchestrate data and personalize B2B marketing campaigns with high precision. Over the course of the conversation, we discuss the origin of the term GTM Engineer, the role's different professional archetypes, how the US market's maturity differs from Brazil's, and the opportunities for anyone who wants to specialize in the area. We also take a visual tour of Clay, showing in practice how it differs from CRMs and traditional automation tools. If you want to understand the future of the integration between technology and marketing, don't miss this episode!
/// You will learn:
• GTM Engineer is a hybrid role that unites technology, marketing, and sales.
• The profession emerged and became popular with the company Clay, in 2023.
• The main job is to integrate tools and systems strategically.
• There are four main GTM Engineer archetypes with different focuses.
• Clay is a data orchestration tool, not a CRM.
• GTM Engineers are highly valued by American companies, and hired from Brazil.
• Brazil is still at an early stage, but with enormous growth potential.
• Systems thinking matters more than mastering many tools.
• Personalizing campaigns with enriched data is Clay's big differentiator.
• Networking and communities are crucial for growing in this new profession.
/// Important note: The future of digital products has already begun, and Artificial Intelligence is part of the team. PM3 has just launched its AI Product Management program: a course designed for Product Managers who want to create, delegate, and innovate more intelligently. Far beyond prompts: you will learn to lead AI-based products, master topics such as Machine Learning, Deep Learning, and Generative AI, and apply new forms of discovery, experimentation, and validation. Get ready for the fastest-growing market in the world and become the PM who leads the transformation.
Learn more at: https://go.pm3.com.br/ProductGurus-AI-Specialist
Coupon: PRODUCTGURUS
/// Natalia Gaal's LinkedIn: https://www.linkedin.com/in/nataliagaal/
/// Other links mentioned:
Clay University: https://www.clay.com/university
Clay Cohorts: https://www.clay.com/university/cohorts
Clay Blog: https://www.clay.com/blog
Clay Slack community: https://clayhq.typeform.com/slack?typeform-source=community.clay.com
Clay Bootcamp + GTM Engineering YouTube videos (full of great videos): https://www.youtube.com/@nathanlippi
Clay Bootcamp: https://www.claybootcamp.com/
The rise of the GTM engineer, Clay blog post: https://www.clay.com/blog/gtm-engineering
GTM Engineer Archetypes by Victor Kim: https://claude.ai/public/artifacts/7c5edf14-f674-4141-8c68-4b1b955f1b3b
Revenue Thinkers, a RevOps community in Brazil: https://revenuethinkers.com/
Bruno's LinkedIn (Head of Marketing at Clay): https://www.linkedin.com/in/brunoestrella/
/// Chapters:
00:00 – Introduction
02:20 – The importance of the GTM Engineer role in companies
06:00 – The landscape in the US vs. Brazil
11:09 – Training in product management and AI
13:08 – The 4 GTM Engineer archetypes
18:42 – What Clay is and how it works in practice
31:21 – Market maturity: Brazil vs. the US
37:33 – How the GTM Engineer connects marketing and sales
41:41 – Advice for anyone who wants to become a GTM Engineer
46:13 – Closing
/// Where to find Product Guru's:
WhatsApp: https://whatsapp.com/channel/0029Va7uwHS5fM5U0LIatu3X
X (formerly Twitter): https://twitter.com/product_gurus
LinkedIn: https://www.linkedin.com/company/product-guru-s/
Instagram: https://www.instagram.com/product.gurus/

Le rendez-vous Tech
AI threatens the stock market, but it's not (that) bad? – RDV Tech

Le rendez-vous Tech

Play Episode Listen Later Sep 2, 2025 80:13


On the program: Korben has left social media, and he explains why. Why the AI boom threatens the US stock market. The EU doesn't want to give in to Trump and the GAFAM. Plus the rest of the news.
Info: Hosted by Patrick Beja (Bluesky, Instagram, Twitter, TikTok). Co-hosted by Jérôme Keinborg (Bluesky). Co-hosted by Korben (site). Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn). Royalty-free music by Daniel Beja.
Le Rendez-vous Tech episode 630 – AI threatens the stock market, but it's not (that) bad?
---
Links:

Oracle University Podcast
The AI Workflow

Oracle University Podcast

Play Episode Listen Later Sep 2, 2025 22:08


Join Lois Houston and Nikita Abraham as they chat with Yunus Mohammed, a Principal Instructor at Oracle University, about the key stages of AI model development. From gathering and preparing data to selecting, training, and deploying models, learn how each phase impacts AI's real-world effectiveness. The discussion also highlights why monitoring AI performance and addressing evolving challenges are critical for long-term success.
AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X: https://x.com/Oracle_Edu
Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.
---
Episode Transcript:
00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!
00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.
Nikita: Hey everyone! In our last episode, we spoke about generative AI and gen AI agents. Today, we're going to look at the key stages in a typical AI workflow. We'll also discuss how data quality, feedback loops, and business goals influence AI success. With us today is Yunus Mohammed, a Principal Instructor at Oracle University.
01:00 Lois: Hi Yunus! We're excited to have you here! Can you walk us through the various steps in developing and deploying an AI model?
Yunus: The first step is to collect data. We gather relevant data, either historical or real time, like customer transactions, support tickets, survey feedback, or sensor logs.
A travel company, for example, can collect past booking data to predict future demand. So data is the most crucial component for building your AI models. But it's not just about having the data; you need to prepare it. In the data preparation step, we clean, organize, and label the data. AI can't learn from messy spreadsheets. We make the data more understandable and organized: removing duplicates, filling missing values with defaults, or formatting dates. All of this is part of organizing the data, and we also label it so it can be used for supervised learning.
After preparing the data, we select the model to train. We pick the type of model that fits your goals: a traditional ML model, a deep learning network, or a generative model. The model is chosen based on the business problem and the data we have. We then train the model on the prepared data so it can learn the patterns in it.
After the model is trained, we evaluate it. You check how well the model performs: is it accurate, is it fair? The evaluation metrics vary based on the goal you're trying to reach. If your model frequently misclassifies official email as spam, it is not ready, so we train it further until it identifies official mail as official mail and spam as spam accurately.
Once the evaluation shows the model fits well, we move to the next step: deploying the model. We put it into the real world, integrating it into a CRM (Customer Relationship Management system), embedding it in a web application, or exposing it through an API (application programming interface).
For example, a chatbot becomes available on your company's website, and that chatbot might be using a generative AI model. Once the model is deployed and working, we need to keep track of how it is performing and improve it whenever needed. This is the monitor-and-improve stage. AI isn't set-it-and-forget-it: over time, a lot of changes happen to the data, so we monitor performance and retrain when needed. An e-commerce recommendation model needs updates as trends shift.
The end user finally sees the results after all these processes: a better product, a smarter service, or faster decision-making. If we get the flow right, they may not even realize AI is behind the accurate results they're getting.
04:59 Nikita: Got it. So, everything in AI begins with data. But what are the different types of data used in AI development?
Yunus: We work with three main types of data: structured, unstructured, and semi-structured. Structured data is like a clean set of tables in Excel or a database: rows and columns with clear, consistent information. Unstructured data is messy data, like emails, recorded customer calls, videos, or social media posts. Semi-structured data is things like logs, XML files, or JSON files: not quite neat, but not entirely messy either.
05:58 Nikita: OK… and how do the data needs vary for different AI approaches?
Yunus: Machine learning often needs labeled data. A bank might feed it past transactions labeled as fraud or not fraud to train a fraud detection model. But machine learning also includes unsupervised learning, like clustering customer spending behavior. Here, no labels are needed.
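The unsupervised clustering just mentioned (grouping customer spending behavior with no labels at all) can be sketched with a tiny one-dimensional k-means. This is a minimal illustration, not anything from the episode; the spend values and the min/max seeding are hypothetical and only sensible for k=2.

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny k-means on one feature (e.g. monthly customer spend).
    No labels are needed: the algorithm discovers the groups by itself."""
    # Seed the centers with the smallest and largest values (fine for this k=2 sketch).
    centers = [min(values), max(values)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            # Assign each value to its nearest center.
            i = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[i].append(v)
        # Move each center to the mean of its assigned values.
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers, groups

spend = [20, 25, 30, 400, 420, 410]  # two obvious spending behaviors
centers, groups = kmeans_1d(spend)
```

With this toy data the algorithm separates the low spenders from the high spenders without ever being told which is which, which is the point Yunus makes about clustering.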
Deep learning needs a lot of data, usually unstructured: thousands of loan documents, call recordings, or scanned checks. These are fed into neural networks to detect complex patterns. Data science focuses on insights rather than predictions: a data scientist at the bank might use customer relationship management exports and customer demographics to analyze which age group prefers credit cards over loans. Then we have generative AI, which thrives on diverse, unstructured, internet-scale data: books, code, images, chat logs. Models like ChatGPT are trained on this data to generate responses, mimic styles, and synthesize content. Generative AI can power a banking virtual assistant trained on chat logs and frequently asked questions to answer customer queries 24/7.
07:35 Lois: What are the challenges when dealing with data?
Yunus: Data isn't just about having enough. We must also think about quality: is it accurate and relevant? Volume: do we have enough for the model to learn from? Bias: does the data contain unfair patterns, like rejecting more loan applications from a certain zip code? And privacy: are we handling personal data responsibly, especially regulated data like banking records or patient health data? Before building anything smart, we must start smart.
08:23 Lois: So, we've established that collecting the right data is non-negotiable for success. Then comes preparing it, right?
Yunus: This is arguably the most important part of any AI or data science project. Clean data leads to reliable predictions. Imagine you have a column for age, and someone accidentally entered an age of 999. That's likely a data entry error. Or maybe a few rows have missing ages. We either fix, remove, or impute such issues.
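The cleaning steps described above (removing duplicates, filling missing values, normalizing date formats) can be sketched in plain Python. The booking records, field names, and default price below are all invented for illustration; this is a sketch of the idea, not a production pipeline.

```python
from datetime import datetime

# Hypothetical raw booking records: a duplicate row, a missing price,
# and two different date layouts, the kinds of issues Yunus describes.
raw = [
    {"id": 1, "destination": "Paris", "price": 450.0, "date": "2024-07-01"},
    {"id": 1, "destination": "Paris", "price": 450.0, "date": "2024-07-01"},  # duplicate
    {"id": 2, "destination": "Rome", "price": None, "date": "01/08/2024"},    # missing price
]

def prepare(records, default_price=0.0):
    """Clean and organize records: drop duplicates, fill missing values
    with a default, and bring dates into one consistent format."""
    seen, cleaned = set(), []
    for r in records:
        if r["id"] in seen:
            continue  # drop duplicate rows
        seen.add(r["id"])
        price = r["price"] if r["price"] is not None else default_price
        # Normalize the two date layouts to ISO format (YYYY-MM-DD).
        try:
            date = datetime.strptime(r["date"], "%Y-%m-%d").date()
        except ValueError:
            date = datetime.strptime(r["date"], "%d/%m/%Y").date()
        cleaned.append({"id": r["id"], "destination": r["destination"],
                        "price": price, "date": date.isoformat()})
    return cleaned

clean = prepare(raw)
```

In a real project one would impute with something smarter than a constant (a median, say), but the shape of the step is the same.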
This step ensures our model isn't misled by incorrect values. Dates are often stored in different formats: month first in some places, day first in others. We want to bring everything into a consistent, usable format. This process is called transformation. Machine learning models can also get confused if one feature, like income, ranges from 10,000 to 100,000 while another, like the number of kids, ranges from 0 to 5. So we normalize or scale values to bring them into a similar range, say 0 to 1. Binary options, like yes or no, simply become 1 or 0. Models also don't understand words like small, medium, or large, so we convert them into numbers using encoding; one simple way is assigning 1, 2, and 3 respectively. For text, we remove stop words and punctuation and break sentences into smaller meaningful units called tokens. This is used for generative AI tasks. In deep learning, especially for gen AI, image or audio inputs must be of uniform size and format.
10:31 Lois: And does each AI system have a different way of preparing data?
Yunus: For machine learning, the focus is on cleaning, encoding, and scaling. Deep learning needs resizing and normalization for text and images. Data science is about reshaping, aggregating, and getting data ready for insights. Generative AI needs special preparation like chunking, tokenizing large documents, or compressing images.
11:06 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025.
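The three transforms Yunus walks through (scaling a numeric feature to 0–1, encoding categories as numbers, and tokenizing text) can be sketched as small helpers. The income values, the small/medium/large mapping, and the stop-word list are taken from or inspired by the episode's examples; everything else is a hypothetical illustration.

```python
def min_max_scale(values):
    """Scale a numeric feature into the 0-1 range so wide-range features
    (income: 10,000-100,000) don't dominate narrow ones (kids: 0-5)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def encode(labels, mapping={"small": 1, "medium": 2, "large": 3}):
    """Encode categorical words as numbers, e.g. small/medium/large -> 1/2/3."""
    return [mapping[l] for l in labels]

def tokenize(sentence, stop_words={"the", "a", "is"}):
    """Lowercase, strip punctuation, drop stop words, split into tokens."""
    words = sentence.lower().replace(".", "").replace(",", "").split()
    return [w for w in words if w not in stop_words]

incomes = min_max_scale([10_000, 55_000, 100_000])
sizes = encode(["small", "large", "medium"])
tokens = tokenize("The loan is approved, finally.")
```

Real tokenizers for generative AI (byte-pair encoding and the like) are far more involved, but the idea of breaking text into small meaningful units is the same.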
11:50 Nikita: Welcome back! Yunus, how does a user choose the right model to solve their business problem?
Yunus: Just like a business uses different dashboards for marketing versus finance, in AI we use different model types depending on what we are trying to solve. Classification is choosing a category: is this email spam or not? It's used in fraud detection, medical diagnosis, and so on. Regression is predicting a number: what will the price of a house be next month? It's useful for forecasting sales, demand, or costs. Clustering groups things without labels: segmenting customers by behavior for targeted marketing, for example. It helps discover hidden patterns in large data sets. Generation is creating new content: AI writing product descriptions or generating images. Generative AI models like ChatGPT or DALL-E operate on this principle.
13:16 Nikita: And how do you train a model?
Yunus: We feed it data in small chunks or batches, compare its guesses to the correct values, and adjust its weights to improve next time. The cycle repeats until the model gets good at making predictions. If you're building a fraud detection system, classic ML may be enough. If you want to analyze medical images, you will need deep learning. If you're building a chatbot, go for a generative model like an LLM. For each use case, you select and train the appropriate model.
14:04 Lois: OK, now that the model's been trained, what else needs to happen before it can be deployed?
Yunus: Evaluate the model: assess its accuracy, reliability, and real-world usefulness before it's put to work.
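The training cycle Yunus describes (make a guess, compare it to the correct value, nudge the weights, repeat) can be sketched as stochastic gradient descent on a toy regression problem. The data, learning rate, and epoch count here are all invented for illustration; real training loops use libraries, larger batches, and far more data.

```python
def train(data, lr=0.1, epochs=300):
    """Minimal training loop: guess, compare the guess to the correct
    answer, and adjust the weights to do better next time."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:           # tiny "batches" of one example each
            guess = w * x + b
            error = guess - y       # how far off was the guess?
            w -= lr * error * x     # nudge the weights against the error
            b -= lr * error
    return w, b

# Hypothetical house prices following price = 2 * size + 1 (toy units),
# so the loop should learn weights close to w=2, b=1.
data = [(1, 3), (2, 5), (3, 7)]
w, b = train(data)
```

Because the toy data is noiseless, the loop converges to the exact relationship; with real data it would settle on a best-fit approximation instead.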
That is: how often is the model right? Does it consistently perform well? Is it practical to use in the real world? Bad predictions don't just look bad; they can lead to costly business mistakes. Think of recommending the wrong product to a customer or misidentifying a financial risk. So we start by splitting the data into two parts: training data, which is like teaching the model, and testing data, which is used to check how well the model has learned. Once trained, the model makes predictions, and we compare those predictions to the actual answers, just like checking your answers after a quiz. Evaluation is tailored to the AI type. In machine learning, we care about prediction accuracy. Deep learning is about fitting complex data like voice or images, where the model repeatedly sees examples and tunes itself to reduce errors. In data science, we look for patterns and insights, such as which features matter. In generative AI, we judge by output quality: is it coherent, useful, and natural? The model improves with more training epochs.
15:59 Nikita: So, after all that, we finally come to deploying the model…
Yunus: Deploying a model means integrating it into our actual business systems, so it can start making decisions, automating tasks, or supporting customer experiences in real time. Think of it like this: training is teaching the model, evaluating is testing it, and deployment is giving it a job. The model needs a home, either in the cloud or on your company's own servers, where it can be reached by other tools. It is exposed via an API or embedded in an application; this is how the AI becomes usable. Then the model receives live data and returns predictions.
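The train/test split and quiz-style accuracy check described above can be sketched as follows. The labeled fraud examples, the 80/20 split, and the threshold "model" are all hypothetical stand-ins; a real classifier would of course be learned from the training portion rather than hard-coded.

```python
def accuracy(model, data):
    """Fraction of examples where the model's prediction matches the answer."""
    return sum(model(x) == y for x, y in data) / len(data)

# Hypothetical labeled transactions: (amount, is_fraud).
labeled = [(10, 0), (15, 0), (20, 0), (900, 1), (25, 0),
           (950, 1), (30, 0), (870, 1), (12, 0), (990, 1)]

# Split: the first 80% teaches the model, the held-out 20% checks it.
split = int(len(labeled) * 0.8)
train_data, test_data = labeled[:split], labeled[split:]

def model(amount):
    """A trivial threshold rule standing in for a trained classifier."""
    return 1 if amount > 500 else 0

acc = accuracy(model, test_data)
```

The key point, matching the episode, is that accuracy is measured on data the model was not taught on.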
The model listens to real-time inputs, like a user typing, searching, clicking, or making a transaction, and instantly responds with a recommendation, a decision, or a result. Deploying the model isn't the end of the story; it's just the beginning of the AI's real-world journey. Models may work well on day one, but things change: customer behavior shifts, new products come to market, economic conditions evolve, as in the COVID era, when demand shifted and economic conditions changed.
17:48 Lois: Then it's about monitoring and improving the model to keep things reliable over time.
Yunus: The monitor-and-improve loop is a continuous process that ensures an AI model remains accurate, fair, and effective after deployment. With live predictions, the model runs in real time, making decisions or recommendations. Monitoring performance means asking: are those predictions still accurate and helpful? Is latency acceptable? This is where we track metrics, user feedback, and operational impact. Then we detect issues: is accuracy declining, are responses biased, are customers dropping off due to long response times? The next step is to retrain or update the model: we add fresh data, tweak the logic, or adopt a better architecture, the new version replaces the old one, and the cycle continues.
18:58 Lois: And are there challenges during this step?
Yunus: The common issues in monitoring and improving are model drift, bias, and latency or failures. In model drift, the model becomes less accurate as the environment changes. With bias, the model may favor or penalize certain groups unfairly. With latency or failures, a model that is too slow or fails unpredictably disrupts the user experience. Let's take loan approvals.
In loan approvals, if we notice an unusually high rejection rate due to model bias, we might retrain the model with more diverse or balanced data. For a chatbot, we watch for drops in customer satisfaction caused by model failures and fine-tune the model's responses. In demand forecasting, if the predictions no longer match real trends, say post-pandemic, due to model drift, we update the model with fresh data.
20:11 Nikita: Thanks for that, Yunus. Any final thoughts before we let you go?
Yunus: No matter how advanced your model is, its effectiveness depends on the quality of the data you feed it. The data needs to be clean, structured, and relevant, and it should map to the problem you're solving. If the foundation is weak, the results will be too. Data preparation is not just a technical step; it is a business-critical stage. Once deployed, AI systems must be monitored continuously: watch for drops in performance, emerging bias, or outdated logic, and improve the model with new data or refinements. That's what makes AI reliable, ethical, and sustainable in the long run.
21:09 Nikita: Yunus, thank you for this really insightful session. If you're interested in learning more about the topics we discussed today, go to mylearn.oracle.com and search for the AI for You course.
Lois: That's right. You'll find skill checks to help you assess your understanding of these concepts. In our next episode, we'll discuss the idea of buy versus build in the context of AI. Until then, this is Lois Houston…
Nikita: And Nikita Abraham, signing off!
21:39 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
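The drift scenario from this episode (forecasts that no longer match post-pandemic demand) can be sketched as a crude mean-shift check on a monitored feature. The demand numbers and the 25% threshold below are hypothetical; production drift detection typically uses statistical tests over full distributions, not a single mean.

```python
def mean(xs):
    return sum(xs) / len(xs)

def drifted(train_sample, live_sample, threshold=0.25):
    """Crude drift check: flag when the live feature mean moves more than
    `threshold` (here 25%) away from the mean seen at training time."""
    base = mean(train_sample)
    return abs(mean(live_sample) - base) / abs(base) > threshold

demand_at_training = [100, 110, 95, 105]  # e.g. pre-pandemic demand
demand_live = [60, 55, 70, 65]            # demand after a market shift

needs_retrain = drifted(demand_at_training, demand_live)
```

When the check fires, the monitor-and-improve loop described above kicks in: gather fresh data, retrain, and replace the stale model.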

Practical AI
GenAI risks and global adoption

Practical AI

Play Episode Listen Later Aug 27, 2025 43:20 Transcription Available


Daniel and Chris sit with Citadel AI's Rick Kobayashi and Kenny Song and unpack AI safety and security challenges in the generative AI era. They compare Japan's approach to AI adoption with the US's, and explore the implications of real-world failures in AI systems, along with strategies for AI monitoring and evaluation.
Featuring: Rick Kobayashi – LinkedIn; Kenny Song – LinkedIn; Chris Benson – Website, LinkedIn, Bluesky, GitHub, X; Daniel Whitenack – Website, GitHub, X
Links: Citadel AI
Register for upcoming webinars here!

Le rendez-vous Tech
Tragedies and dramas of August - RDV Tech

Le rendez-vous Tech

Play Episode Listen Later Aug 26, 2025 94:09


On the program: Luc Julia's legitimacy called into question as part of a debunking. The death of Jean Permanove live on Kick. GPT-5 and GPT-OSS. A review of AI innovations. New Google Pixel devices and the rest of the hardware. Ever more security issues.
Info: Hosted by Guillaume Vendé (Bluesky, Mastodon, Threads, Instagram, TikTok, YouTube, techcafe.fr). Co-hosted by Mat (profduweb.com, Apple Différemment, Threads, Instagram, YouTube). Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn). Royalty-free music by Daniel Beja.
Le Rendez-vous Tech episode 629 – Tragedies and dramas of August
---
Links:

Product Guru's
Personalização em Produtos Digitais: Como Gerar Valor Real para o Cliente - Larissa Alcântara

Product Guru's

Play Episode Listen Later Aug 25, 2025 57:27


How do you turn personalization into business results? In this episode of Product Guru's, Larissa Alcântara (Product Manager at RD Saúde) shares insights on using personalization, navigation, and checkout to reduce friction, increase conversion, and deliver real value to the customer./// What you'll learn: • What real personalization means in digital products • How to avoid "over personalization" that creates frustration • Checkout strategies that increase sales • Which metrics and indicators to track (conversion, NPS, health metrics) • Scale vs. personalization: when to apply to everyone or to niches • The PM's role in communicating results and generating impact. This episode is essential for anyone working in digital products, e-commerce, healthtech, customer experience, UX, and product management./// Important note: The future of digital products has already begun, and Artificial Intelligence is part of the team. PM3 has just launched its AI Product Management track: a course designed for Product Managers who want to create, delegate, and innovate more intelligently. Far beyond prompts: you will learn to lead AI-based products, master topics such as Machine Learning, Deep Learning, and Generative AI, and apply new forms of discovery, experimentation, and validation. Get ready for the fastest-growing market in the world and become the PM who leads the transformation. Learn more at: https://go.pm3.com.br/ProductGurus-AI-Specialist Coupon: PRODUCTGURUS/// Larissa's LinkedIn: https://www.linkedin.com/in/larissalcantara/// Chapters 00:00 – Opening and introduction of Larissa Alcântara 02:46 – The danger of over personalization and the PM's real role 09:36 – Personalization beyond the purchase: a cross and post-purchase view 16:17 – AI and personalization are not a silver bullet 23:48 – A battle plan and metrics to validate personalization 33:23 – A simple checkout: shortcuts and friction reduction 42:24 – Scale vs. personalization: how to adapt to regions and profiles 50:14 – Advice for PMs: communicate, understand the business, and say "I don't know" 55:39 – Closing and final lessons/// Where to find Product Guru's: WhatsApp: https://whatsapp.com/channel/0029Va7uwHS5fM5U0LIatu3XX (formerly Twitter): https://twitter.com/product_gurus LinkedIn: https://www.linkedin.com/company/product-guru-s/ Instagram: https://www.instagram.com/product.gurus/

ToKCast
Ep 244: Deep learning is not "inductive".

ToKCast

Play Episode Listen Later Aug 22, 2025 22:35


We are told by people working in the field, researchers and those who publish academic papers on the topic that artificial intelligence or deep learning or LLMs or Machine Learning or Recurrent Neural Networks - call them what you like - employ some form of inductive reasoning. But do they? What is inductive reasoning? What is deductive or abductive for that matter? Is "new physics" or other new science being discovered by the most recent and best chatbots or other "artificially intelligent" computer systems? My response to all that is contained herein.   For images see: https://youtu.be/9Dimv7mOls4 For more information: https://www.bretthall.org/blog/induction

DataTalks.Club
From Medicine to Machine Learning: How Public Learning Turned into a Career - Pastor Soto

DataTalks.Club

Play Episode Listen Later Aug 22, 2025 59:31


In this episode, we talked with Pastor, a medical doctor who built a career in machine learning while studying medicine. Pastor shares how he balanced both fields, leveraged live courses and public sharing to grow his skills, and found opportunities through freelancing and mentoring. TIMECODES 00:00 Pastor's background and early programming journey 06:05 Learning new tools and skills on the job while studying medicine 11:44 Balancing medical studies with data science work and motivation 13:48 Applying medical knowledge to data science and vice versa 18:44 Starting freelance work on Upwork and overcoming language challenges 24:03 Joining the machine learning engineering course and benefits of live cohorts 27:41 Engaging with the course community and sharing progress publicly 35:16 Using LinkedIn and social media for career growth and interview opportunities 41:03 Building reputation, structuring learning, and leveraging course projects 50:53 Volunteering and mentoring with DeepLearning.AI and Stanford Coding Place 57:00 Managing time and staying productive while studying medicine and machine learning. Connect with Pastor: Twitter - https://x.com/PastorSotoB1 Linkedin -   / pastorsoto  Github - https://github.com/sotoblanco Website - https://substack.com/@pastorsoto Connect with DataTalks.Club: Join the community - https://datatalks.club/slack.html Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/... Check other upcoming events - https://lu.ma/dtc-events GitHub: https://github.com/DataTalksClub LinkedIn -   / datatalks-club   Twitter -   / datatalksclub   Website - https://datatalks.club/

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
How Agentic AI is Transforming The Startup Landscape with Andrew Ng

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

Play Episode Listen Later Aug 21, 2025 42:11


Andrew Ng has always been at the bleeding edge of fast-evolving AI technologies, founding companies and projects like Google Brain, AI Fund, and DeepLearning.AI. So he knows better than anyone that founders who operate the same way in 2025 as they did in 2022 are doing it wrong. Sarah Guo and Elad Gil sit down with Andrew Ng, the godfather of the AI revolution, to discuss the rise of agentic AI, and how the technology has changed everything from what makes a successful founder to the value of small teams. They talk about where future capability growth may come from, the potential for models to bootstrap themselves, and why Andrew doesn't like the term “vibe coding.” Also, Andrew makes the case for why everybody in an organization—not just the engineers—should learn to code.  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @AndrewYNg Chapters: 00:00 – Andrew Ng Introduction 00:32 – The Next Frontier for Capability Growth 01:29 – Andrew's Definition of Agentic AI 02:44 – Obstacles to Building True Agents 06:09 – The Bleeding Edge of Agentic AI 08:12 – Will Models Bootstrap Themselves? 09:05 – Vibe Coding vs. AI Assisted Coding 09:56 – Is Vibe Coding Changing the Nature of Startups? 11:35 – Speeding Up Project Management 12:55 – The Evolution of the Successful Founder Profile 19:23 – Finding Great Product People 21:14 – Building for One User Profile vs. Many 22:47 – Requisites for Leaders and Teams in the AI Age 28:21 – The Value of Keeping Teams Small 32:13 – The Next Industry Transformations 34:04 – Future of Automation in Investing Firms and Incubators 37:39 – Technical People as First Time Founders 41:08– Broad Impact of AI Over the Next 5 Years 41:49 – Conclusion

Practical AI
Inside America's AI Action Plan

Practical AI

Play Episode Listen Later Aug 19, 2025 43:52 Transcription Available


Dan and Chris break down Winning the Race: America's AI Action Plan, issued by the White House in July 2025.  Structured as three "pillars" — Accelerate AI Innovation, Build American AI Infrastructure, and Lead in International AI Diplomacy and Security — our dynamic duo unpack the plan's policy goals and its associated suggestions — while also exploring the mixed reactions it's sparked across political lines. They connect the plan to international AI diplomacy and national security interests, discuss its implications for practitioners, and consider how political realities could shape its success in the years ahead. Featuring:Chris Benson – Website, LinkedIn, Bluesky, GitHub, XDaniel Whitenack – Website, GitHub, XLinks:Press Release: White House Unveils America's AI Action PlanPaper: America's AI Action PlanSponsors:Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business—no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalaiRegister for upcoming webinars here!

Le rendez-vous Tech
Crossover: RDV Jeux 393 - Switch 2: Nintendo veut votre argent (COMBIEN??!!)

Le rendez-vous Tech

Play Episode Listen Later Aug 19, 2025 119:46


On the program: Like last month, here is a "crossover" episode to discover my other podcast, RDV Jeux. This time it's episode 393, where we break down the Switch 2 announcement and Ubisoft's possible split from its partner Tencent, plus the games of the moment, of course. More info: https://frenchspin.fr/2025/04/switch-2-nintendo-veut-votre-argent-combien-rdv-jeux/ Info: Hosted by Patrick Beja (Bluesky, Instagram, Twitter, TikTok). Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn). Royalty-free music by Daniel Beja---Links:

Oracle University Podcast
Core AI Concepts – Part 2

Oracle University Podcast

Play Episode Listen Later Aug 19, 2025 12:42


In this episode, Lois Houston and Nikita Abraham continue their discussion on AI fundamentals, diving into Data Science with Principal AI/ML Instructor Himanshu Raj. They explore key concepts like data collection, cleaning, and analysis, and talk about how quality data drives impactful insights.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services.  Nikita: Hi everyone! Last week, we began our exploration of core AI concepts, specifically machine learning and deep learning. I'd really encourage you to go back and listen to the episode if you missed it.   00:52 Lois: Yeah, today we're continuing that discussion, focusing on data science, with our Principal AI/ML Instructor Himanshu Raj.  Nikita: Hi Himanshu! Thanks for joining us again. So, let's get cracking! What is data science?  01:06 Himanshu: It's about collecting, organizing, analyzing, and interpreting data to uncover valuable insights that help us make better business decisions. Think of data science as the engine that transforms raw information into strategic action.  You can think of a data scientist as a detective. They gather clues, which is our data. 
Connect the dots between those clues and ultimately solve mysteries, meaning they find hidden patterns that can drive value.  01:33 Nikita: Ok, and how does this happen exactly?  Himanshu: Just like a detective relies on both instincts and evidence, data science blends domain expertise and analytical techniques. First, we collect raw data. Then we prepare and clean it because messy data leads to messy conclusions. Next, we analyze to find meaningful patterns in that data. And finally, we turn those patterns into actionable insights that businesses can trust.  02:00 Lois: So what you're saying is, data science is not just about technology; it's about turning information into intelligence that organizations can act on. Can you walk us through the typical steps a data scientist follows in a real-world project?  Himanshu: So it all begins with business understanding. Identifying the real problem we are trying to solve. It's not about collecting data blindly. It's about asking the right business questions first. And once we know the problem, we move to data collection, which is gathering the relevant data from available sources, whether internal or external.  Next one is data cleaning. Probably the least glamorous but one of the most important steps. And this is where we fix missing values, remove errors, and ensure that the data is usable. Then we perform data analysis or what we call exploratory data analysis.  Here we look for patterns, trends, and initial signals hidden inside the data. After that comes the modeling and evaluation, where we apply machine learning or deep learning techniques to predict, classify, or forecast outcomes. Machine learning and deep learning are like specialized equipment in a data science detective's toolkit. Powerful but not the whole investigation.  We also check how good the models are in terms of accuracy, relevance, and business usefulness.
Finally, if the model meets expectations, we move to deployment and monitoring, putting the model into real world use and continuously watching how it performs over time.  03:34 Nikita: So, it's a linear process?  Himanshu: It's not linear. That's because in real world data science projects, the process does not stop after deployment. Once the model is live, business needs may evolve, new data may become available, or unexpected patterns may emerge.  And that's why we come back to business understanding again, defining the questions, the strategy, and sometimes even the goals based on what we have learned. In a way, a good data science project behaves like a living system which grows, adapts, and improves over time. Continuous improvement keeps it aligned with business value.   Now, think of it like adjusting your GPS while driving. The route you plan initially might change as new traffic data comes in. Similarly, in data science, new information constantly helps refine our course. The quality of our data determines the quality of our results.   If the data we feed into our models is messy, inaccurate, or incomplete, the outputs, no matter how sophisticated the technology, will also be unreliable. And this concept is often called garbage in, garbage out. Bad input leads to bad output.  Now, think of it like cooking. Even the world's best Michelin star chef can't create a masterpiece with spoiled or poor-quality ingredients. In the same way, even the most advanced AI models can't perform well if the data they are trained on is flawed.  05:05 Lois: Yeah, that's why high-quality data is not just nice to have, it's absolutely essential. But Himanshu, what makes data good?   Himanshu: Good data has a few essential qualities. The first one is complete. Make sure we aren't missing any critical field. For example, every customer record must have a phone number and an email. It should be accurate. The data should reflect reality.
If a customer's address has changed, it must be updated, not outdated. Third, it should be consistent. Similar data must follow the same format. Imagine if the dates are written differently, like 2024/04/28 versus April 28, 2024. We must standardize them.   Fourth one. Good data should be relevant. We collect only the data that actually helps solve our business question, not unnecessary noise. And last one, it should be timely. So data should be up to date. Using last year's purchase data for a real time recommendation engine wouldn't be helpful.  06:13 Nikita: Ok, so ideally, we should use good data. But that's a bit difficult in reality, right? Because what comes to us is often pretty messy. So, how do we convert bad data into good data? I'm sure there are processes we use to do this.  Himanshu: First one is cleaning. So this is about correcting simple mistakes, like fixing typos in city names or standardizing dates.  The second one is imputation. So if some values are missing, we fill them intelligently, for instance, using the average income for a missing salary field. Third one is filtering. In this, we remove irrelevant or noisy records, like discarding fake email signups from marketing data. The fourth one is enriching. We can even enhance our data by adding trusted external sources, like appending credit scores from a verified bureau.  And the last one is transformation. Here, we finally reshape data formats to be consistent, for example, converting all units to the same currency. So even messy data can become usable, but it takes deliberate effort, structured process, and attention to quality at every step.  07:26 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest technology. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! 
Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 08:10 Nikita: Welcome back! Himanshu, we spoke about how to clean data. Now, once we get high-quality data, how do we analyze it?  Himanshu: In data science, there are four primary types of analysis we typically apply depending on the business goal we are trying to achieve.  The first one is descriptive analysis. It helps summarize and report what has happened. So often using averages, totals, or percentages. For example, retailers use descriptive analysis to understand things like what was the average customer spend last quarter? How did store foot traffic trend across months?  The second one is diagnostic analysis. Diagnostic analysis digs deeper into why something happened. For example, hospitals use this type of analysis to find out, for example, why a certain department has higher patient readmission rates. Was it due to staffing, post-treatment care, or patient demographics?  The third one is predictive analysis. Predictive analysis looks forward, trying to forecast future outcomes based on historical patterns. For example, energy companies predict future electricity demand, so they can better manage resources and avoid shortages. And the last one is prescriptive analysis. So it does not just predict. It recommends specific actions to take.  So logistics and supply chain companies use prescriptive analytics to suggest the most efficient delivery routes or warehouse stocking strategies based on traffic patterns, order volume, and delivery deadlines.   09:42 Lois: So really, we're using data science to solve everyday problems. Can you walk us through some practical examples of how it's being applied?  Himanshu: The first one is predictive maintenance. It is done in manufacturing a lot. A factory collects real time sensor data from machines. 
Data scientists first clean and organize this massive data stream, explore patterns of past failures, and design predictive models.  The goal is not just to predict breakdowns but to optimize maintenance schedules, reducing downtime and saving millions. The second one is a recommendation system. It's prevalent in retail and entertainment industries. Companies like Netflix or Amazon gather massive user interaction data such as views, purchases, likes.  Data scientists structure and analyze this behavioral data to find meaningful patterns of preferences and build models that suggest relevant content, eventually driving more engagement and loyalty. The third one is fraud detection. It's applied in the finance and banking sector.  Banks store vast amounts of transaction records. Data scientists clean and prepare this data, understand typical spending behaviors, and then use statistical techniques and machine learning to spot unusual patterns, catching fraud faster than manual checks could ever achieve.  The last one is customer segmentation, which is often applied in marketing. Businesses collect demographics and behavioral data about their customers. Instead of treating all customers the same, data scientists use clustering techniques to find natural groupings, and this insight helps businesses tailor their marketing efforts, offers, and communication for each of those individual groups, making them far more effective.  Across all these examples, notice that data science isn't just building a model. Again, it's understanding the business need, reviewing the data, analyzing it thoughtfully, and building the right solution while helping the business act smarter.
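The fraud-detection use case described here, spotting transactions that deviate from a customer's typical spending, can be sketched with a simple statistical rule. This is a minimal, hypothetical stand-in with made-up amounts; real systems use learned models and far richer features:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.5):
    """Flag transactions far from the typical spend using a z-score rule.
    A large outlier inflates both the mean and the standard deviation,
    so the threshold is kept modest here."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Nine ordinary purchases and one wildly unusual one.
history = [23, 31, 27, 25, 30, 29, 26, 28, 24, 2200]
print(flag_anomalies(history))  # → [2200]
```

A production system would score each incoming transaction against the account's history in real time rather than scanning a batch after the fact.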
Nikita: And if you want to learn more about data science, visit mylearn.oracle.com and search for the AI for You course. Until next time, this is Nikita Abraham…  Lois: And Lois Houston signing off!  12:13 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
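The data-preparation steps Himanshu walks through in this episode (filtering noise, imputing missing values, standardizing formats) can be sketched in a few lines. A minimal, hypothetical example on made-up customer records:

```python
from datetime import datetime
from statistics import mean

# Hypothetical raw records showing the problems described in the episode:
# an inconsistent city name, a missing salary, two date formats, a fake signup.
records = [
    {"city": "new york", "salary": 52000, "signup": "2024/04/28", "email": "a@corp.com"},
    {"city": "New York", "salary": None,  "signup": "April 28, 2024", "email": "b@corp.com"},
    {"city": "Boston",   "salary": 61000, "signup": "2024/05/02", "email": "x@mailinator.com"},
]

# Filtering: drop records from a throwaway email domain (noise).
records = [r for r in records if not r["email"].endswith("@mailinator.com")]

# Imputation: fill missing salaries with the average of the known ones.
known = [r["salary"] for r in records if r["salary"] is not None]
for r in records:
    if r["salary"] is None:
        r["salary"] = mean(known)

# Cleaning + transformation: standardize city names and date formats.
for r in records:
    r["city"] = r["city"].title()
    for fmt in ("%Y/%m/%d", "%B %d, %Y"):
        try:
            r["signup"] = datetime.strptime(r["signup"], fmt).date().isoformat()
            break
        except ValueError:
            pass

print(records)
```

After this pass, every surviving record has a consistent city, a salary, and an ISO-formatted signup date, which is the "clean, structured, relevant" foundation the episode keeps returning to.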

Learning Bayesian Statistics
BITESIZE | What's Missing in Bayesian Deep Learning?

Learning Bayesian Statistics

Play Episode Listen Later Aug 13, 2025 20:34 Transcription Available


Today's clip is from episode 138 of the podcast, with Mélodie Monod, François-Xavier Briol and Yingzhen Li.During this live show at Imperial College London, Alex and his guests delve into the complexities and advancements in Bayesian deep learning, focusing on uncertainty quantification, the integration of machine learning tools, and the challenges faced in simulation-based inference.The speakers discuss their current projects, the evolution of Bayesian models, and the need for better computational tools in the field.Get the full discussion here.Attend Alex's tutorial at PyData Berlin: A Beginner's Guide to State Space Modeling Intro to Bayes Course (first 2 lessons free)Advanced Regression Course (first 2 lessons free)Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!Visit our Patreon page to unlock exclusive Bayesian swag ;)TranscriptThis is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.

Practical AI
Confident, strategic AI leadership

Practical AI

Play Episode Listen Later Aug 12, 2025 47:40 Transcription Available


Allegra Guinan of Lumiera helps leaders turn uncertainty about AI into confident, strategic leadership. In this conversation, she brings some actionable insights for navigating the hype and complexity of AI. The discussion covers challenges with implementing responsible AI practices, the growing importance of user experience and product thinking, and how leaders can focus on real-world business problems over abstract experimentation.Featuring:Allegra Guinan – LinkedInChris Benson – Website, LinkedIn, Bluesky, GitHub, XDaniel Whitenack – Website, GitHub, XLinks:LumieraSponsors:Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business—no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalaiRegister for upcoming webinars here!

Le rendez-vous Tech
Spécial: Tech et IA dans l'imagerie médicale - RDV Tech

Le rendez-vous Tech

Play Episode Listen Later Aug 12, 2025 89:02


On the program: Alexandre talks about how tech and AI have transformed his work as a radiologist over nearly 20 years. Info: Hosted by Patrick Beja (Bluesky, Instagram, Twitter, TikTok). Co-hosted by Alexandre. Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn). Royalty-free music by Daniel Beja. Le Rendez-vous Tech episode 628 – Spécial : Tech et IA dans l'imagerie médicale---Links:

Oracle University Podcast
Core AI Concepts – Part 1

Oracle University Podcast

Play Episode Listen Later Aug 12, 2025 20:08


Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they dive deeper into the world of artificial intelligence, analyzing the types of machine learning. They also discuss deep learning, including how it works, its applications, and its advantages and challenges. From chatbot assistants to speech-to-text systems and image recognition, they explore how deep learning is powering the tools we use today.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we went through the basics of artificial intelligence. If you missed it, I really recommend listening to that episode before you start this one. Today, we're going to explore some foundational AI concepts, starting with machine learning. After that, we'll discuss the two main machine learning models: supervised learning and unsupervised learning. And we'll close with deep learning. Lois: Himanshu Raj, our Principal AI/ML Instructor, joins us for today's episode. Hi Himanshu! Let's dive right in. What is machine learning?  
01:12 Himanshu: Machine learning lets computers learn from examples to make decisions or predictions without being told exactly what to do. It helps computers learn from past data and examples so they can spot patterns and make smart decisions just like humans do, but faster and at scale.  01:31 Nikita: Can you give us a simple analogy so we can understand this better? Himanshu: When you train a dog to sit or fetch, you don't explain the logic behind the command. Instead, you give the dog examples and reinforce correct behavior with rewards, which could be a treat, a pat, or praise. Over time, the dog learns to associate the command with the action and reward. Machine learning learns in a similar way, but with data instead of dog treats. We feed a mathematical system called a model with multiple examples of input and the desired output, and it learns the pattern. It's trial and error, learning from experience.  Here is another example: recognizing faces. Humans are incredibly good at this, even as babies. We don't need someone to explain every detail of the face. We just see many faces over time and learn the patterns. Machine learning models can be trained the same way. We show them thousands or millions of face images, each labeled, and they start to detect patterns like eyes, nose, mouth, spacing, different angles. So eventually, they can recognize faces they have seen before or even match new ones that are similar. So machine learning doesn't have any rules; it's just learning from examples. This is the kind of learning behind things like Face ID on your smartphone, security systems that recognize employees, or even Facebook tagging people in your photos. 03:05 Lois: So, what you're saying is, in machine learning, instead of telling the computer exactly what to do in every situation, you feed the model with data and give it examples of inputs and the correct outputs.
Over time, the model figures out patterns and relationships within the data on its own, and it can make a smart guess when it sees something new. I got it! Now let's move on to how machine learning actually works. Can you take us through the process step by step? Himanshu: Machine learning actually happens in three steps. First, we have the input, which is the training data. Think of this as showing the model a series of examples. It could be images, historical sales data, or customer complaints, whatever we want the machine to learn from. Next comes the pattern finding. This is the brain of the system where the model starts spotting relationships in the data. It figures out things like customers who churn usually contact support twice in the same month. It's not given rules; it just learns patterns based on the examples. And finally, we have output, which is the prediction or decision. This is the result of all this learning. Once trained, the computer or model can say this customer is likely to churn or leave. It's like having a smart assistant that makes fast, data-driven guesses without needing step-by-step instruction. 04:36 Nikita: What are the main elements in machine learning? Himanshu: In machine learning, we work with two main elements, features and labels. You can think of features as the clues we provide to the model, pieces of information like age, income, or product type. And the label is the solution we want the model to predict, like whether a customer will buy or not.  04:55 Nikita: Ok, I think we need an example here. Let's go with the one you mentioned earlier about customers who churn. Himanshu: Imagine we have a table with data like customer age, number of visits, whether they churned or not. And each of these rows is one example. The features are age and visit count. The label is whether the customer churned, that is yes or no.
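The churn table described here (features: age and visit count; label: churned or not) can be sketched as a tiny classifier. This is an illustrative k-nearest-neighbours example with made-up rows, not a method named in the episode:

```python
import math

# Hypothetical training rows: features are (age, visits_per_month),
# the label is whether the customer churned.
training = [
    ((22, 1), "churn"), ((25, 1), "churn"), ((28, 2), "churn"),
    ((47, 8), "stay"),  ((52, 6), "stay"),  ((45, 9), "stay"),
]

def predict(features, k=3):
    """k-nearest-neighbours: label a new customer by majority vote
    of the k most similar training examples."""
    nearest = sorted(training, key=lambda row: math.dist(row[0], features))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(predict((24, 1)))   # young, rare visitor: resembles the churners
print(predict((50, 7)))   # older, frequent visitor: resembles the stayers
```

The model never gets an explicit rule like "under 30 and one visit means churn"; it recovers that pattern purely from the labeled examples, which is the point the episode is making.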
Over time, the model might learn patterns like: customers under 30 who visit only once are more likely to leave, or frequent visitors above age 45 rarely churn. If features are the clues, then the label is the solution, and the model is the brain of the system. It's what the machine learning system builds after learning from many examples, just like we do. And again, the better the features are, the better the learning. ML is just looking for patterns in the data we give it. 05:51 Lois: Ok, we're with you so far. Let's talk about the different types of machine learning. What is supervised learning? Himanshu: Supervised learning is a type of machine learning where the model learns from the input data and the correct answers. Once trained, the model can use what it learned to predict the correct answer for new, unseen inputs. Think of it like a student learning from a teacher. The teacher shows labeled examples like an apple and says, "this is an apple." The student receives feedback on whether their guess was right or wrong. Over time, the student learns to recognize new apples on their own. And that's exactly how supervised learning works. It's learning from feedback using labeled data and then making predictions. 06:38 Nikita: Ok, so supervised learning means we train the model using labeled data. We already know the right answers, and we're essentially teaching the model to connect the dots between the inputs and the expected outputs. Now, can you give us a few real-world examples of supervised learning? Himanshu: First, house price prediction. In this case, we give the model features like square footage, location, and number of bedrooms, and the label is the actual house price. Over time, it learns how to predict prices for new homes. The second one is email: spam or not. In this case, features might include words in the subject line, sender, or links in the email. The label is whether the email is spam or not.
The model learns patterns to help us filter our inbox, as you may have seen in your Gmail inbox. The third one is cat versus dog classification. Here, the features are the pixels in an image, and the label tells us whether it's a cat or a dog. After seeing many examples, the model learns to tell the difference on its own. Let's now focus on one very common form of supervised learning: regression. Regression is used when we want to predict a numerical value, not a category. In simple terms, it helps answer questions like, how much will it be? Or what will the value be? For example, predicting the price of a house based on its size, location, and number of rooms. Or estimating next quarter's revenue based on marketing spend.  08:18 Lois: Are there any other types of supervised learning? Himanshu: While regression is about predicting a number, classification is about predicting a category or type. You can think of it as the model answering: is this a yes or a no? Or, which group does this belong to? Classification is used when the goal is to predict a category or a class. Here, the model learns patterns from historical data where both the input variables, known as features, and the correct categories, called labels, are already known.  08:53 Ready to level-up your cloud skills? The 2025 Oracle Fusion Cloud Applications Certifications are here! These industry-recognized credentials validate your expertise in the latest Oracle Fusion Cloud solutions, giving you a competitive edge and helping drive real project success and customer satisfaction. Explore the certification paths, prepare with MyLearn, and position yourself for the future. Visit mylearn.oracle.com to get started today. 
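The regression idea discussed above, predicting a number such as a house price, can be sketched with a least-squares line fit. The sizes and prices below are made-up illustration data, not from the episode.

```python
import numpy as np

# Hypothetical training data: house size in square feet -> price in $1000s.
sizes = np.array([1000.0, 1500.0, 2000.0, 2500.0])
prices = np.array([200.0, 290.0, 410.0, 500.0])

# The "pattern finding" step: fit price ~ w * size + b by least squares.
X = np.column_stack([sizes, np.ones_like(sizes)])
w, b = np.linalg.lstsq(X, prices, rcond=None)[0]

# The "output" step: predict the price of an unseen 1,800 sq ft house.
predicted = w * 1800 + b
print(f"predicted price: ${predicted:.0f}k")
```

A classification model works the same way end to end, except the output is a category (spam or not, cat or dog) instead of a number.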
It's just handed the raw input data and left to make sense of it on its own. The model explores the data and discovers hidden patterns, groupings, or structures on its own, without being explicitly told what to look for. It's more like a student learning from observations and making their own inferences. 09:55 Lois: Where is unsupervised machine learning used? Can you take us through some of the use cases? Himanshu: The first one is product recommendation. Customers are grouped based on shared behavior, even without knowing their intent. This helps show what other users like you prefer. The second one is anomaly detection. Unusual patterns, such as fraud, network breaches, or manufacturing defects, can stand out, all without needing thousands of labeled examples. And the third one is customer segmentation. Customers can be grouped by purchase history or behavior to tailor experiences, pricing, or marketing campaigns. 10:32 Lois: And finally, we come to deep learning. What is deep learning, Himanshu? Himanshu: Humans learn from experience by seeing patterns repeatedly. The brain learns to recognize an image by seeing it many times. The human brain contains billions of neurons. Each neuron is connected to others through synapses. Neurons communicate by passing signals. The brain adjusts connections based on repeated stimuli. Deep learning was inspired by how the brain works, using artificial neurons and connections. Just like our brains need a lot of examples to learn, so do deep learning models. The more layers and connections there are, the more complex the patterns it can learn. The brain is not hard-coded. It learns from patterns. Deep learning follows the same idea. Metaphorically speaking, a deep learning model can have over a billion neurons, more than a cat's brain, which has around 250 million neurons. Here, the neurons are mathematical units, often called nodes, or simply units. Layers of these units are connected, mimicking how biological neurons interact. 
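The customer segmentation use case mentioned above can be sketched with a tiny k-means loop, a classic unsupervised clustering algorithm: no labels are given anywhere, and the groups emerge from the data alone. The customers below are hypothetical.

```python
import numpy as np

# Hypothetical customers: (visits per month, average spend). No labels anywhere.
customers = np.array([
    [1.0, 20.0], [2.0, 25.0], [1.0, 30.0],     # low-engagement shoppers
    [9.0, 200.0], [10.0, 220.0], [8.0, 180.0]  # frequent big spenders
])

# k-means: repeatedly assign each customer to the nearest center,
# then move each center to the mean of its assigned customers.
centers = customers[[0, 3]].copy()  # start from two arbitrary customers
for _ in range(10):
    dists = np.linalg.norm(customers[:, None, :] - centers[None, :, :], axis=2)
    groups = dists.argmin(axis=1)
    centers = np.array([customers[groups == k].mean(axis=0) for k in range(2)])

print(groups)  # two segments discovered without any labels
```

A marketing team could then inspect each segment and decide, after the fact, what the groups mean, which is exactly the "figure it out on its own" behavior described here.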
So deep learning is a type of machine learning where the computer learns to understand complex patterns. What makes it special is that it uses neural networks with many layers, which is why we call it deep learning. 11:56 Lois: And how does deep learning work? Himanshu: Deep learning is all about finding high-level meaning from low-level data, layer by layer, much like how our brains process what we see and hear. A neural network is a system of connected artificial neurons, or nodes, that work together to learn patterns and make decisions.  12:15 Nikita: I know there are different types of neural networks, with ANNs, or Artificial Neural Networks, being the one for general learning. How is it structured? Himanshu: There is an input layer, which is the raw data (it could be an image, a sentence, or numbers), a hidden layer where patterns are detected or features are learned, and an output layer where the final decision is made. For example, given an image, is this a dog? A neural network is like a team of virtual decision makers, called artificial neurons, or nodes, working together. It takes input data, like a photo, and passes it through layers of neurons. Each neuron makes a small judgment and passes its result to the next layer. This process happens across multiple layers, learning more and more complex patterns as it goes, and the final layer gives the output. Imagine a factory assembly line where each station, or layer, refines the input a bit more. By the end, you have turned raw parts into something meaningful. This is a very simple analogy, but this structure forms the foundation of many deep learning models. More advanced architectures, like convolutional neural networks (CNNs) for images, or recurrent neural networks (RNNs) for sequences, build upon this basic idea. So, what I meant is that the ANN is the base structure, like LEGO bricks. 
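The input layer, hidden layer, output layer structure just described can be sketched as a forward pass: each layer is a matrix multiplication followed by a nonlinearity. The weights here are random, so this shows the plumbing of an untrained network, not a useful model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common nonlinearity: pass positives through, zero out negatives.
    return np.maximum(0.0, x)

# A toy network: 4 inputs -> 5 hidden units -> 1 output.
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)   # input -> hidden
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)   # hidden -> output

def forward(x):
    hidden = relu(x @ W1 + b1)                       # each layer refines the last
    output = 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # squash to a 0-1 "decision"
    return output

x = np.array([0.5, -1.0, 2.0, 0.1])  # raw input features
print(forward(x))  # a probability-like score, e.g. for "is this a dog?"
```

Training would adjust W1, b1, W2, and b2 from labeled examples; stacking more hidden layers between input and output is what makes the network "deep."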
CNNs and RNNs use those same bricks, but arrange them in ways that are better suited for images, videos, or sequences like text or speech.  13:52 Nikita: So, why do we call it deep learning? Himanshu: The word deep in deep learning does not refer to how profound or intelligent the model is. It actually refers to the number of layers in the neural network. It starts with an input layer, followed by hidden layers, and ends with an output layer. The layers are called hidden in the sense that they are black boxes; their data is not visible or directly interpretable to the user. A model with only one hidden layer is called shallow learning. As data moves, each layer builds on what the previous layer has learned. So layer one might detect a very basic feature, like edges or colors in an image. Layer two can take those edges and start forming shapes, like curves or lines. And layer three uses those shapes to identify complete objects, like a face, a car, or a person. This hierarchical learning is what makes deep learning so powerful. It allows the model to learn abstract patterns and generalize across complex data, whether it's visual, audio, or even language. And that's the essence of deep learning. It's not just about layers. It's about how each layer refines the information, moving one step closer to understanding. 15:12 Nikita: Himanshu, where does deep learning show up in our everyday lives? Himanshu: Deep learning is not just about futuristic robots; it's already powering the tools we use today. Think of when you interact with a virtual assistant on a website. Whether you are booking a hotel, resolving a banking issue, or asking customer support questions, behind the scenes, deep learning models understand your text, interpret your intent, and respond intelligently. There are many real-life examples: ChatGPT, Google's Gemini, airline websites' chatbots, banks' virtual agents. The next one is speech-to-text systems. 
For example, if you have ever used voice typing on your phone, dictated a message to Siri, or used Zoom's live captions, you have seen this in action already. The system listens to your voice and instantly converts it into text. This saves time, enhances accessibility, and helps automate tasks, like meeting transcriptions. Again, you may have seen real-life examples, such as Siri, Google Assistant, autocaptioning on Zoom, or YouTube Live subtitles. And lastly, image recognition. For example, hospitals today use AI to detect early signs of cancer in X-rays and CT scans that might be missed by the human eye. Deep learning models can analyze visual patterns, like a suspicious spot on a lung X-ray, and flag abnormalities faster and more consistently than humans. Self-driving cars recognize stop signs, pedestrians, and other vehicles using the same technology. So, for example, cancer detection in medical imaging, Tesla's self-driving navigation, and security systems that recognize faces are very prominent examples of image recognition. 17:01 Lois: Deep learning is one of the most powerful tools we have today to solve complex problems. But like any tool, I'm sure it has its own set of pros and cons. What are its advantages, Himanshu? Himanshu: The first is high accuracy. When trained with enough data, deep learning models can outperform humans, for example, spotting early signs of cancer in X-rays with higher accuracy. The second is handling of unstructured data. Deep learning shines when working with messy real-world data, like images, text, and voice. It's why your phone can recognize your face or transcribe your speech into text. The third one is automatic pattern learning. Unlike traditional models that need hand-coded features, deep learning models figure out important patterns by themselves, making them extremely flexible. And the fourth one is scalability. 
Once trained, deep learning systems can scale easily, serving millions of customers, like Netflix recommending movies personalized to each one of us. 18:03 Lois: And what about its challenges? Himanshu: The first one is that it's data and resource intensive. Deep learning demands huge amounts of labeled data and powerful computing hardware, which means high costs, especially during training. The second is a lack of explainability. These models often act like a black box. We know the output, but it's hard to explain exactly how the model reached that decision. This becomes a problem in areas like health care and finance, where transparency is critical. The third challenge is vulnerability to bias. If the data contains biases, like favoring certain groups, the model will learn and amplify those biases unless we manage them carefully. The fourth and last challenge is that it's harder to debug and maintain. Unlike a traditional software program, it's tough to manually correct a deep learning model if it starts behaving unpredictably. It requires retraining with new data. So deep learning offers powerful opportunities to solve complex problems using data, but it also brings challenges that require careful strategy, resources, and responsible use. 19:13 Nikita: We're taking away a lot from this conversation. Thank you so much for your insights, Himanshu.  Lois: If you're interested in learning more, make sure you log into mylearn.oracle.com and look for the AI for You course. Join us next week for part 2 of the discussion on AI Concepts & Terminology, where we'll focus on Data Science. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 19:39 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Practical AI
Educating a data-literate generation

Practical AI

Play Episode Listen Later Aug 8, 2025 44:41 Transcription Available


Dan sits down with guests Mark Daniel Ward and Katie Sanders from The Data Mine at Purdue University to explore how higher education is evolving to meet the demands of the AI-driven workforce. They share how their program blends interdisciplinary learning, corporate partnerships, and real-world data science projects to better prepare students across 160+ majors. From AI chatbots to agricultural forecasting, they discuss the power of living-learning communities, how the data mine model is spreading to other institutions and what it reveals about the future of education, workforce development, and applied AI training.Featuring:Mark Daniel Ward – LinkedInKatie Sanders – LinkedInDaniel Whitenack – Website, GitHub, XLinks:The Data MineSponsors:Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business—no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalaiRegister for upcoming webinars here!

Order in the Court
To Fear or Not to Fear: The Fundamentals of AI and the Law

Order in the Court

Play Episode Listen Later Aug 7, 2025 46:02


On this episode, host Paul W. Grimm speaks with Professor Maura R. Grossman about the fundamentals of artificial intelligence and its growing influence on the legal system. They explore what AI is (and isn't), how machine learning and natural language processing work, and the differences between traditional automation and modern generative AI. In layman's terms, they discuss other key concepts, such as supervised and unsupervised learning, reinforcement training, and deepfakes, and other advances that have accelerated AI's development. Finally, they address a few potential risks of generative AI, including hallucinations, bias, and misuse in court, which sets the stage for a deeper conversation about legal implications on the next episode, "To Trust or Not to Trust: AI in Legal Practice." ABOUT THE HOSTJudge Paul W. Grimm (ret.) is the David F. Levi Professor of the Practice of Law and Director of the Bolch Judicial Institute at Duke Law School. From December 2012 until his retirement in December 2022, he served as a district judge of the United States District Court for the District of Maryland, with chambers in Greenbelt, Maryland. Click here to read his full bio.

Learning Bayesian Statistics
#138 Quantifying Uncertainty in Bayesian Deep Learning, Live from Imperial College London

Learning Bayesian Statistics

Play Episode Listen Later Aug 6, 2025 83:10 Transcription Available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!Intro to Bayes Course (first 2 lessons free)Advanced Regression Course (first 2 lessons free)Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!Visit our Patreon page to unlock exclusive Bayesian swag ;)Takeaways:Bayesian deep learning is a growing field with many challenges.Current research focuses on applying Bayesian methods to neural networks.Diffusion methods are emerging as a new approach for uncertainty quantification.The integration of machine learning tools into Bayesian models is a key area of research.The complexity of Bayesian neural networks poses significant computational challenges.Future research will focus on improving methods for uncertainty quantification. Generalized Bayesian inference offers a more robust approach to uncertainty.Uncertainty quantification is crucial in fields like medicine and epidemiology.Detecting out-of-distribution examples is essential for model reliability.Exploration-exploitation trade-off is vital in reinforcement learning.Marginal likelihood can be misleading for model selection.The integration of Bayesian methods in LLMs presents unique challenges.Chapters:00:00 Introduction to Bayesian Deep Learning03:12 Panelist Introductions and Backgrounds10:37 Current Research and Challenges in Bayesian Deep Learning18:04 Contrasting Approaches: Bayesian vs. 
Machine Learning26:09 Tools and Techniques for Bayesian Deep Learning31:18 Innovative Methods in Uncertainty Quantification36:23 Generalized Bayesian Inference and Its Implications41:38 Robust Bayesian Inference and Gaussian Processes44:24 Software Development in Bayesian Statistics46:51 Understanding Uncertainty in Language Models50:03 Hallucinations in Language Models53:48 Bayesian Neural Networks vs Traditional Neural Networks58:00 Challenges with Likelihood Assumptions01:01:22 Practical Applications of Uncertainty Quantification01:04:33 Meta Decision-Making with Uncertainty01:06:50 Exploring Bayesian Priors in Neural Networks01:09:17 Model Complexity and Data Signal01:12:10 Marginal Likelihood and Model Selection01:15:03 Implementing Bayesian Methods in LLMs01:19:21 Out-of-Distribution Detection in LLMsThank you to my Patrons for making this episode possible!Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor,, Chad Scherrer,...

Podlodka Podcast
Podlodka #436 – Mathematics in AI

Podlodka Podcast

Play Episode Listen Later Aug 5, 2025 86:43


Many people know that when models are trained, matrices and tensors are multiplied somewhere under the hood, and that it's all tied to differentiation. Together with Denis Stepanov, we took on the tricky task of figuring out exactly what happens there! We also look forward to your likes, reposts, and comments in messengers and social networks!
 Telegram chat: https://t.me/podlodka Telegram channel: https://t.me/podlodkanews Facebook page: www.facebook.com/podlodkacast/ Twitter account: https://twitter.com/PodcastPodlodka Hosts in this episode: Zhenya Katella, Anya Simonova Useful links: Dive into Deep Learning Aston Zhang, Zachary C. Lipton, Mu Li, Alexander J. Smola (online book with code and formulas) https://d2l.ai/ https://www.amazon.com/s/ref=dp_byline_sr_book_2?ie=UTF8&field-author=Zachary+C.+Lipton&text=Zachary+C.+Lipton&sort=relevancerank&search-alias=books Micrograd by Andrej Karpathy https://github.com/karpathy/micrograd Andrej Karpathy builds GPT from scratch https://www.youtube.com/watch?v=kCc8FmEb1nY Scott Aaronson on LLM Watermarking https://www.youtube.com/watch?v=YzuVet3YkkA Annotated history of Modern AI and Deep Learning by Jurgen Schmidhuber https://people.idsia.ch/~juergen/deep-learning-history.html Probabilistic Machine Learning: An Introduction Kevin Patrick Murphy https://probml.github.io/pml-book/book1.html Probabilistic Machine Learning: Advanced Topics Kevin Patrick Murphy https://probml.github.io/pml-book/book2.html Pattern Recognition and Machine Learning Christopher Bishop https://www.microsoft.com/en-us/research/wp-content/uploads/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf Deep Learning: Foundations and Concepts Christopher Bishop, Hugh Bishop https://www.bishopbook.com/ Deep Learning Ian Goodfellow, Yoshua Bengio, Aaron Courville https://www.deeplearningbook.org/ Deep Learning: Diving into the World of Neural Networks S. Nikolenko, A. Kadurin, E. Arkhangelskaya https://www.k0d.cc/storage/books/AI,%20Neural%20Networks/%D0%93%D0%BB%D1%83%D0%B1%D0%BE%D0%BA%D0%BE%D0%B5%20%D0%BE%D0%B1%D1%83%D1%87%D0%B5%D0%BD%D0%B8%D0%B5%20(%D0%9D%D0%B8%D0%BA%D0%BE%D0%BB%D0%B5%D0%BD%D0%BA%D0%BE).pdf Gonzo reviews of ML papers Grigory Sapunov, Alexey Tikhonov https://t.me/gonzo_ML Machine Learning Street Talk podcast https://www.youtube.com/c/machinelearningstreettalk Feedforward NNs, Autograd, Backprop (Datalore report, Denis Stepanov) https://datalore.jetbrains.com/report/static/Ht_isxs4iB2.BNIqv-C3WUp/pEpNv2eMVU9tEkPsaboR9y Softmax Regression, Adversarial Attacks (Datalore report, Denis Stepanov) https://datalore.jetbrains.com/report/static/Ht_isxs4iB2.BNIqv-C3WUp/cIvd6zX1B5I3kULNiVCEyy Dual Numbers, PINN (Datalore report, Denis Stepanov) https://datalore.jetbrains.com/report/static/Ht_isxs4iB2.BNIqv-C3WUp/3oa1BNrPGpQ8uc82tCaz5d

Le rendez-vous Tech
Special episode: The death of Steve Jobs (rerun) - RDV Tech

Le rendez-vous Tech

Play Episode Listen Later Aug 5, 2025 47:23


On the program: In this special episode, I take you on a dive into the RDV Tech archives. We go back to October 2011, to the death of Steve Jobs. We discussed it in RDV Tech 71, with Guillaume Main, Olivier Frigara, and Cédric Ingrand at the mic, a long and very interesting 45-minute discussion. You'll notice that the audio quality of some guests isn't optimal, but it's perfectly listenable. Info: Hosted by Patrick Beja (Bluesky, Instagram, Twitter, TikTok). Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn). Royalty-free music by Daniel Beja --- Links:

Oracle University Podcast

In this episode, hosts Lois Houston and Nikita Abraham, together with Senior Cloud Engineer Nick Commisso, break down the basics of artificial intelligence (AI). They discuss the differences between Artificial General Intelligence (AGI) and Artificial Narrow Intelligence (ANI), and explore the concepts of machine learning, deep learning, and generative AI. Nick also shares examples of how AI is used in everyday life, from navigation apps to spam filters, and explains how AI can help businesses cut costs and boost revenue.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Welcome to a new season of the podcast. I'm so excited about this one because we're going to dive into the world of artificial intelligence, speaking to many experts in the field. Nikita: If you've been listening to us for a while, you probably know we've covered AI from a bunch of different angles. But this time, we're dialing it all the way back to basics. We wanted to create something for the absolute beginner, so no jargon, no assumptions, just simple conversations that anyone can follow. 01:08 Lois: That's right, Niki. 
You don't need to have a technical background or prior experience with AI to get the most out of these episodes. In our upcoming conversations, we'll break down the basics of AI, explore how it's shaping the world around us, and understand its impact on your business. Nikita: The idea is to give you a practical understanding of AI that you can use in your work, especially if you're in sales, marketing, operations, HR, or even customer service.  01:37 Lois: Today, we'll talk about the basics of AI with Senior Cloud Engineer Nick Commisso. Hi Nick! Welcome back to the podcast. Can you tell us about human intelligence and how it relates to artificial intelligence? And within AI, I know we have Artificial General Intelligence, or AGI, and Artificial Narrow Intelligence, or ANI. What's the difference between the two? Nick: Human intelligence is the intellectual capability of humans that allows us to learn new skills through observation and mental digestion, to think through and understand abstract concepts and apply reasoning, and to communicate using language and understand non-verbal cues, such as facial expressions, tone variation, and body language. We can handle objections and situations in real time, even in a complex setting. We can plan for short and long-term situations or projects. And we can create music and art, invent something new, or have original ideas. If machines can replicate a wide range of human cognitive abilities, such as learning, reasoning, or problem solving, we call it artificial general intelligence.  AGI is hypothetical for now, but when we apply AI to solve problems with specific, narrow objectives, we call it artificial narrow intelligence, or ANI. AGI is a hypothetical AI that thinks like a human. It represents the ultimate goal of artificial intelligence: a system capable of chatting, learning, and even arguing like us. 
If AGI existed, it might take the form of a robot doctor that accurately diagnoses and comforts patients, or an AI teacher that customizes lessons in real time based on each student's mood, pace, and learning style, or an AI therapist that comprehends complex emotions and provides empathetic, personalized support. ANI, on the other hand, focuses on doing one thing really well. It's designed to perform specific tasks by recognizing patterns and following rules, but it doesn't truly understand or think beyond its narrow scope. Think of ANI as a specialist. Your phone's face ID can recognize you instantly, but it can't carry on a conversation. Google Maps finds the best route, but it can't write you a poem. And spam filters catch junk mail, but they can't make you coffee. So, most of the AI you interact with today is ANI. It's smart, efficient, and practical, but limited to specific functions without general reasoning or creativity. 04:22 Nikita: Ok, then what about Generative AI?  Nick: Generative AI is a type of AI that can produce content such as audio, text, code, video, and images. ChatGPT can write essays, but it can't fact-check itself. DALL-E creates art, but it doesn't actually know if it's good. And AI song covers can create deepfakes, like Drake singing "Baby Shark."  04:47 Lois: Why should I care about AI? Why is it important? Nick: AI is already part of your everyday life, often working quietly in the background. ANI powers things like navigation apps, voice assistants, and spam filters. Generative AI helps create everything from custom playlists to smart writing tools. And while AGI isn't here yet, it's shaping ideas about what the future might look like. Now, AI is not just a buzzword; it's a tool that's changing how we live, work, and interact with the world. So, whether you're using it, learning about it, or just curious, it's worth knowing what's behind the tech that's becoming part of everyday life.  
05:32 Lois: Nick, whenever people talk about AI, they also throw around terms like machine learning and deep learning. What are they and how do they relate to AI? Nick: As we shared earlier, AI is the ability of machines to imitate human intelligence. And Machine Learning, or ML, is a subset of AI where the algorithms are used to learn from past data and predict outcomes on new data or to identify trends from the past. Deep Learning, or DL, is a subset of machine learning that uses neural networks to learn patterns from complex data and make predictions or classifications. And Generative AI, or GenAI, on the other hand, is a specific application of DL focused on creating new content, such as text, images, and audio, by learning the underlying structure of the training data.  06:24 Nikita: AI is often associated with key domains like language, speech, and vision, right? So, could you walk us through some of the specific tasks or applications within each of these areas? Nick: Language-related AI tasks can be text related or generative AI. Text-related AI tasks use text as input, and the output can vary depending on the task. Some examples include detecting language, extracting entities in a text, extracting key phrases, and so on.  06:54 Lois: Ok, I get you. That's like translating text, where you can use a text translation tool, type your text in the box, choose your source and target language, and then click Translate. That would be an example of a text-related AI task. What about generative AI language tasks? Nick: These are generative, which means the output text is generated by the model. Some examples are creating text, like stories or poems, summarizing texts, and answering questions, and so on. 07:25 Nikita: What about speech and vision? Nick: Speech-related AI tasks can be audio related or generative AI. Speech-related AI tasks use audio or speech as input, and the output can vary depending on the task. 
For example, speech-to-text conversion, speaker recognition, voice conversion, and so on. Generative AI tasks are generative, i.e., the output audio is generated by the model (for example, music composition or speech synthesis). Vision-related AI tasks can be image related or generative AI. Image-related AI tasks use an image as the input, and the output depends on the task. Some examples are classifying images or identifying objects in an image. Facial recognition is one of the most popular image-related tasks; it's often used for surveillance and tracking people in real time. It's used in a lot of different fields, like security and biometrics, law enforcement, entertainment, and social media. For generative AI tasks, the output image is generated by the model. For example, creating an image from a textual description, or generating images in a specific style or at high resolution, and so on. It can create extremely realistic new images and videos by generating original 3D models of objects, such as machines, buildings, medications, people, and landscapes, and so much more. 08:58 Lois: This is so fascinating. So, now we know what AI is capable of. But Nick, what is AI good at? Nick: AI frees you to focus on creativity and the more challenging parts of your work. Now, AI isn't magic. It's just very good at certain tasks. It handles work that's repetitive, time consuming, or too complex for humans, like processing data or spotting patterns in large data sets.  AI can take over routine tasks that are essential but monotonous. Examples include entering data into spreadsheets, processing invoices, or even scheduling meetings, freeing up time for more meaningful work. AI can support professionals by extending their abilities. This includes tools like AI-assisted coding for developers, real-time language translation for travelers or global teams, and advanced image analysis to help doctors interpret medical scans much more accurately. 
10:00 Nikita: And what would you say is AI's sweet spot? Nick: That would be tasks that are both doable and valuable. A few examples of tasks that are technically feasible and have business value are things like predicting equipment failure. This avoids downtime and lost business. Call center automation, like the routing of calls to the right person. This saves time and improves customer satisfaction. Document summarization and review. This helps save time for busy professionals. Or inspecting power lines. Now, this task is dangerous. Automating it protects human life and saves time. 10:48 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 11:30 Nikita: Welcome back! Now, one big way AI is helping businesses today is by cutting costs, right? Can you give us some examples of this?  Nick: AI can contribute to cost reduction in several key areas. For instance, chatbots are capable of managing up to 50% of customer queries. This significantly reduces the need for manual support, thereby lowering operational costs. AI can streamline workflows, for example, reducing invoice processing time from 10 days to just 1 hour. This leads to substantial savings in both time and resources. In addition to cost savings, AI can also support revenue growth. One way is enabling personalization and upselling. Platforms like Netflix use AI-driven recommendation systems to influence user choices. This not only enhances the user experience, but also increases engagement and subscription revenue. Another is unlocking new revenue streams. 
AI technologies, such as generative video tools and virtual influencers, are creating entirely new avenues for advertising and branded content, expanding business opportunities in emerging markets. 12:50 Lois: Wow, saving money and boosting bottom lines. That's a real win! But Nick, how is AI able to do this?  Nick: Now, data is what teaches AI. Just like we learn from experience, so does AI. It learns from good examples, bad examples, and sometimes even the absence of examples. The quality and variety of data shape how smart, accurate, and useful AI becomes. Imagine teaching a kid to recognize animals using only pictures of squirrels that are labeled dogs. That would be very confusing at the dog park. AI works the exact same way, where bad data leads to bad decisions. With the right data, AI can be powerful and accurate. But with poor or biased data, it can become unreliable and even misleading.  AI amplifies whatever you feed it. So, give it gourmet data, not junk-food data. AI is like a chef. It needs the right ingredients. It needs numbers for predictions, like will this product sell? It needs images for cool tricks like detecting tumors, and text for chatting, or generating excuses for why you'd be late. Variety keeps AI from being a one-trick pony. Examples of data types: numbers, which machine learning uses to predict things like the weather; text, which generative AI chatbots use for writing emails or bad poetry; images, which deep learning uses to identify defective parts on an assembly line; and audio, used to transcribe a doctor's dictation to text. 14:35 Lois: With so much data available, things can get pretty confusing, which is why we have the concept of labeled and unlabeled data. Can you help us understand what that is? Nick: Labeled data is like flashcards, where everything has an answer. Spam filters learn from emails that are already marked as junk, and X-rays are marked either normal or pneumonia. 
Let's say we're training AI to tell cats from dogs, and we show it a hundred labeled pictures. Cat, dog, cat, dog, and so on. Over time, it learns: hmm, fluffy and pointy ears? That's probably a cat. And then we test it with new pictures to verify. Unlabeled data is like a mystery box, where AI has to figure things out itself. Social media posts or product reviews have no labels, so AI clusters them by similarity. AI finding trends in unlabeled data is like a kid sorting through LEGOs without instructions. No one tells them which blocks go together.  15:36 Nikita: With all the data that's being used to train AI, I'm sure there are issues that can crop up too. What are some common problems, Nick? Nick: AI's performance depends heavily on the quality of its data. Poor or biased data leads to unreliable and unfair outcomes. Dirty data includes errors like typos, missing values, or duplicates. For example, an age recorded as 250, or an NA value, can confuse the AI. A variety of data-cleaning techniques are available: missing values can be filled in, and duplicates can be removed. AI can inherit human prejudices if the data is unbalanced. For example, a hiring AI may favor one gender if the past three hires were mostly male. Ensuring diverse and representative data helps promote fairness. Good data is required to train better AI. Data can be messy, and it needs to be cleaned up before training. 
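The data-cleaning steps Nick mentions, filling in missing values and removing duplicates, can be sketched in a few lines of plain Python. This is a minimal illustration, not anything from the episode: the toy records, the age threshold, and the `clean` helper are all hypothetical.

```python
# Toy records with the three problems mentioned above:
# a missing value, a duplicate, and an implausible outlier (age 250).
records = [
    {"name": "Ana", "age": 34},
    {"name": "Ben", "age": None},   # missing value
    {"name": "Ana", "age": 34},     # duplicate
    {"name": "Cal", "age": 250},    # implausible outlier
]

def clean(rows, max_age=120):
    """Impute missing ages with the mean of plausible ones,
    drop implausible outliers, and remove duplicates."""
    valid = [r["age"] for r in rows if r["age"] is not None and r["age"] <= max_age]
    mean_age = sum(valid) / len(valid)
    seen, cleaned = set(), []
    for r in rows:
        age = r["age"]
        if age is None:
            age = mean_age            # fill in the missing value
        elif age > max_age:
            continue                  # drop the outlier (age 250)
        key = (r["name"], age)
        if key not in seen:           # skip duplicates
            seen.add(key)
            cleaned.append({"name": r["name"], "age": age})
    return cleaned

print(clean(records))
```

Mean imputation is just one choice; depending on the data, dropping incomplete rows or using a median can be more robust.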
We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Practical AI
Workforce dynamics in an AI-assisted world

Practical AI

Play Episode Listen Later Aug 1, 2025 44:06 Transcription Available


We unpack how AI is reshaping hiring decisions, shifting job roles, and creating new expectations for professionals — from engineers to marketers. They explore the rise of AI-assisted teams, the growing compensation bubble, why continuous learning is now table stakes, and how some service providers are quietly riding the AI wave.

Featuring:
Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
Daniel Whitenack – Website, GitHub, X

Sponsors:
Outshift by Cisco: AGNTCY is an open source collective building the Internet of Agents. It's a collaboration layer where AI agents can communicate, discover each other, and work across frameworks. For developers, this means standardized agent discovery tools, seamless protocols for inter-agent communication, and modular components to compose and scale multi-agent workflows.

Register for upcoming webinars here!

Le rendez-vous Tech
Les agents IA ne prennent pas de vacances - RDV Tech

Le rendez-vous Tech

Play Episode Listen Later Jul 29, 2025 81:30


On the program:
OpenAI, Perplexity and even Proton: they're taking advantage of the summer to keep launching ever more AI-powered products
Vibe coding and no-code: ever-greater possibilities for developing without typing a line of code… and ever-greater risks?
Summer reflection: are smartphones parasites?

Info:
Hosted by Guillaume Vendé (Bluesky, Mastodon, Threads, Instagram, TikTok, YouTube, techcafe.fr)
Co-hosted by Benoît Curdy (X, Niptech)
Co-hosted by Baptiste Freydt (X, Niptech)
Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn).
Royalty-free music by Daniel Beja

Le Rendez-vous Tech episode 627 – Les agents IA ne prennent pas de vacances

---
Links:

On the Way to New Work - Der Podcast über neue Arbeit
#501 Richard Socher | CEO at you.com

On the Way to New Work - Der Podcast über neue Arbeit

Play Episode Listen Later Jul 28, 2025 76:18


Our guest today was born in Dresden, studied computational linguistics in Leipzig and Saarbrücken, and later earned his PhD at Stanford University, advised by none other than Andrew Ng and Chris Manning. His dissertation was recognized as the best computer science PhD thesis. After stints at Microsoft and Siemens, he founded his first company: MetaMind, a deep learning startup acquired by Salesforce in 2016. There he served as Chief Scientist, led large research teams, and played a major role in driving the company's AI strategy. Today he is the founder and CEO of you.com, an AI-based search engine positioned as a privacy-friendly, transparent, and customizable alternative to the established players, with a strong focus on user control and responsible AI. He also invests in AI startups worldwide through his fund AI+X. His scientific papers are among the most cited in NLP and deep learning, with over 170,000 citations, and many of his ideas helped shape the development of today's language models. A heartfelt thank-you to Adrian Locher, CEO and founder of Merantix, for arranging this conversation. For more than eight years, this podcast has explored how work can strengthen people instead of weakening them. In 500 conversations with over 600 people, we have talked about what has changed for them, and what still needs to change. How can we prevent AI systems from becoming merely more efficient rather than fairer, and what really matters in their design? What role does transparency play in building trust in AI, especially in sensitive applications like search, education, or work? And what does it take to develop AI that extends our abilities instead of replacing them? One thing is certain: solving our current challenges requires fresh impulses. That's why we keep searching for methods, role models, experiences, tools, and ideas that bring us closer to the core of New Work. Beyond that, we have been driven from the start by the question of whether all people can really find and live what they truly, truly want deep down. You are listening to On the Way to New Work – today with Richard Socher. [Here](https://linktr.ee/onthewaytonewwork) you'll find all the links to the podcast and our current advertising partners

Practical AI
Reimagining actuarial science with AI

Practical AI

Play Episode Listen Later Jul 25, 2025 40:59 Transcription Available


In this episode, Chris sits down with Igor Nikitin, CEO and co-founder of Nice Technologies, to explore how AI and modern engineering practices are transforming the actuarial field and setting the stage for the future of actuarial modeling. We discuss the introduction of programming into insurance pricing workflows, and how their Python-based calc engine, AI copilots, and DevOps-inspired workflows are enabling actuaries to collaborate more effectively across teams while accelerating innovation.

Featuring:
Igor Nikitin – LinkedIn
Chris Benson – Website, LinkedIn, Bluesky, GitHub, X

Links:
Nice Technologies

Sponsors:
Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business—no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalai

Cloud Realities
CR107: Reflecting on Season 4 – Highlights what we learned, loved and are planning next

Cloud Realities

Play Episode Listen Later Jul 24, 2025 91:46


Dave, Esmee, and Rob take a moment to look back on the wild ride that was Season 4—revisiting the themes that sparked the biggest conversations and the guests who left a lasting impression. They also reveal what's on their summer to-do lists and drop a few juicy hints about what's coming in Season 5. Get ready—it's going to be even bigger and bolder.

Thank you to all our listeners and guests for joining us in Season 4 - have a great summer and we will see you in September!

TLDR:
00:40 Season 4 by the numbers – and a fun mix-up with round figures
03:20 Reflecting on standout topics and memorable guests
03:42 Scaling AI: Hyperscaler narratives, tech momentum, and the adoption gap
13:18 Ethics in the AI era – how organizations can and must stay grounded
18:12 The human factor: Why "human-in-the-loop" matters more than ever
27:29 Sovereignty in tech – geopolitics, shifting narratives, and the rise of Sovereign AI
37:16 A deep dive into Telco – highlights from our dedicated mini-series
53:48 2025 tech trends with Gene Kim
55:33 Listener Q&A: Daniel Delicate on Cynefin vs. IT operating models
1:01:44 Andrea Kis on keeping humanity in fast-paced tech
1:06:09 Ezhil Suresh on how we prep and record our podcast with top-tier guests
1:11:38 John Eaton-Griffin on how guests have shaped our thinking
1:17:19 A word from our co-host
1:19:57 Looking ahead to Season 5: AAA episodes, new industry mini-series, and Hyperscaler events
1:22:09 Meet our new AI companions: Substack and the Cloud Realities chatbot
1:23:40 What's next for us this summer

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast from Capgemini

Le rendez-vous Tech
Crossover : RDV Jeux 370 - Microsoft n'a qu'une solution

Le rendez-vous Tech

Play Episode Listen Later Jul 22, 2025 91:07


On the program:
This week I'm offering a "crossover" episode so you can discover my other podcast, le RDV Jeux. It's episode 370, where we cover Microsoft / Xbox's strategy, the impossible equation between big gamer titles and mobile games, and the games we're currently playing. Enjoy!
More info: https://frenchspin.fr/2024/10/microsoft-na-quune-solution-rdv-jeux/

Info:
Hosted by Patrick Beja (Bluesky, Instagram, Twitter, TikTok)
Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn).
Royalty-free music by Daniel Beja

---
Links:

The Shift
O Brasil tem suas estrelas da Inteligência Artificial

The Shift

Play Episode Listen Later Jul 20, 2025 52:40


The Brazilian startup Avra has built its own foundation AI model to help financial-market clients analyze and grant credit to SMEs. The startup has already raised US$2 million in venture capital and was recognized by AWS as one of the three most innovative companies in Latin America building foundation models. Viviane Meister, CEO, and Bruno Alano, CTO, both co-founders of Avra, explain what they do and prove that Brazil can indeed compete globally in AI.

Episode links:
The Avra website
Viviane Meister's LinkedIn page
Bruno Alano's LinkedIn page
Inspiring person: Shivani Siroya, founder and CEO of Tala, the microcredit startup that made Forbes' 2025 list of the Top 50 global fintechs.
The blog/book "Neural Networks and Deep Learning" by Michael Nielsen
The book "The Master Switch: The Rise and Fall of Information Empires" by Tim Wu
The book "Visionários" by Steven Johnson
The book "Inteligência artificial a nosso favor: Como manter o controle sobre a tecnologia" by Stuart Russell

The Shift is a content platform that demystifies the contexts of disruptive innovation and the digital economy. Visit www.theshift.info and subscribe to the newsletter

Cloud Realities
CR106: Changing nature of large scale apps with Timo Elliott SAP

Cloud Realities

Play Episode Listen Later Jul 17, 2025 62:41


The rise of structured software fueled globalization by streamlining operations across borders. Now, Cloud and AI are accelerating this momentum, enabling faster innovation, smarter decision-making, and scalable growth. By modernizing ERP with intelligent technologies, organizations can stay agile, competitive, and ready for the next wave of global transformation.

This week, Dave, Esmee and Rob talk to Timo Elliott, Innovation Evangelist at SAP, to explore how SAP is driving globalization—and how organizations can accelerate innovation through the power of Cloud and AI.

TLDR
00:55 Introduction of Timo Elliott
02:40 Rob shares his confusion about misleading online ads
08:06 In-depth conversation with Timo
46:32 Rethinking control in enterprise systems
1:00:00 Brunch at a Paris café or joining an event?

Guest
Timo Elliott: https://www.linkedin.com/in/timoelliott/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast from Capgemini

Practical AI
Agentic AI for Drone & Robotic Swarming

Practical AI

Play Episode Listen Later Jul 15, 2025 46:27 Transcription Available


In this episode of Practical AI, Chris and Daniel explore the fascinating world of agentic AI for drone and robotic swarms, which is Chris's passion and professional focus. They unpack how autonomous vehicles (UxV), drones (UAV), and other autonomous multi-agent systems can collaborate without centralized control while exhibiting complex emergent behavior with agency and self-governance to accomplish a mission or shared goals. Chris and Dan delve into the role of real-time AI inference and edge computing in enabling complex agentic multi-model autonomy, especially in challenging environments like disaster zones and remote industrial operations.

Featuring:
Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
Daniel Whitenack – Website, GitHub, X

Links:
ROS - Robot Operating System
Gazebo
Hugging Face Agents Course
Swarm Robotics | Wikipedia

Chris's definition of swarming:
Swarming occurs when numerous independent fully-autonomous multi-agentic platforms exhibit highly-coordinated locomotive and emergent behaviors with agency and self-governance in any domain (air, ground, sea, undersea, space), functioning as a single independent logical distributed decentralized decisioning entity for purposes of C3 (command, control, communications) with human operators on-the-loop, to implement actions that achieve strategic, tactical, or operational effects in the furtherance of a mission. © 2025 Chris Benson

Sponsors:
Outshift by Cisco: AGNTCY is an open source collective building the Internet of Agents. It's a collaboration layer where AI agents can communicate, discover each other, and work across frameworks. For developers, this means standardized agent discovery tools, seamless protocols for inter-agent communication, and modular components to compose and scale multi-agent workflows.

Le rendez-vous Tech
Spécial : Portrait de Jeff Clavier – RDV Tech

Le rendez-vous Tech

Play Episode Listen Later Jul 15, 2025 84:52


On the program:
Jeff's journey, from France and computing to the United States and investing

Info:
Hosted by Patrick Beja (Bluesky, Instagram, Twitter, TikTok)
Co-hosted by Jeff Clavier
Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn).
Royalty-free music by Daniel Beja

Le Rendez-vous Tech episode 626 – Spécial : Portrait de Jeff Clavier

---
Links:

The Agile World with Greg Kihlstrom
#703: How AI and Deep Learning is affecting advertising with Jaysen Gillespie, RTB House

The Agile World with Greg Kihlstrom

Play Episode Listen Later Jul 11, 2025 28:36


Are we on the brink of advertising becoming too smart for its own good, or is Deep Learning finally getting us closer to what customers actually want? Agility requires us to constantly evaluate how technology like AI reshapes the relationships between brands and consumers—sometimes for the better, sometimes in far more complex ways. The advertising landscape is shifting under our feet, with new rules, new tech, and frankly, a lot of new guesswork.

Today we're going to talk about how Deep Learning and AI are impacting advertising effectiveness, personalization, and the future of advertising—with or without cookies. To help me discuss this topic, I'd like to welcome Jaysen Gillespie, VP, Global Head of Analytics and Product Marketing at RTB House.

About Jaysen Gillespie
Jaysen is a Southern California analytics pro with 15+ years in tech leadership. Currently VP, Global Head of Product Marketing and Analytics at RTB House, he turns data into insights that drive relevant decisions. He is an experienced speaker and content creator, simplifying complex ideas and making them easily consumable and applicable. For Jaysen, analytics isn't just interesting—it's essential.

Resources
RTB House: https://www.rtbhouse.com

The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow

Catch the future of e-commerce at eTail Boston, August 11-14, 2025. Register now: https://bit.ly/etailboston and use code PARTNER20 for 20% off for retailers and brands

Don't miss MAICON 2025, October 14-16 in Cleveland - the event bringing together the brightest minds and leading voices in AI. Use code AGILE150 for $150 off registration. Go here to register: https://bit.ly/agile150

Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom

Don't miss a thing: get the latest episodes, sign up for our newsletter and more: https://www.theagilebrand.show

Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com

The Agile Brand is produced by Missing Link—a Latina-owned, strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company

Cloud Realities
CR0105: How little we still understand about GreenOps with James Hall, Green Pixie

Cloud Realities

Play Episode Listen Later Jul 10, 2025 32:39


GreenOps is a cultural transformation that empowers developers to turn emissions data into meaningful action, bridging the communication gap with ESG teams and exposing the critical truth that cloud cost and carbon cost are not the same, which fundamentally reshapes how we approach sustainable IT.

This week, Dave, Esmee and Rob talk to James Hall, Head of GreenOps at Green Pixie, to unpack the real state of GreenOps today—and why we've only just scratched the surface.

TLDR
01:57 Rob is confused about AGI
06:11 Cloud conversation with James Hall
22:10 Esmee, as media archeologist, finds GreenOps is 50 years old
30:46 Having some drinks in the summer

Guest
James Hall: https://www.linkedin.com/in/james-f-hall/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast from Capgemini

Le rendez-vous Tech
Hors-série : Les débuts de ChatGPT (rediff) - RDV Tech

Le rendez-vous Tech

Play Episode Listen Later Jul 8, 2025 34:44


On the program:
For July 2025, I'm taking you on a dive into the RDV Tech archives. We'll go back just a few years to revisit the beginning of something that has upended a great deal in recent years: the arrival of ChatGPT at the end of 2022. We talked about it in RDV Tech 490 on December 6, 2022, with Korben and Cedric de Luca at the mic with me. With the launch of the conversational version of ChatGPT for the general public, we were impressed by the text generation.

Info:
Hosted by Patrick Beja (Bluesky, Instagram, Twitter, TikTok)
Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn).
Royalty-free music by Daniel Beja

---
Links:

Practical AI
AI in the shadows: From hallucinations to blackmail

Practical AI

Play Episode Listen Later Jul 7, 2025 44:50 Transcription Available


In the first episode of an "AI in the shadows" theme, Chris and Daniel explore the increasingly concerning world of agentic misalignment. Starting out with a reminder about hallucinations and reasoning models, they break down how today's models only mimic reasoning, which can lead to serious ethical considerations. They unpack a fascinating (and slightly terrifying) new study from Anthropic, where agentic AI models were caught simulating blackmail, deception, and even sabotage — all in the name of goal completion and self-preservation.

Featuring:
Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
Daniel Whitenack – Website, GitHub, X

Links:
Agentic Misalignment: How LLMs could be insider threats
Hugging Face Agents Course

Register for upcoming webinars here!

Machine Learning Street Talk
The Fractured Entangled Representation Hypothesis (Kenneth Stanley, Akarsh Kumar)

Machine Learning Street Talk

Play Episode Listen Later Jul 6, 2025 136:22


Are the AI models you use today imposters?

Please watch the intro video we did before this: https://www.youtube.com/watch?v=o1q6Hhz0MAg

In this episode, hosts Dr. Tim Scarfe and Dr. Keith Duggar are joined by AI researcher Prof. Kenneth Stanley and MIT PhD student Akarsh Kumar to discuss their fascinating paper, "Questioning Representational Optimism in Deep Learning."

Imagine you ask two people to draw a perfect skull. One is a brilliant artist who understands anatomy; the other is a machine that just traces the image. Both drawings look identical, but the artist understands what a skull is—they know where the mouth is, how the jaw works, and that it's symmetrical. The machine just has a tangled mess of lines that happens to form the right picture.

An AI with an elegant representation has the building blocks to generate truly new ideas.

The path is the goal: as Kenneth Stanley puts it, "it matters not just where you get, but how you got there". Two students can ace a math test, but the one who truly understands the concepts—instead of just memorizing formulas—is the one who will go on to make new discoveries.

The show is a mixture of 3 separate recordings we have done: the original Patreon warmup with Tim/Kenneth, the Tim/Keith "Steakhouse" recorded after the main interview, then the main interview with Kenneth/Akarsh/Keith/Tim. Feel free to skip around. We had to edit this in a rush as we are travelling next week, but it's reasonably cleaned up.

TOC:
00:00:00 Intro: Garbage vs. Amazing Representations
00:05:42 How Good Representations Form
00:11:14 Challenging the "Bitter Lesson"
00:18:04 AI Creativity & Representation Types
00:22:13 Steakhouse: Critiques & Alternatives
00:28:30 Steakhouse: Key Concepts & Goldilocks Zone
00:39:42 Steakhouse: A Sober View on AI Risk
00:43:46 Steakhouse: The Paradox of Open-Ended Search
00:47:58 Main Interview: Paper Intro & Core Concepts
00:56:44 Main Interview: Deception and Evolvability
01:36:30 Main Interview: Reinterpreting Evolution
01:56:16 Main Interview: Impostor Intelligence
02:11:15 Main Interview: Recommendations for AI Research

REFS:
Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis
Akarsh Kumar, Jeff Clune, Joel Lehman, Kenneth O. Stanley
https://arxiv.org/pdf/2505.11581

Kenneth O. Stanley, Joel Lehman
Why Greatness Cannot Be Planned: The Myth of the Objective
https://amzn.to/44xLaXK

Original show with Kenneth from 4 years ago:
https://www.youtube.com/watch?v=lhYGXYeMq_E

Kenneth Stanley is SVP Open Endedness at Lila Sciences
https://x.com/kenneth0stanley

Akarsh Kumar (MIT)
https://akarshkumar.com/

AND... Kenneth is HIRING (this is an OPPORTUNITY OF A LIFETIME!)
Research Engineer: https://job-boards.greenhouse.io/lila/jobs/7890007002
Research Scientist: https://job-boards.greenhouse.io/lila/jobs/8012245002

TRANSCRIPT:
https://app.rescript.info/public/share/W_T7E1OC2Wj49ccqlIOOztg2MJWaaVbovTeyxcFEQdU

Le rendez-vous Tech
[Edito] Le programme de l'été !

Le rendez-vous Tech

Play Episode Listen Later Jul 6, 2025 21:07


Hello everyone! We're back to audio for this "public" editorial, where I walk you through the summer schedule for all the podcasts (and also talk about the reasons for this break). Have a great vacation, and see you in September.

Practical AI
Finding Nemotron

Practical AI

Play Episode Listen Later Jul 2, 2025 46:23 Transcription Available


In this episode, we sit down with Joey Conway to explore NVIDIA's open source AI, from the reasoning-focused Nemotron models built on top of Llama, to the blazing-fast Parakeet speech model. We chat about what makes open foundation models so valuable, how enterprises can think about deploying multi-model strategies, and why reasoning is becoming the key differentiator in real-world AI applications.

Featuring:
Joey Conway – LinkedIn
Chris Benson – Website, LinkedIn, Bluesky, GitHub, X

Links:
Llama Nemotron Ultra
NVIDIA Llama Nemotron Ultra Open Model Delivers Groundbreaking Reasoning Accuracy
Independent analysis of AI
Parakeet Model
Parakeet Leaderboard
Try the Llama-3.1-Nemotron-Ultra-253B-v1 model here and here

Big Brains
Are We Making AI Too Human?, with James Evans

Big Brains

Play Episode Listen Later Jun 12, 2025 31:15


Prof. James Evans, a University of Chicago sociologist and data scientist, believes we're training AI to think too much like humans—and it's holding science back.

In this episode, Evans shares how our current models risk narrowing scientific exploration rather than expanding it, and explains why he's pushing for AIs that think differently from us—what he calls "cognitive aliens." Could these "alien minds" help us unlock hidden breakthroughs? And what would it take to build them?