How AI Happens is a podcast featuring experts and practitioners explaining their work at the cutting edge of Artificial Intelligence. Tune in to hear AI Researchers, Data Scientists, ML Engineers, and the leaders of today’s most exciting AI companies explain the newest and most challenging facets of their field. Powered by Sama.
The rapid evolution of technology, especially in AI, excites Anthony.
User preferences are shifting towards more human-like AI interactions.
Empathy in AI is crucial for better customer service experiences.
The partnership between Amdocs and NVIDIA emphasizes the importance of software efficiency.
Software and hardware advancements must progress in parallel to maximize productivity.
Physical AI integration will enhance daily life through automation and smart devices.
Emergent behavior in AI represents a new frontier in reasoning and decision-making.
Generative AI can learn and adapt beyond traditional if-then programming.
An audit trail is essential for transparency in AI decision-making processes.
Mica shares the methods behind Augury's fault testing processes, why they use the highest quality data available, how in-house experts help them filter their data reliably, and their approach to communicating with customers. Our conversation also explores the balance between edge computing and cloud computing, and why both are necessary for optimal performance and monitoring.

Key Points From This Episode:
Mica's journey from studying physics at the Weizmann Institute to her current role at Augury.
How her background in physics and neuroscience informs her work in AI.
Why physicists are drawn to AI and data science; how scientists follow their curiosity.
Mica's responsibilities in her role as algorithms team lead at Augury.
How they develop algorithms and test for faults; why this requires the highest quality data.
Understanding the role of their in-house expert vibration analysts.
The importance of domain expertise in labeling and annotating data.
Finding the balance between manual and automated processes in data labeling.
How to communicate with customers and present metrics that matter to them.
Augury's use of edge and cloud computing for optimal performance and monitoring.

Quotes:
“We look for better ways to adjust our algorithms and also develop new ones for all kinds of faults that could happen in the machines, catching events that are trickier to catch, and for that we need [the] highest quality data.” — Mica Rubinson [0:08:20]
“At [Augury], we have internal vibration analysts that are experts in their field. They go through [a] very rigorous training process. There are international standards to how you do vibration analysis, and we have them in-house.” — Mica Rubinson [0:09:07]
“[It's] really helpful for us to have [these] in-house experts. We have massive amounts of records – signal recordings from 10 years of machine monitoring. Thanks to these experts [in] labeling, we can filter out a lot of noisy parts of this data.” — Mica Rubinson [0:10:32]
“We quantify [our services] for the customer as their ROI [and] how much they saved by using Augury. You had this [issue, and] we avoided this downtime. [We show] how much [it] translates eventually [into] money that you saved.” — Mica Rubinson [0:22:28]

Links Mentioned in Today's Episode:
Mica Rubinson on LinkedIn
Mica Rubinson on ResearchGate
Augury
Weizmann Institute of Science
How AI Happens
Sama
Srini highlights the importance of integrating these agents into real-world applications, enhancing productivity and user experiences across industries. Srini also delves into the challenges of building reliable, ethical, and secure AI systems while fostering developer innovation. His insights offer a roadmap for harnessing advanced agents to drive meaningful technological progress. Don't miss this informative conversation.

Key Points From This Episode:
Introducing today's guest, Srini Iragavarapu, a leader at AWS.
His thoughts on how agentic AI and generative AI are intersecting today.
The state of the union of agents in the world and at AWS.
How AWS is leveraging agents to build specific tasks for customers.
Two mechanisms that software agents use to operate.
Understanding the reasoning capabilities of large foundational models.
How AWS makes use of a test agent.
Amazon Q Developer's instantaneous conversational capabilities.
Bringing different options to the customers as a long-term strategy.
Three layers at which AWS is innovating today.
Why the end user is ultimately the person who benefits.

Quotes:
“Think of it as an iterative way of solving a problem rather than just calling a single API and coming back: that's in a nutshell how generative AI and the foundation models are working with reasoning capabilities.” — Srini Iragavarapu [0:03:04]
“The models are becoming more powerful and more available, faster, a lot more dependable.” — Srini Iragavarapu [0:29:57]

Links Mentioned in Today's Episode:
Srini Iragavarapu on LinkedIn
How AI Happens
Sama
We explore the current trends of AI-based solutions in retail, what has driven its adoption in the industry, and how AI-based customer service technology has improved over time. We also discuss the correct mix of technology and humans, the importance of establishing boundaries for AI, and why it won't replace humans but will augment workflow. Hear examples of AI retail success stories, what companies got AI wrong, and the reasons behind the wins and failures. Gain insights into the value of copilots, business strategies to avoid investing in ineffective AI solutions, and much more. Tune in now!

Key Points From This Episode:
Learn about Lisa and Mika's backgrounds in retail technology and AI-based solutions.
Hear how AI has become more accessible to businesses beyond the typical tech giants.
Explore how AI-powered chatbots and copilots have evolved to improve customer service.
The Coca-Cola AI ad controversy and why oversight on AI-generated content is vital.
Discover the innovative and exciting ways AI can be leveraged in the retail industry.
AI success stories: Target's AI copilot for employees and Nordstrom's personalization tool.
How AI is making the return process more efficient and improving inventory management.
Uncover the multimodal connections of AI and how it will enhance customer personalization.
Important considerations for businesses regarding the adoption of AI and the pitfalls to avoid.

Quotes:
“I think [the evolution] in terms of accessibility to AI-solutions for people who don't have the massive IT departments and massive data analytics departments is really remarkable.” — Mika Yamamoto [0:04:25]
“Whether it's generative AI for creative or content or whatever, it's not going to replace humans. It's going to augment our workflows.” — Lisa Avvocato [0:10:46]
“Retail is actually one of the fastest adopting industries out there [of] AI.” — Mika Yamamoto [0:14:17]
“Having conversations with peers, I think, is absolutely invaluable to figure out what's hype and what's reality [regarding AI].” — Mika Yamamoto [0:30:19]

Links Mentioned in Today's Episode:
Lisa Avvocato on LinkedIn
Mika Yamamoto on LinkedIn
Freshworks
The Coca‑Cola Company
How AI Happens
Sama
We hear about Nitzan's AI expertise, motivation for joining eBay, and approach to implementing AI into eBay's business model. Gain insights into the impacts of centralizing and federating AI, leveraging generative AI to create personalized content, and why patience is essential to AI development. We also unpack eBay's approach to LLM development, tailoring AI tools for eBay sellers, the pitfalls of generic marketing content, and the future of AI in retail. Join us to discover how AI is revolutionizing e-commerce and disrupting the retail sector with Nitzan Mekel-Bobrov!

Key Points From This Episode:
Nitzan's career experience, his interest in sustainability, and his sneaker collection.
Why he decided to begin a career at eBay and his role at the company.
His approach to aligning the implementation of AI with eBay's overall strategy.
How he identifies the components of eBay's business model that will benefit from AI.
What makes eBay highly suitable for the implementation of AI tools.
Challenges of using generative AI models to create personalized content for users.
Why experimentation is vital to the AI development and implementation process.
Aspects of the user experience that Nitzan uses to train and develop eBay's LLMs.
The potential of knowledge graphs to uncover the complexity of user behavior.
Reasons that the unstructured nature of eBay's data is fundamental to its business model.
Incorporating a seller's style into AI tools to avoid creating generic marketing material.
Details about Nitzan's team and their diverse array of expertise.
Final takeaways and how companies can ensure they survive the AI transition.

Quotes:
“It's tricky to balance the short-term wins with the long-term transformation.” — Nitzan Mekel-Bobrov [0:06:50]
“An experiment is only a failure if you haven't learned anything yourself and – generated institutional knowledge from it.” — Nitzan Mekel-Bobrov [0:09:36]
“What's nice about [eBay's] business model — is that our incentive is to enable each seller to maintain their own uniqueness.” — Nitzan Mekel-Bobrov [0:27:33]
“The companies that will thrive in this AI transformation are the ones that can figure out how to marry parts of their current culture and what all of their talent brings with what the AI delivers.” — Nitzan Mekel-Bobrov [0:33:58]

Links Mentioned in Today's Episode:
Nitzan Mekel-Bobrov on LinkedIn
eBay
How AI Happens
Sama
Satya unpacks how Unilever utilizes its database to inform its models and how to determine the right amount of data needed to solve complex problems. Dr. Wattamwar explains why contextual problem-solving is vital, the notion of time constraints in data science, the system point of view of modeling, and how Unilever incorporates AI into its models. Gain insights into how AI can increase operational efficiency, exciting trends in the AI space, how AI makes experimentation accessible, and more! Tune in to learn about the power of data science and AI with Dr. Satyajit Wattamwar.

Key Points From This Episode:
Background on Dr. Wattamwar, his PhD research, and data science expertise.
Unpacking some of the commonalities between data science and physics.
Why the outcome of using significantly large data sets depends on the situation.
The minimum amount of data needed to make meaningful and quality models.
Examples of the common mistakes and pitfalls that data scientists make.
How Unilever works with partner organizations to integrate AI into its models.
Ways that Dr. Wattamwar uses AI-based tools to increase his productivity.
The difference between using AI for innovation versus operational efficiency.
Insight into the shifting data science landscape and advice for budding data scientists.

Quotes:
“Around – 30 or 40 years ago, people started realizing the importance of data-driven modeling because you can never capture physics perfectly in an equation.” — Dr. Satyajit Wattamwar [0:03:10]
“Having large volumes of data which are less related with each other is a different thing than a large volume of data for one problem.” — Dr. Satyajit Wattamwar [0:09:12]
“More data [does] not always lead to good quality models. Unless it is for the same use-case.” — Dr. Satyajit Wattamwar [0:11:56]
“If somebody is looking [to] grow in their career ladder, then it's not about one's own interest.” — Dr. Satyajit Wattamwar [0:24:07]

Links Mentioned in Today's Episode:
Dr. Satyajit Wattamwar on LinkedIn
Unilever
How AI Happens
Sama
Jing explains how Vanguard uses machine learning and reinforcement learning to deliver personalized "nudges," helping investors make smarter financial decisions. Jing dives into the importance of aligning AI efforts with Vanguard's mission and discusses generative AI's potential for boosting employee productivity while improving customer experiences. She also reveals how generative AI is poised to play a key role in transforming the company's future, all while maintaining strict data privacy standards.

Key Points From This Episode:
Jing Wang's time at Fermilab and the research behind her PhD in high-energy physics.
What she misses most about academia and what led to her current role at Vanguard.
How she aligns her team's AI strategy with Vanguard's business goals.
Ways they are utilizing AI for nudging investors to make better decisions.
Their process for delivering highly personalized recommendations for any given investor.
Steps that ensure they adhere to finance industry regulations with their AI tools.
The role of reinforcement learning and their ‘next best action' models in personalization.
Their approach to determining the best use of their datasets while protecting privacy.
Vanguard's plans for generative AI, from internal productivity to serving clients.
How Jing stays abreast of all the latest developments in physics.

Quotes:
“We make sure all our AI work is aligned with [Vanguard's] four pillars to deliver business impact.” — Jing Wang [0:08:56]
“We found those simple nudges have tremendous power in terms of guiding the investors to adopt the right things. And this year, we started to use a machine learning model to actually personalize those nudges.” — Jing Wang [0:19:39]
“Ultimately, we see that generative AI could help us to build more differentiated products. – We want to have AI be able to train language models [to have] much more of a Vanguard mindset.” — Jing Wang [0:29:22]

Links Mentioned in Today's Episode:
Jing Wang on LinkedIn
Vanguard
Fermilab
How AI Happens
Sama
Key Points From This Episode:
Ram Venkatesh describes his career journey to founding Sema4.ai.
The pain points he was trying to ease with Sema4.ai.
How our general approach to big data is becoming more streamlined, albeit rather slowly.
The ins and outs of Sema4.ai and how it serves its clients.
What Ram means by “agent” and “agent agency” when referring to machine learning copilots.
The difference between writing a program to execute versus an agent reasoning with it.
Understanding the contextual work training method for agents.
The relationship between an LLM and an agent and the risks of training LLMs on agent data.
Exploring the next generation of LLM training protocols in the hopes of improving efficiency.
The requirements of an LLM if you're not training it and unpacking modality improvements.
Why agent input and feedback are major disruptions to SaaS and beyond.
Our guest shares his hopes for the future of AI.

Quotes:
“I've spent the last 30 years in data. So, if there's a database out there, whether it's relational or object or XML or JSON, I've done something unspeakable to it at some point.” — @ramvzz [0:01:46]
“As people are getting more experienced with how they could apply GenAI to solve their problems, then they're realizing that they do need to organize their data and that data is really important.” — @ramvzz [0:18:58]
“Following the technology and where it can go, there's a lot of fun to be had with that.” — @ramvzz [0:23:29]
“Now that we can see how software development itself is evolving, I think that 12-year-old me would've built so many more cooler things than I did with all the tech that's out here now.” — @ramvzz [0:29:14]

Links Mentioned in Today's Episode:
Ram Venkatesh on LinkedIn
Ram Venkatesh on X
Sema4.ai
Cloudera
How AI Happens
Sama
Pascal & Yannick delve into the kind of human involvement SAM-2 needs before discussing the use cases it enables. Hear all about the importance of having realistic expectations of AI, what the cost of SAM-2 looks like, and the the importance of humans in LLMs.Key Points From This Episode:Introducing Pascal Jauffret and Yannick Donnelly to the show.Our guests explain what the SAM-2 model is. A description of what getting information from video entails.What made our guests interested in researching SAM-2. A few things that stand out about this tool. The level of human involvement that SAM-2 needs. Some of the use cases they see SAM-2 enabling. Whether manually annotating is easier than simply validating data. The importance of setting realistic expectations of what AI can do. When LLM models work best, according to our experts.A discussion about the cost of the models at the moment. Why humans are so important in coaching people to use models. What we can expect from Sama in the near future. Quotes:“We're kind of shifting towards more of a validation period than just annotating from scratch.” — Yannick Donnelly [0:22:01]“Models have their place but they need to be evaluated.” — Yannick Donnelly [0:25:16]“You're never just using a model for the sake of using a model. You're trying to solve something and you're trying to improve a business metric.” — Pascal Jauffret [0:32:59]“We really shouldn't underestimate the human aspect of using models.” — Pascal Jauffret [0:40:08]Links Mentioned in Today's Episode:Pascal Jauffret on LinkedInYannick Donnelly on LinkedInHow AI HappensSama
Today we are joined by Siddhika Nevrekar, an experienced product leader passionate about solving complex problems in ML by bringing people and products together in an environment of trust. We unpack the state of free computing, the challenges of training AI models for edge, what Siddhika hopes to achieve in her role at Qualcomm, and her methods for solving common industry problems that developers face.

Key Points From This Episode:
Siddhika Nevrekar walks us through her career pivot from cloud to edge computing.
Why she's passionate about overcoming her fears and achieving the impossible.
Increasing compute on edge devices versus developing more efficient AI models.
Siddhika explains what makes Apple a truly unique company.
The original inspirations for edge computing and how the conversation has evolved.
Unpacking the current state of free computing and what may happen in the near future.
The challenges of training AI models for edge.
Exploring Siddhika's role at Qualcomm and what she hopes to achieve.
Diving deeper into her process for achieving her goals.
Common industry challenges that developers are facing and her methods for solving them.

Quotes:
“Ultimately, we are constrained with the size of the device. It's all physics. How much can you compress a small little chip to do what hundreds and thousands of chips can do which you can stack up in a cloud? Can you actually replicate that experience on the device?” — @siddhika_
“By the time I left Apple, we had 1000-plus [AI] models running on devices and 10,000 applications that were powered by AI on the device, exclusively on the device. Which means the model is entirely on the device and is not going into the cloud. To me, that was the realization that now the moment has arrived where something magical is going to start happening with AI and ML.” — @siddhika_

Links Mentioned in Today's Episode:
Siddhika Nevrekar on LinkedIn
Siddhika Nevrekar on X
Qualcomm AI Hub
How AI Happens
Sama
Today we are joined by Developer Advocate at Block, Rizel Scarlett, who is here to explain how to bridge the gap between the technical and non-technical aspects of a business. We also learn about AI hallucinations and how Rizel and Block approach this particular pain point, the burdens of responsibility of AI users, why it's important to make AI tools accessible to all, and the ins and outs of G{Code} House – a learning community for Indigenous women and women of color in tech. To end, Rizel explains what needs to be done to break down barriers to entry for the G{Code} population in tech, and she describes the ideal relationship between a developer advocate and the technical arm of a business.

Key Points From This Episode:
Rizel Scarlett describes the role and responsibilities of a developer advocate.
Her role in getting others to understand how GitHub Copilot should be used.
Exploring her ongoing projects and current duties at Block.
How the conversation around AI copilot tools has shifted in the last 18 months.
The importance of objection handling and why companies must pay more attention to it.
AI hallucinations and Rizel's advice for approaching this particular pain point.
Why “I don't know” should be encouraged as a response from AI companions, not shunned.
Taking a closer look at how Block addresses AI hallucinations.
The burdens of responsibility of users of AI, and the need to democratize access to AI tools.
Unpacking G{Code} House and Rizel's working relationship with this learning community.
Understanding what prevents Indigenous women and women of color from having careers in tech.
The ideal relationship between a developer advocate and the technical arm of a business.

Quotes:
“Every company is embedding AI into their product someway somehow, so it's being more embraced.” — @blackgirlbytes [0:11:37]
“I always respect someone that's like, ‘I don't know, but this is the closest I can get to it.'” — @blackgirlbytes [0:15:25]
“With AI tools, when you're more specific, the results are more refined.” — @blackgirlbytes [0:16:29]

Links Mentioned in Today's Episode:
Rizel Scarlett
Rizel Scarlett on LinkedIn
Rizel Scarlett on Instagram
Rizel Scarlett on X
Block
Goose
GitHub
GitHub Copilot
G{Code} House
How AI Happens
Sama
Key Points From This Episode:
Drew and his co-founders' background working together at RJ Metrics.
The lack of existing data solutions for Amazon Redshift and how they started dbt Labs.
Initial adoption of dbt Labs and why it was so well-received from the very beginning.
The concept of a semantic layer and how dbt Labs uses it in conjunction with LLMs.
Drew's insights on a recent paper by Apple on the limitations of LLMs' reasoning.
Unpacking examples where LLMs struggle with specific questions, like math problems.
The importance of thoughtful prompt engineering and application design with LLMs.
What is needed to maximize the utility of LLMs in enterprise settings.
How understanding the specific use case can help you get better results from LLMs.
What developers can do to constrain the search space and provide better output.
Why Drew believes prompt engineering will become less important for the average user.
The exciting potential of vector embeddings and the ongoing evolution of LLMs.

Quotes:
“Our observation was [that] there needs to be some sort of way to prepare and curate data sets inside of a cloud data warehouse. And there was nothing out there that could do that on [Amazon] Redshift, so we set out to build it.” — Drew Banin [0:02:18]
“One of the things we're thinking a ton about today is how AI and the semantic layer intersect.” — Drew Banin [0:08:49]
“I don't fundamentally think that LLMs are reasoning in the way that human beings reason.” — Drew Banin [0:15:36]
“My belief is that prompt engineering will – become less important – over time for most use cases. I just think that there are enough people that are not well versed in this skill that the people building LLMs will work really hard to solve that problem.” — Drew Banin [0:23:06]

Links Mentioned in Today's Episode:
Understanding the Limitations of Mathematical Reasoning in Large Language Models
Drew Banin on LinkedIn
dbt Labs
How AI Happens
Sama
In this episode, you'll hear about Meeri's incredible career, insights from the recent AI Pact conference she attended, her company's involvement, and how we can articulate the reality of holding companies accountable to AI governance practices. We discuss how to know if you have an AI problem, what makes third-party generative AI more risky, and so much more! Meeri even shares how she thinks the EU AI Act will impact AI companies and what companies can do to take stock of their risk factors and ensure that they are building responsibly. You don't want to miss this one, so be sure to tune in now!

Key Points From This Episode:
Insights from the AI Pact conference.
The reality of holding AI companies accountable.
What inspired her to start Saidot to offer solutions for AI transparency and accountability.
How Meeri assesses companies and their organizational culture.
What makes generative AI more risky than other forms of machine learning.
Reasons that use-related risks are the most common sources of AI risks.
Meeri's thoughts on the impact of the EU AI Act.

Quotes:
“It's best to work with companies who know that they already have a problem.” — @meerihaataja [0:09:58]
“Third-party risks are way bigger in the context of [generative AI].” — @meerihaataja [0:14:22]
“Use and use-context-related risks are the major source of risks.” — @meerihaataja [0:17:56]
“Risk is fine if it's on an acceptable level. That's what governance seeks to do.” — @meerihaataja [0:21:17]

Links Mentioned in Today's Episode:
Saidot
Meeri Haataja on LinkedIn
Meeri Haataja on Instagram
Meeri Haataja on X
How AI Happens
Sama
In this episode, Dr. Zoldi offers insight into the transformative potential of blockchain for ensuring transparency in AI development, the critical need for explainability over mere predictive power, and how FICO maintains trust in its AI systems through rigorous model development standards. We also delve into the essential integration of data science and software engineering teams, emphasizing that collaboration from the outset is key to operationalizing AI effectively.

Key Points From This Episode:
How Scott integrates his role as an inventor with his duties as FICO CAO.
Why he believes that mindshare is an essential leadership quality.
What sparked his interest in responsible AI as a physicist.
The shifting demographics of those who develop machine learning models.
Insight into the use of blockchain to advance responsible AI.
How FICO uses blockchain to ensure auditable ML decision-making.
Operationalizing AI and the typical mistakes companies make in the process.
The value of integrating data science and software engineering teams from the start.
A fear-free perspective on what Scott finds so uniquely exciting about AI.

Quotes:
“I have to stay ahead of where the industry is moving and plot out the directions for FICO in terms of where AI and machine learning is going – [Being an inventor is critical for] being effective as a chief analytics officer.” — @ScottZoldi [0:01:53]
“[AI and machine learning] is software like any other type of software. It's just software that learns by itself and, therefore, we need [stricter] levels of control.” — @ScottZoldi [0:23:59]
“Data scientists and AI scientists need to have partners in software engineering. That's probably the number one reason why [companies fail during the operationalization process].” — @ScottZoldi [0:29:02]

Links Mentioned in Today's Episode:
FICO
Dr. Scott Zoldi
Dr. Scott Zoldi on LinkedIn
Dr. Scott Zoldi on X
FICO Falcon Fraud Manager
How AI Happens
Sama
Jay breaks down the critical role of software optimizations and how they drive performance gains in AI, highlighting the importance of reducing inefficiencies in hardware. He also discusses the long-term vision for Lemurian Labs and the broader future of AI, pointing to the potential breakthroughs that could redefine industries and accelerate innovation, plus a whole lot more.

Key Points From This Episode:
Jay's diverse professional background and his attraction to solving unsolvable problems.
How his unfinished business in robotics led him to his current work at Lemurian Labs.
What he has learned from being CEO and the biggest obstacles he has had to overcome.
Why he believes engineers with a problem-solving mindset can be effective CEOs.
Lemurian Labs: making AI computing more efficient, affordable, and environmentally friendly.
The critical role of software in increasing AI efficiency.
Some of the biggest challenges in programming GPUs.
Why better software is needed to optimize the use of hardware.
Common inefficiencies in AI development and how to solve them.
Reflections on the future of Lemurian Labs and AI more broadly.

Quotes:
“Every single problem I've tried to pick up has been one that – most people have considered as being almost impossible. There's something appealing about that.” — Jay Dawani [0:02:58]
“No matter how good of an idea you put out into the world, most people don't have the motivation to go and solve it. You have to have an insane amount of belief and optimism that this problem is solvable, regardless of how much time it's going to take.” — Jay Dawani [0:07:14]
“If the world's just betting on one company, then the amount of compute you can have available is pretty limited. But if there's a lot of different kinds of compute that are slightly optimized with different resources, making them accessible allows us to get there faster.” — Jay Dawani [0:19:36]
“Basically what we're trying to do [at Lemurian Labs] is make it easy for programmers to get [the best] performance out of any hardware.” — Jay Dawani [0:20:57]

Links Mentioned in Today's Episode:
Jay Dawani on LinkedIn
Lemurian Labs
How AI Happens
Sama
Melissa explains the importance of giving developers the choice of working with open source or proprietary options, experimenting with flexible application models, and choosing the size of your model according to the use case you have in mind. Discussing the democratization of technology, we explore common challenges in the context of AI including the potential of generative AI versus the challenge of its implementation, where true innovation lies, and what Melissa is most excited about seeing in the future.

Key Points From This Episode:
An introduction to Melissa Evers, Vice President and General Manager of Strategy and Execution at Intel Corporation.
More on the communities she has played a leadership role in.
Why open source governance is not an oxymoron and why it is critical.
The hard work that goes on behind the scenes at open source.
What to strive for when building a healthy open source community.
Intel's perspective on the importance of open source and open AI.
Enabling developer choices about open source or proprietary options.
Growing awareness around building architecture around the freedom of choice.
Identifying that a model is a bad choice or lacking in accuracy.
Thinking critically about future-proofing yourself with regard to model choice.
Opportunities for large and smaller models.
Finding the perfect intersection between value delivery, value creation, and cost.
Common challenges in the context of AI, including the potential of generative AI and its implementation.
Why there is such a commonality of use cases in the realm of generative AI.
Where true innovation and value lies even though there may be commonality in use cases.
Examples of creative uses of generative AI; retail, compound AI systems, manufacturing, and more.
Understanding that innovation in this area is still in its early development stages.
How Wardley Mapping can support an understanding of scale.
What she is most excited about for the future of AI: Rapid learning in healthcare.

Quotes:
“One of the things that is true about software in general is that the role that open source plays within the ecosystem has dramatically shifted and accelerated technology development at large.” — @melisevers [0:03:02]
“It's important for all citizens of the open source community, corporate or not, to understand and own their responsibilities with regard to the hard work of driving the technology forward.” — @melisevers [0:05:18]
“We believe that innovation is best served when folks have the tools at their disposal on which to innovate.” — @melisevers [0:09:38]
“I think the focus for open source broadly should be on the elements that are going to be commodified.” — @melisevers [0:25:04]

Links Mentioned in Today's Episode:
Melissa Evers on LinkedIn
Melissa Evers on X
Intel Corporation
VP of AI and ML at Synopsys, Thomas Andersen joins us to discuss designing AI chips. Tuning in, you'll hear all about our guest's illustrious career, how he became interested in technology, what it was like growing up in East Germany, and so much more! We delve into his company, Synopsys, and the chips they build before discussing his role in building algorithms.

Key Points From This Episode:
A warm welcome to today's guest, Thomas Andersen.
How he got into the tech world and his experience growing up in East Germany.
The cost of AI compute coming down at the same time the demand is going up.
Thomas tells us about Synopsys and what goes into building their chips.
Other traditional software companies that are now designing their own AI chips.
What Thomas' role looks like in machine learning and building AI algorithms.
How the constantly changing rules of AI chip design continue to create new obstacles.
Thomas tells us how they use reinforcement learning in their processes.
The different applications for generative AI and why it needs good input data.
Thomas' advice for anyone wanting to get into the world of AI.

Quotes:
“It's not really the technology that makes life great, it's how you use it, and what you make of it.” — Thomas Andersen [0:07:31]
“There is, of course, a lot of opportunities to use AI in chip design.” — Thomas Andersen [0:25:39]
“Be bold, try as many new things [as you can, and] make sure you use the right approach for the right tasks.” — Thomas Andersen [0:40:09]

Links Mentioned in Today's Episode:
Thomas Andersen on LinkedIn
Synopsys
How AI Happens
Sama
Developing AI and generative AI initiatives demands significant investment, and without delivering on customer satisfaction, these costs can be tough to justify. Today, SVP of Engineering and General Manager of Xactly India, Kandarp Desai joins us to discuss Xactly's AI initiatives and why customer satisfaction remains their top priority.

Key Points From This Episode:
An introduction to Kandarp and his transition from hardware to software.
How he became SVP of Engineering and General Manager of Xactly India.
His move to Bangalore and the expansion of Xactly's presence in India.
The rapid modernization of India as a key factor in Xactly's growth strategy.
An overview of Xactly's AI and generative AI initiatives.
Insight into the development of Xactly's AI Copilot.
Four key stakeholders served by the Xactly AI Copilot.
Xactly Extend, an enterprise platform for building custom apps.
Challenges in justifying the ROI of AI initiatives.
Why customer satisfaction and business outcomes are essential.
How AI is overhyped in the short term and underhyped in the long term.
The difficulties in quantifying the value of AI.
Kandarp's career advice to AI practitioners, from taking risks to networking.

Quotes:
“[Generative AI] is only useful if it drives higher customer satisfaction. Otherwise, it doesn't matter.” — Kandarp Desai [0:11:36]
“Justifying the ROI of anything is hard – If you can tie any new invention back to its ROI in customer satisfaction, that can drive an easy sell across an organization.” — Kandarp Desai [0:15:35]
“The whole AI trend is overhyped in the short term and underhyped long term. [It's experienced an] oversell recently, and people are still trying to figure it out.” — Kandarp Desai [0:20:48]

Links Mentioned in Today's Episode:
Kandarp Desai on LinkedIn
Xactly
How AI Happens
Sama
Srujana is Vice President and Group Director at Walmart's Machine Learning Center of Excellence and is an experienced and respected AI, machine learning, and data science professional. She has a strong background in developing AI and machine learning models, with expertise in natural language processing, deep learning, and data-driven decision-making. Srujana has worked in various capacities in the tech industry, contributing to advancing AI technologies and their applications in solving complex problems. In our conversation, we unpack the trends shaping AI governance, the importance of consumer data protection, and the role of human-centered AI. Explore why upskilling the workforce is vital, the potential impact AI could have on white-collar jobs, and which roles AI cannot replace. We discuss the interplay between bias and transparency, the role of governments in creating AI development guardrails, and how the regulatory framework has evolved. Join us to learn about the essential considerations of deploying algorithms at scale, striking a balance between latency and accuracy, the pros and cons of generative AI, and more.

Key Points From This Episode:
Srujana breaks down the top concerns surrounding technology and data.
Learn how AI can be utilized to drive innovation and economic growth.
Navigating the adoption of AI with upskilling and workforce retention.
The AI gaps that upskilling should focus on to avoid workforce displacement.
Common misconceptions about biases in AI and how they can be mitigated.
Why establishing regulations, laws, and policies is vital for ethical AI development.
Outline of the nuances of creating an effective worldwide regulatory framework.
She explains the challenges and opportunities of deploying algorithms at scale.
Hear about the strategies for building architecture that can adapt to future changes.
She shares her perspective on generative AI and what its best use cases are.
Find out what area of AI Srujana is most excited about.

Quotes:
“By deploying [biased] algorithms we may be going ahead and causing some unintended consequences.” — @Srujanadev [0:03:11]
“I think it is extremely important to have the right regulations and guardrails in place.” — @Srujanadev [0:11:32]
“Just using generative AI for the sake of it is not necessarily a great idea.” — @Srujanadev [0:25:27]
“I think there are a lot of applications in terms of how generative AI can be used but not everybody is seeing the return on investment.” — @Srujanadev [0:27:12]

Links Mentioned in Today's Episode:
Srujana Kaddevarmuth
Srujana Kaddevarmuth on X
Srujana Kaddevarmuth on LinkedIn
United Nations Association (UNA) San Francisco
The World in 2050
American INSIGHT
How AI Happens
Sama
Our guest goes on to share the different kinds of research they use for machine learning development before explaining why he is more conservative when it comes to driving generative AI use cases. He even shares some examples of generative use cases he feels are worthwhile. We hear about how these changes will benefit all UPS customers and how they avoid sharing private and non-compliant information with chatbots. Finally, Sunzay shares some advice for anyone wanting to become a leader in the tech world.

Key Points From This Episode:
Introducing Sunzay Passari to the show and how he landed his current role at UPS.
Why Sunzay believes that this huge operation he's part of will drive transformational change.
How AI and machine learning have made their way into UPS over the past few years.
The way Sunzay and his team have decided where AI will be most disruptive within UPS.
Qualitative and quantitative research and what that looks like for this project.
Why Sunzay is conservative when it comes to driving generative AI use cases.
Sunzay shares some of the generative use cases that he thinks are worthwhile.
The way these new technologies will benefit everyday UPS customers.
How they are preventing people from accessing non-compliant data through chatbots.
Sunzay passes on some advice for anyone looking to forge their career as a leader in tech.

Quotes:
“There's a lot of complexities in the kind of global operations we are running on a day-to-day basis [at UPS].” — Sunzay Passari [0:04:35]
“There is no magic wand – so it becomes very important for us to better our resources at the right time in the right initiative.” — Sunzay Passari [0:09:15]
“Keep learning on a daily basis, keep experimenting and learning, and don't be afraid of the failures.” — Sunzay Passari [0:22:48]

Links Mentioned in Today's Episode:
Sunzay Passari on LinkedIn
UPS
How AI Happens
Sama
Martin shares what reinforcement learning does differently in executing complex tasks, overcoming feedback loops in reinforcement learning, the pitfalls of typical agent-based learning methods, and how being a robotic soccer champion exposed the value of deep learning. We unpack the advantages of deep learning over modeling agent approaches, how finding a solution can inspire a solution in an unrelated field, and why he is currently focusing on data efficiency. Gain insights into the trade-offs between exploration and exploitation, how Google DeepMind is leveraging large language models for data efficiency, the potential risk of using large language models, and much more.

Key Points From This Episode:
What it is like being a five-time world robotic soccer champion.
The process behind training a winning robotic soccer team.
Why standard machine learning tools could not train his team effectively.
Discover the challenges AI and machine learning are currently facing.
Explore the various exciting use cases of reinforcement learning.
Details about Google DeepMind and the role he and his team play there.
Learn about Google DeepMind's overall mission and its current focus.
Hear about the advantages of being a scientist in the AI industry.
Martin explains the benefits of exploration to reinforcement learning.
How data mining using large language models for training is implemented.
Ways reinforcement learning will impact people in the tech industry.
Unpack how AI will continue to disrupt industries and drive innovation.

Quotes:
“You really want to go all the way down to learn the direct connections to actions only via learning [for training AI].” — Martin Riedmiller [0:07:55]
“I think engineers often work with analogies or things that they have learned from different [projects].” — Martin Riedmiller [0:11:16]
“[With reinforcement learning], you are spending the precious real robots time only on things that you don't know and not on the things you probably already know.” — Martin Riedmiller [0:17:04]
“We have not achieved AGI (Artificial General Intelligence) until we have removed the human completely out of the loop.” — Martin Riedmiller [0:21:42]

Links Mentioned in Today's Episode:
Martin Riedmiller
Martin Riedmiller on LinkedIn
Google DeepMind
RoboCup
How AI Happens
Sama
Jia shares the kinds of AI courses she teaches at Stanford, how students are receiving machine learning education, and the impact of AI agents, as well as understanding technical boundaries, being realistic about the limitations of AI agents, and the importance of interdisciplinary collaboration. We also delve into how Jia prioritizes latency at LiveX before finding out how machine learning has changed the way people interact with agents; both human and AI.

Key Points From This Episode:
The AI courses that Jia teaches at Stanford.
Jia's perspective on the future of AI.
What the potential impact of AI agents is.
The importance of understanding technical boundaries.
Why interdisciplinary collaboration is imperative.
How Jia is empowering other businesses through LiveX AI.
Why she prioritizes latency and believes that it's crucial.
How AI has changed people's expectations and level of courtesy.
A glimpse into Jia's vision for the future of AI agents.
Why she is not satisfied with the multimodal AI models out there.
Challenges associated with data in multimodal machine learning.

Quotes:
“[The field of AI] is advancing so fast every day.” — Jia Li [0:03:05]
“It is very important to have more sharing and collaboration within the [AI field].” — Jia Li [0:12:40]
“Having an efficient algorithm [and] having efficient hardware and software optimization is really valuable.” — Jia Li [0:14:42]

Links Mentioned in Today's Episode:
Jia Li on LinkedIn
LiveX AI
How AI Happens
Sama
Key Points From This Episode:
Reid Robinson's professional background, and how he ended up at Zapier.
What he learned during his year as an NFT founder, and how it serves him in his work today.
How he gained his diverse array of professional skills.
Whether one can differentiate between AI and mere automation.
How Reid knew that partnering with OpenAI and ChatGPT would be the perfect fit.
The way the Zapier team understands and approaches ML accuracy and generative data.
Why real-world data is better as it stands, and whether generative data will one day catch up.
How Zapier uses generative data with its clients.
Why AI is still mostly beneficial for those with a technical background.
Reid Robinson's next big idea, and his parting words of advice.

Quotes:
“Sometimes, people are very bad at asking for what they want. If you do any stint in, particularly, the more hardcore sales jobs out there, it's one of the things you're going to have to learn how to do to survive. You have to be uncomfortable and learn how to ask for things.” — @Reidoutloud_ [0:05:07]
“In order to really start to drive the accuracy of [our AI models], we needed to understand, what were users trying to do with this?” — @Reidoutloud_ [0:15:34]
“The people who being enabled the most with AI in the current stage are the technical tinkerers. I think a lot of these tools are too technical for average-knowledge workers.” — @Reidoutloud_ [0:28:32]
“Quick advice for anyone listening to this, do not start a company when you have your first kid! Horrible idea.” — @Reidoutloud_ [0:29:28]

Links Mentioned in Today's Episode:
Reid Robinson on LinkedIn
Reid Robinson on X
Zapier
CocoNFT
How AI Happens
Sama
In this episode of How AI Happens, Justin explains how his project, Wondr Search, injects creativity into AI in a way that doesn't alienate creators. You'll learn how this new form of AI uses evolutionary algorithms (EAs) and differential evolution (DE) to generate music without learning from or imitating existing creative work. We also touch on the success of the six songs created by Wondr Search, why AI will never fully replace artists, and so much more. For a fascinating conversation at the intersection of art and AI, be sure to tune in today!

Key Points From This Episode:
How genetic algorithms can preserve human creativity in the age of AI.
Ways that Wondr Search differs from current generative AI models.
Why the songs produced by Wondr Search were so well-received by record labels.
Justin's motivations for creating an AI model that doesn't learn from existing music.
Differentiating between AI-generated content and creative work made by humans.
Insight into Justin's PhD topic focused on mathematical optimization.
Key differences between operations research and data science.
An understanding of the relationship between machine learning and physics.
Our guest's take on “big data” and why more data isn't always better.
Problems Justin focuses on as a technical advisor to Fortune 500 companies.
What he is most excited (and most concerned) about for the future of AI.

Quotes:
“[Wondr Search] is definitely not an effort to stand up against generative AI that uses traditional ML methods. I use those a lot and there's going to be a lot of good that comes from those – but I also think there's going to be a market for more human-centric generative methods.” — Justin Kilb [0:06:12]
“The definition of intelligence continues to change as [humans and artificial systems] progress.” — Justin Kilb [0:24:29]
“As we make progress, people can access [AI] everywhere as long as they have an internet connection. That's exciting because you see a lot of people doing a lot of great things.” — Justin Kilb [0:26:06]

Links Mentioned in Today's Episode:
Justin Kilb on LinkedIn
Wondr Search
‘Conserving Human Creativity with Evolutionary Generative Algorithms: A Case Study in Music Generation'
How AI Happens
Sama
Jacob shares how Gong uses AI, how it empowers its customers to build their own models, and how this ease of access for users holds the promise of a brighter future. We also learn more about the inner workings of Gong and how it trains its own models, why it's not too interested in tracking soft skills right now, what we need to be doing more of to build more trust in chatbots, and our guest's summation of why technology is advancing like a runaway train.

Key Points From This Episode:
Jacob Eckel walks us through his professional background and how he ended up at Gong.
The ins and outs of Gong, and where AI fits in.
How Gong empowers its customers to build their own models, and the results thereof.
Understanding the data ramifications when customers build their own models on Gong.
How Gong trains its own models, and the way the platform assists users in real time.
Why its models aren't tracking softer skills like rapport-building, yet.
Everything that needs to be solved before we can fully trust chatbots.
Jacob's summation of why technology is growing at an increasingly rapid rate.

Quotes:
“We don't expect our customers to suddenly become data scientists and learn about modeling and everything, so we give them a very intuitive, relatively simple environment in which they can define their own models.” — @eckely [0:07:03]
“[Data] is not a huge obstacle to adopting smart trackers.” — @eckely [0:12:13]
“Our current vibe is there's a limit to this technology. We are still unevolved apes.” — @eckely [0:16:27]

Links Mentioned in Today's Episode:
Jacob Eckel on LinkedIn
Jacob Eckel on X
Gong
How AI Happens
Sama
Bobak further opines on the pros and cons of Perplexity and GPT-4o, why the technology uses both models, and how the two differ. Finally, our guest tells us why Brilliant Labs is open-source and reminds us why public participation is so important.

Key Points From This Episode:
Introducing Bobak Tavangar to today's episode of How AI Happens.
Bobak tells us about his background and what led him to start his company, Brilliant Labs.
Our guest shares his interesting Lord of the Rings analogy and how it relates to his business.
How wearable technology is creeping more and more into our lives.
The hurdles they face with generative AI glasses and how they're overcoming them.
How Bobak chose the most important factors to incorporate into the glasses.
What the glasses can do at this stage of development.
Bobak explains how the glasses know whether to query GPT-4o or Perplexity AI.
GPT-4o versus Perplexity and why Bobak prefers to use them both.
The importance of gauging public reaction and why Brilliant Labs is open-source.

Quotes:
“To have a second pair of eyes that can connect everything we see with all the information on the web and everything we've seen previously – is an incredible thing.” — @btavangar [0:13:12]
“For live web search, Perplexity – is the most precise [and] it gives the most meaningful answers from the live web.” — @btavangar [0:26:40]
“The [AI] space is changing so fast. It's exciting [and] it's good for all of us but we don't believe you should ever be locked to one model or another.” — @btavangar [0:28:45]

Links Mentioned in Today's Episode:
Bobak Tavangar on LinkedIn
Bobak Tavangar on X
Bobak Tavangar on Instagram
Brilliant Labs
Perplexity AI
GPT-4o
How AI Happens
Sama
Andrew shares how generative AI is used by academic institutions, why employers and educators need to curb their fear of AI, what we need to consider for using AI responsibly, and the ins and outs of Andrew's podcast, Insights x Design.

Key Points From This Episode:
Andrew Madson explains what a tech evangelist is and what his role at Dremio entails.
The ins and outs of Dremio.
Understanding the pain points that Andrew wanted to alleviate by joining Dremio.
How Andrew became a tech evangelist, and why he values this role.
Why all tech roles now require one to upskill and branch out into other areas of expertise.
The problems that Andrew most commonly faces at work, and how he overcomes them.
How Dremio uses generative AI, and how the technology is used in academia.
Why employers and educators need to do more to encourage the use of AI.
The provenance of training data, and other considerations for the responsible use of AI.
Learning more about Andrew's new podcast, Insights x Design.

Quotes:
“Once I learned about lakehouses and Apache Iceberg and how you can just do all of your work on top of the data lake itself, it really made my life a lot easier with doing real-time analytics.” — @insightsxdesign [0:04:24]
“Data analysts have always been expected to be technical, but now, given the rise of the amount of data that we're dealing with and the limitations of data engineering teams and their capacity, data analysts are expected to do a lot more data engineering.” — @insightsxdesign [0:07:49]
“Keeping it simple and short is ideal when dealing with AI.” — @insightsxdesign [0:12:58]
“The purpose of higher education isn't to get a piece of paper, it's to learn something and to gain new skills.” — @insightsxdesign [0:17:35]

Links Mentioned in Today's Episode:
Andrew Madson
Andrew Madson on LinkedIn
Andrew Madson on X
Andrew Madson on Instagram
Dremio
Insights x Design
Apache Iceberg
ChatGPT
Perplexity AI
Gemini
Anaconda
Peter Wang on LinkedIn
How AI Happens
Sama
Tom shares further thoughts on financing AI tech venture capital and whether or not data centers pose a threat to the relevance of the Cloud, as well as his predictions for the future of GPUs and much more.

Key Points From This Episode:
Introducing Tomasz Tunguz, General Partner at Theory Ventures.
What he is currently working on, including AI research and growing the team at Theory.
How he goes about researching the present to predict the future.
Why professionals often work in both academia and the field of AI.
What stands out to Tom when he is looking for companies to invest in.
Varying applications where an 80% answer has differing relevance.
The importance of being at the forefront of AI developments as a leader.
Why the metrics of risk and success used in the past are no longer relevant.
Tom's thoughts on whether or not Generative AI will replace search.
Financing in the AI tech venture capital space.
Differentiating between the Cloud and data centers.
Predictions for the future of GPUs.
Why ‘hello' is the best opener for a cold email.

Quotes:
“Innovation is happening at such a deep technological level and that is at the core of machine learning models.” — @tomastungusz [0:03:37]
“Right now, we're looking at where [is] there rote work or human toil that can be repeated with AI? That's one big question where there's not a really big incumbent.” — @tomastungusz [0:05:51]
“If you are the leader of a team or a department or a business unit or a company, you can not be in a position where you are caught off guard by AI. You need to be on the forefront.” — @tomastungusz [0:08:30]
“The dominant dynamic within consumer products is the least friction in a user experience always wins.” — @tomastungusz [0:14:05]

Links Mentioned in Today's Episode:
Tomasz Tunguz
Tomasz Tunguz on LinkedIn
Tomasz Tunguz on X
Theory Ventures
How AI Happens
Sama
Kordel is the CTO and Founder of Theta Diagnostics, and today he joins us to discuss the work he is doing to develop a sense of smell in AI. We discuss the current and future use cases they've been working on, the advancements they've made, and how to answer the question “What is smell?” in the context of AI. Kordel also provides a breakdown of their software program Alchemy, their approach to collecting and interpreting data on scents, and how he plans to help machines recognize the context for different smells. To learn all about the fascinating work that Kordel is doing in AI and the science of smell, be sure to tune in!

Key Points From This Episode:
Introducing today's guest, Kordel France.
How growing up on a farm encouraged his interest in AI.
An overview of Kordel's education and the subjects he focused on.
His work today and how he is teaching machines to smell.
Existing use cases for smell detection, like the breathalyzer test and smoke detectors.
The fascinating ways that the ability to pick up certain smells differs between people.
Unpacking the elusive question “What is smell?”
How to apply this question to AI development.
Conceptualizing smell as a pattern that machines can recognize.
Examples of current and future use cases that Kordel is working on.
How he trains his devices to recognize smells and compounds.
A breakdown of their autonomous gas system (AGS).
How their software program, Alchemy, helps them make sense of their data.
Kordel's aspiration to add modalities to his sensors that will create context for smells.

Quotes:
“I became interested in machine smell because I didn't see a lot of work being done on that.” — @kordelkfrance [0:08:25]
“There's a lot of people that argue we can't actually achieve human-level intelligence until we've incorporated all five senses into an artificial being.” — @kordelkfrance [0:08:36]
“To me, a smell is a collection of compounds that represent something that we can recognize. A pattern that we can recognize.” — @kordelkfrance [0:17:28]
“Right now we have about three dozen to four dozen compounds that we can with confidence detect.” — @kordelkfrance [0:19:04]
“[Our autonomous gas system] is really this interesting system that's hooked up to a bunch of machine learning, that helps calibrate and detect and determine what a smell looks like for a specific use case and breaking that down into its constituent compounds.” — @kordelkfrance [0:23:20]
“The success of our device is not just the sensing technology, but also the ability of Alchemy [our software program] to go in and make sense of all of these noise patterns and just make sense of the signals themselves.” — @kordelkfrance [0:25:41]

Links Mentioned in Today's Episode:
Kordel France
Kordel France on LinkedIn
Kordel France on X
Theta Diagnostics
Alchemy by Theta Diagnostics
How AI Happens
Sama
After describing the work done at StoneX and her role at the organization, Elettra explains what drew her to neural networks, defines data science, describes how she overcame the challenges of learning something new on the job, breaks down what a data scientist needs to succeed, and shares her thoughts on why many still don't fully understand the industry. Our guest also tells us how she identifies an inadequate data set, the recent innovations that are under construction at StoneX, how to ensure that your AI and ML models are compliant, and the importance of understanding AI as a mere tool to help you solve a problem. Key Points From This Episode:Elettra Damaggio explains what StoneX Group does and how she ended up there. Her professional journey and how she acquired her skills. The state of neural networks while she was studying them, why she was drawn to the subject, and how it's changed. StoneX's data science and ML capabilities when she arrived, and Elettra's role in the system. Her first experience of being thrown into the deep end of data science, and how she swam. A data scientist's tools for success. The multidisciplinary leaders and departments that she sought to learn from when she entered data science. Defining data science, and why many do not fully understand the industry. How Elettra knows when her data set is inadequate. The recent projects and ML models that she's been working on. Exploring the types of guardrails that are needed when training chatbots to be compliant.Elettra's advice to those following a similar career path as hers. Quotes:“The best thing that you can have as a data scientist to be set up for success is to have a decent data warehouse.” — Elettra Damaggio [0:09:17]“I am very much an introverted person. With age, I learned how to talk to people, but that wasn't [always] the case.” — Elettra Damaggio [0:12:38]“In reality, the hard part is to get to the data set – and the way you get to that data set is by being curious about the business you're working with.” — Elettra Damaggio [0:13:58]“[First], you need to have an idea of what is doable, what is not doable, [and] more importantly, what might solve the problem that [the client may] have, and then you can have a conversation with them.” — Elettra Damaggio [0:19:58]“AI and ML is not the goal; it's the tool. The goal is solving the problem.” — Elettra Damaggio [0:28:28]Links Mentioned in Today's Episode:Elettra Damaggio on LinkedInStoneX GroupHow AI HappensSama
Mike Miller is the Director of Project Management at AWS, and he joins us today to tell us about the inspirational AI-powered products and services making waves at Amazon, particularly those with generative prompt engineering capabilities. We discuss how Mike and his team choose which products to bring to market, the ins and outs of PartyRock, including the challenges of developing it, AWS's strategy for generative AI, and how the company aims to serve everyone, even those with very little technical knowledge. Mike also explains how customers are using his products and what he's learned from their behaviors, and we discuss what may lie ahead in the future of generative prompt engineering. Key Points From This Episode:Mike Miller's professional background, and how he got into AI and AWS. How Mike and his team decide on the products to bring to market for developers. Where PartyRock came from and how it fits into AWS's strategy. How AWS decided on the timing to make PartyRock accessible to all. What AWS's products mean for those with zero coding experience. The level of oversight that is required to service clients who have no technical background. Taking a closer look at AWS's strategy for generative AI. How customers are using PartyRock, and what Mike has learned from these observations.The challenges that the team faced whilst developing PartyRock, and how they persevered. Trying to understand the future of generative prompt engineering. A reminder that PartyRock is free, so go try it out! Quotes:“We were working on AI and ML [at Amazon] and discovered that developers learned best when they found relevant, interesting, [and] hands-on projects that they could work on. So, we built DeepLens as a way to provide a fun opportunity to get hands-on with some of these new technologies.” — Mike Miller [0:02:20]“When we look at AI/ML and generative AI, these things are transformative technologies that really require almost a new set of intuition for developers who want to build on these things.” — Mike Miller [0:05:19]“In the long run, innovations are going to come from everywhere; from all walks of life, from all skill levels, [and] from different backgrounds. The more of those people that we can provide the tools and the intuition and the power to create innovations, the better off we all are.” — Mike Miller [0:13:58]“Given a paintbrush and a blank canvas, most people don't wind up with The Sistine Chapel. [But] I think it's important to give people an idea of what is possible.” — Mike Miller [0:25:34]Links Mentioned in Today's Episode:Mike Miller on LinkedInAmazon Web ServicesAWS DeepLensAWS DeepRacerAWS DeepComposerPartyRockAmazon BedrockHow AI HappensSama
Key Points From This Episode:Welcoming Seth Walker to the podcast. Why Seth jokes about being chaotic in his approach to machine learning. The importance of being agile in AI. All about Seth's company, Carrier, and what they do. Seth tells us about his background and how he ended up at Carrier. How Seth goes about unlocking the power of AI.The different levels of success when it comes to AI creation and how to measure them. Seth breaks down the different things Carrier focuses on. The importance of prompt engineering.What makes him excited about the new iterations of machine learning. Quotes:“In many ways, Carrier is going to be a necessary condition in order for AI to exist.” — Seth Walker [0:04:08]“What's hard about generating value with AI is doing it in a way that is actually actionable toward a specific business problem.” — Seth Walker [0:09:49]“One of the things that we've found through experimentation with generative AI models is that they're very sensitive to your content. I mean, there's a reason that prompt engineering has become such an important skill to have.” — Seth Walker [0:25:56]Links Mentioned in Today's Episode:Seth Walker on LinkedInCarrierHow AI HappensSama
Philip recently had the opportunity to speak with 371 customers from 15 different countries to hear their thoughts, fears, and hopes for AI. Tuning in, you'll hear Philip share his biggest takeaways from these conversations, his opinion on the current state of AI, and his hopes and predictions for the future. Our conversation explores key topics, like government and company attitudes toward AI, why adversarial datasets will need to be audited, and much more. To hear the full scope of our conversation with Philip – and to find out how 2024 resembles 1997 – be sure to tune in today! Key Points From This Episode:Some background on Philip Moyer and his role as part of Google's AI engineering team.What he learned from speaking with 371 customers from 15 different countries about AI.Philip shares his insights on how governments and companies are approaching AI.Recognizing the risks and requirements of models and how to manage them.Adversarial datasets: what they are and why they need to be audited.Understanding how adversarial datasets can vary between industries.A breakdown of Google's approach to adversarial datasets in different languages.The most relevant takeaways from Philip's cross-continental survey.How 2024 resembles the technological and competitive business landscape of 1997.Google's partnership with NVIDIA and how they are providing technologies at every layer.The new class of applications that come with generative AI.Using a company's proprietary data to train generative AI models.The collective challenges we are all facing when it comes to creating generative AI at scale.Understanding the vectorization of knowledge and why it will need to be auditable.Philip shares what he is most excited about when it comes to AI.Quotes:“What's been so incredible to me is how forward-thinking – a lot of governments are on this topic [of AI] and their understanding of – the need to be able to make sure that both their citizens as well as their businesses make the best use of artificial intelligence.” — Philip Moyer [0:02:52]“Nobody's ahead and nobody's behind. Every single company that I'm speaking to, has about one to five use cases live. And they have hundreds that are on the docket.” — Philip Moyer [0:15:36]“All of us are facing the exact same challenges right now of doing [generative AI] at scale.” — Philip Moyer [0:17:03]“You should just make an assumption that you're going to be somewhere on the order of about 10 to 15% more productive with AI.” — Philip Moyer [0:25:22] “[With AI] I get excited around proficiency and job satisfaction because I really do think – we have an opportunity to make work fun again.” — Philip Moyer [0:27:10]Links Mentioned in Today's Episode:Philip Moyer on LinkedInHow AI HappensSama
Joelle further discusses the relationship between her work, AI, and the end users of her products, as well as her summation of information modalities, world models versus word models, and the role of responsibility in the current high-stakes environment of technology development. Key Points From This Episode:Joelle Pineau's professional background and how she ended up at Meta.The aspects of AI robotics that fascinate her the most.Why elegance is an important element in Joelle's machine learning systems.How asking the right question is the most vital part of research and how to get better at it.FRESCO: how Joelle chooses which projects to work on.The relationship between her work, AI, and the end users of her final products.What success looks like for her and her team at Meta.World models versus word models and her summation of information modalities.What Joelle thinks about responsibility in the current high-stakes environment of technology development.Quotes:“Perhaps, the most important thing in research is asking the right question.” — @jpineau1 [0:05:10]“My role isn't to set the problems for [the research team], it's to set the conditions for them to be successful.” — @jpineau1 [0:07:29]“If we're going to push for state-of-the-art on the scientific and engineering aspects, we must push for state-of-the-art in terms of social responsibility.” — @jpineau1 [0:20:26]Links Mentioned in Today's Episode:Joelle Pineau on LinkedInJoelle Pineau on XMetaHow AI HappensSama
Key Points From This Episode:Amii's machine learning project management tool: MLPL.Amii's ultimate goal of building capacity and how it differs from an agency model. Asking the right questions to ascertain the appropriate use for AI. Instances where AI is not a relevant solution. Common challenges people face when adopting AI strategies. Mara's perspective on the education necessary to excel in a career in machine learning.Quotes:“Amii is all about capacity building, so we're not a traditional [agency] in that sense. We are trying to educate and inform industry on how to do this work, with Amii at first, but then without Amii at the end.” — Mara Cairo [0:06:20]“We need to ask the right questions. That's one of the first things we need to do, is to explore where the problems are.” — Mara Cairo [0:07:46]“We certainly are comfortable turning certain business problems away if we don't feel it's an ethical match or if we truly feel it isn't a problem that will benefit much from machine learning.” — Mara Cairo [0:11:52]Links Mentioned in Today's Episode:Mara CairoMara Cairo on LinkedInAlberta Machine Intelligence InstituteHow AI HappensSama
Jerome discusses Meta's Segment Anything Model, Ego-Exo4D, the nature of self-supervised learning, and what it would mean to have a non-language-based approach to machine teaching. For more, including quotes from Meta researchers, check out the Sama blog.
Bryan discusses what constitutes industrial AI, its applications, and how it differs from standard AI processes. We explore the innovative process of deep reinforcement learning (DRL), replicating human expertise with machines, and the types of AI approaches available. Gain insights into the current trends and the future of generative AI, the existing gaps and opportunities, why DRL is a game-changer and much more! Join us as we unpack the nuances of industrial AI, its vast potential, and how it is shaping the industries of tomorrow. Tune in now!Key Points From This Episode:Bryan's professional background and his role in the company.Unpack the concept of “industrial AI” and its various applications.The current state and trends of AI in the industrial landscape.Deep reinforcement learning (DRL) and how it applies to industrial AI.Why deep RL is a game-changer for solving industrial problems.Learn about autonomous AI, machine teaching, and explainable AI.Discover the approach for replicating human expertise with machines.Opportunities and challenges of using machine teaching techniques.Differences between monolithic deep learning and standard deep learning.His perspective on current trends and the future of generative AI. Quotes:“We typically look at industrial [AI] as you are either making something or you are moving something.” — Bryan DeBois [0:04:36]“One of the key distinctions with deep reinforcement learning is that it learns by doing and not by data.” — Bryan DeBois [0:10:22]“Autonomous AI is more of a technique than a technology.” — Bryan DeBois [0:16:00]“We have to have [AI] systems that we can count on, that work within constraints, and give right answers every time.” — Bryan DeBois [0:29:04]Links Mentioned in Today's Episode:Bryan DeBois on LinkedInBryan DeBois EmailRoviSysRoviSys AIDesigning Autonomous AIHow AI HappensSama
Joining us today are our panelists, Duncan Curtis, SVP of AI Products and Technology at Sama, and Jason Corso, a professor of robotics, electrical engineering, and computer science at the University of Michigan. Jason is also the Chief Science Officer at Voxel51, an AI software company specializing in developer tools for machine learning. We use today's conversation to discuss the findings of the latest Machine Learning (ML) Pulse report, published each year by our friends at Sama. This year's report focused on the role of generative AI by surveying thousands of practitioners in this space. Its findings include feedback on how respondents are measuring their models' effectiveness, how confident they feel that their models will survive production, and whether they believe generative AI is worth the hype. Tuning in, you'll hear our panelists' thoughts on key questions in the report and its findings, along with their suggested solutions for some of the biggest challenges faced by professionals in the AI space today. We also get into a bunch of fascinating topics like the opportunities presented by synthetic data, the latent space in language processing approaches, the iterative nature of model development, and much more. Be sure to tune in for all the latest insights on the ML Pulse Report!Key Points From This Episode:Introducing today's panelists, Duncan Curtis and Jason Corso.An overview of what the Machine Learning (ML) Pulse report focuses on.Breaking down what the term generative means in AI.Our thoughts on key findings from the ML Pulse Report.What respondents, and our panelists, think of hype around generative AI.Unpacking one of the biggest advances in generative AI: accessibility.Insights on cloud versus local in an AI context.Generative AI use cases in the field of computer vision.The powerful opportunities presented by synthetic data.Why the role of human feedback in synthetic data is so important.Finding a middle ground between human language and machine understanding.Unpacking the notion of latent space in language processing approaches.How confident respondents feel that their models will survive production.The challenges of predicting how well a model will perform.An overview of the biggest challenges reported by respondents.Suggested solutions from panelists on key challenges from the report.How respondents are measuring the effectiveness of their models.What Duncan and Jason focus on to measure success.Career advice from our panelists on making meaningful contributions to this space.Quotes:“It's really hard to know how well your model is going to do.” — Jason Corso [0:27:10]“With debugging and detecting errors in your data, I would definitely say look at some of the tooling that can enable you to move more quickly and understand your data better.” — Duncan Curtis [0:33:55]“Work with experts – there's no replacement for good experience when it comes to actually boxing in a problem, especially in AI.” — Jason Corso [0:35:37]“It's not just about how your model performs. It's how your model performs when it's interacting with the end user.” — Duncan Curtis [0:41:11]“Remember, what we do in this field, and in all fields really, is by humans, for humans, and with humans. And I think if you miss that idea [then] you will not achieve – either your own potential, the group you're working with, or the tool.” — Jason Corso [0:48:20]Links Mentioned in Today's Episode:Duncan Curtis on LinkedInJason CorsoJason Corso on LinkedInVoxel512023 ML Pulse ReportChatGPTBardDALL·E 3How AI HappensSama
Our guest today is Ian Ferreira, who served as the Chief Product Officer for Artificial Intelligence at Core Scientific until the company was purchased by his current employer, Advanced Micro Devices (AMD), where he is now the Senior Director of AI Software. In our conversation, we talk about when in his career he shifted his focus to AI, his thoughts on the nobility of ChatGPT and applications for AI beyond advertising, and the scary aspect of Large Language Models (LLMs). We explore the possibility of replacing our standard conceptions of search, how he conceptualizes his role at AMD, and Ian shares his insights and thoughts on the “Arms Race for GPUs”. Be sure not to miss out on this episode as Ian shares valuable insights from his perspective as the Senior Director of AI Software at AMD. Key Points From This Episode:An introduction to our guest on today's episode: Ian Ferreira.The point in his career when AI became the main focus. His thoughts on the idea that ChatGPT is noble. The scary aspect of Large Language Models (LLMs).The possibilities of replacing our standard conceptions of search.Ian shares how he conceptualizes his role as Senior Director of AI Software at AMD, and the projects they're currently working on. His thoughts on the “Arms Race” for GPUs. Ian underlines their partnership with research companies like the Allen Institute.Attempting to make a powerful GPU model easily available to the general public.He explains what he means by a sovereign model. Ian talks about AMD's upcoming events and announcements. Quotes:“It's just remarkable, the potential of AI – and now I'm fully in it and I think it's a game-changer.” — @Ianfe [0:03:41]“There are significantly more noble applications than advertising for AI and ChatGPT was great in that it put a face on AI for a lot of people who couldn't really get their heads wrapped around [AI].” — @Ianfe [0:04:25]“An LLM allows you to have a natural conversation with the search agent, so to speak.” — @Ianfe [0:09:21]“All our stuff is open-sourced. AMD has a strong ethos, both in open-source and in partnerships. We don't compete with our customers, and so being open allows you to go and look at all our code and make sure that whatever you are going to deploy is something you've looked at.” — @Ianfe [0:12:15]Links Mentioned in Today's Episode:Advancing AI EventIan Ferreira on LinkedInIan Ferreira on XAMDAMD Software StackHugging FaceAllen InstituteOpen AIHow AI HappensSama
Generative AI is becoming more common in our lives as the technology grows and evolves. There are now AI coding companions to help developers execute their tasks more efficiently, and Amazon CodeWhisperer (ACW) is among the best in the game. We are joined today by the General Manager of Amazon CodeWhisperer and Director of Software Development at Amazon Web Services (AWS), Doug Seven. We discuss how Doug and his team are able to remain agile in an organization as huge as Amazon before getting a crash course on the two-pizza-team philosophy and everything you need to know about ACW and how it works. Then, we dive into the characteristics that make up a generative AI model, why Amazon felt it necessary to create its own AI companion, why AI is not here to take our jobs, how Doug and his team ensure that ACW is safe and responsible, and how generative AI will become common in most households much sooner than we may think. Key Points From This Episode:Introducing the Director of Software Development and General Manager of Amazon CodeWhisperer at Amazon Web Services, Doug Seven. A day in the life of Doug in his role at Amazon. What his team currently looks like.Whether he and his team retain their agility in a massive organization like Amazon. A crash course on the two-pizza-team philosophy. How Doug ended up at Amazon Web Services (AWS) and leading ACW. What ACW is, how it works, and why you need it for you and your business. Assessing if generative AI models need to produce new code to be considered generative. Why Amazon felt it pertinent to create its own AI companion in ACW. How to use ACW to its full potential. The way recommendations change and improve once ACW has access to your code base. Examples that reiterate how AI is not here to take your job but to do the jobs you hate.Guardrails that ACW is putting up to ensure that it remains safe, secure, and responsible. How generative AI will become more accessible to the masses as it evolves.
In today's episode, we are joined by Dalia Shanshal, Senior Data Scientist at Bell, Canada's largest communications company that offers advanced broadband wireless, Internet, TV, media, and business communications services. With over five years of experience working on hands-on projects, Dalia has a diverse background in data science and AI. We start our conversation by talking about the recent GeekFest Conference, what it is about, and key takeaways from the event. We then delve into her professional career journey and how a fascinating article inspired her to become a data scientist. During our conversation, Dalia reflects on the evolving nature of data science, discussing the skills and qualities that are now more crucial than ever for excelling in the field. We also explore why creativity is essential for problem-solving, the value of starting simple, and how to stand out as a data scientist, before she explains her unique root cause analysis framework.Key Points From This Episode:Highlights of the recent Bell GeekFest Conference.AI-related topics focused on at the event.Why Bell's GeekFest is only an internal conference.Details about Bell and Dalia's role at the company.Her background and professional career journey.How the role of a data scientist has changed over time.The importance of creativity in problem-solving.Overview of why quality data is fundamental.Qualities of a good data scientist.The research side of data science.Dalia reveals her root cause analysis framework.Exciting projects she is currently working on.Tweetables:“What I do is to try [to] leverage AI and machine learning to speed up and fast-track investigative processes.” — Dalia Shanshal [0:06:52]“Data scientists today are key in business decisions. We always need business decisions based on facts and data, so the ability to mine that data is super important.” — Dalia Shanshal [0:08:35]“The most important skill set [of a data scientist] is to be able to [develop] creative approaches to problem-solving. That is why we are called scientists.” — Dalia Shanshal [0:11:24]“I think it is very important for data scientists to keep up to date with the science. Whenever I am [faced] with a problem, I start by researching what is out there.” — Dalia Shanshal [0:22:18]“One of the things that is really important to me is making sure that whatever [data scientists] are doing has an impact.” — Dalia Shanshal [0:33:50]Links Mentioned in Today's Episode:Dalia ShanshalDalia Shanshal on LinkedInDalia Shanshal on GitHubDalia Shanshal EmailBellGeekFest 2023 | BellCanadian Conference on Artificial Intelligence (CANAI)‘Towards an Automated Framework of Root Cause Analysis in the Canadian Telecom Industry'Ohm Dome ProjectHow AI HappensSama
EXAMPLE: AgriSynth Synthetic Data – Weeds as Seen By AI
Data is the backbone of agricultural innovation when it comes to increasing yields, reducing pests, and improving overall efficiency, but generating high-quality real-world data is an expensive and time-consuming process. Today, we are joined by Colin Herbert, the CEO and Founder of AgriSynth, to find out how the advent of synthetic data will ultimately transform the industry for the better. AgriSynth is revolutionizing how AI can be trained for agricultural solutions using synthetic imagery. He also gives us an overview of his non-linear career journey (from engineering to medical school to agriculture, then through clinical trials and back to agriculture with a detour in Deep Learning), shares the fascinating origin story of AgriSynth, and more. Key Points From This Episode:Colin's career trajectory and the surprising role that Star Wars plays in AgriSynth's origin story.Reasons that the use of AI in agriculture is still limited, despite its vast potential.Ways that AgriSynth seeks to bridge these gaps in the industry using synthetic imagery.Insight into the vast number of parameters and values required.What synthetic data looks like in AgriSynth's “closed-loop train/test system.”Why photorealistic data is completely unnecessary for AI models.How AgriSynth is working towards eliminating human cognition from the process.Dispelling some of the criticism often directed at synthetic data.Just a few of the many applications for AgriSynth's tech and how their output will evolve.Why real-world images aren't necessarily superior to synthetic data!Quotes:“The complexity of biological images and agricultural images is way beyond driverless cars and most other applications [of AI].” — Colin Herbert [0:06:45]“It's parameter-rich to represent the rules of growth of a plant.” — Colin Herbert [0:09:21]“We know exactly where the edge cases are – we know the distribution of every parameter in that dataset, so we can design the dataset exactly how we want it and generate imagery accordingly. We could never collect such imagery in the real world.” — Colin Herbert [0:10:33]“Ultimately, the way we look at an image is not the way AI looks at an image.” — Colin Herbert [0:21:11]“It may not be a real-world image that we're looking at, but it will be data from the real world. There is a crucial difference.” — Colin Herbert [0:32:01]Links Mentioned in Today's Episode:Colin Herbert on LinkedInAgriSynthHow AI HappensSama
Jennifer is the founder of Data Relish, a boutique consultancy firm dedicated to providing strategic guidance and executing data technology solutions that generate tangible business benefits for organizations of diverse scales across the globe. In our conversation, we unpack why a data platform is not the same as a database, working as a freelancer in the industry, common problems companies face, the cultural aspect of her work, and starting with the end in mind. We also delve into her approach to helping companies in crisis, why ‘small' data is just as important as ‘big' data, building companies for the future, the idea of a ‘data dictionary', good and bad examples of data culture, and the importance of identifying an executive sponsor.Key Points From This Episode:Introducing Jennifer Stirrup and an overview of her professional background.Jennifer's passion for technology and the exciting projects she is currently working on.Alan Turing's legacy in terms of AI and how the landscape is evolving.The reason for starting her own business and working as a freelancer.Forging a career in the technology and AI space: advice from an expert.Challenges and opportunities of working as a consultant in the technology sector.Characteristics of AI that make it a high-pressure and high-risk environment.She breaks down the value and role of an executive sponsor.Common hurdles companies face regarding data and AI operations.Circumstances when companies hire Jennifer to help them.Safeguarding her reputation and managing unrealistic expectations. Advice for healthy data practices to avoid problems in the future.Why Jennifer decided on the name Data Relish.Discover how good and reliable data can help change lives.Quotes:“Something that is important in AI is having an executive sponsor, someone who can really unblock any obstacles for you.” — @jenstirrup [0:08:50]“Probably the biggest [challenge companies face] is access to the right data and having a really good data platform.” — @jenstirrup [0:10:50]“If the crisis is not being handled by an executive sponsor, then there is nothing I can do.” — @jenstirrup [0:20:55]“I want people to understand the value that [data] can have because when your data is good it can change lives.” — @jenstirrup [0:32:50]Links Mentioned in Today's Episode:Jennifer StirrupJennifer Stirrup on LinkedInJennifer Stirrup on XData RelishHow AI HappensSama
Joining us today to provide insight on how to put together a credible AI solutions team is Mike Demissie, Managing Director of the AI Hub at BNY Mellon. We talk with Mike about what to consider when putting together and managing such a diverse team and how BNY Mellon is implementing powerful AI and ML capabilities to solve the problems that matter most to their clients and employees. To learn how BNY Mellon is continually innovating for the benefit of their customers and their employees, along with Mike's thoughts on the future of generative AI, be sure to tune in! Key Points From This Episode:Mike's background in engineering and his role at BNY Mellon.The history of BNY Mellon and how they are applying AI and ML in financial services.An overview of the diverse range of specialists that make up their enterprise AI team.Making it easier for their organization to tap into AI capabilities responsibly.Identifying the problems that matter most to their clients and employees.Finding the best ways to build solutions and deploy them in a scalable fashion.Insight into the AI solutions currently being implemented by BNY Mellon.How their enterprise AI team chooses what to prioritize and why it can be so challenging.The value of having a diverse set of use cases: it builds confidence and awareness.Their internal PR strategy for educating the rest of the organization on AI implementations.Insight into generative AI's potential to enhance BNY Mellon's products and services.Ensuring the proper guardrails and regulations are put in place for generative AI.Mike's advice on pursuing a career in the AI, ML, and data science space.Quotes:“Building AI solutions is very much a team sport. So you need experts across many disciplines.” —Mike Demissie [0:06:40]“The engineers need to really find a way in terms of ‘okay, look, how are we going to stitch together the various applications to run it in the most optimal way?'” —Mike Demissie [0:09:23]“It is not only opportunity identification, but also developing the solution and deploying it and making sure there's a sustainable model to take care of afterwards, after production — so you can go after the next new challenge.” —Mike Demissie [0:09:33]“There's endless use of opportunities. And every time we deploy each of these solutions [it] actually sparks ideas and new opportunities in that line of business.” —Mike Demissie [0:11:58]“Not only is it important to raise the level of awareness and education for everyone involved, but you can also tap into the domain expertise of folks, regardless of where they sit in the organization.” —Mike Demissie [0:15:36]“Demystifying, and really just making this abstract capability real for people is an important part of the practice as well.” —Mike Demissie [0:16:10]“Remember, [this] still is day one. As much as all the talk that is out there, we're still figuring out the best way to navigate and the best way to apply this capability. So continue to explore that, too.” —Mike Demissie [0:24:21]Links Mentioned in Today's Episode:Mike Demissie on LinkedInBNY MellonHow AI HappensSama
Mercedes-Benz is a juggernaut in the automobile industry, and in recent times it has been deliberate in advancing the use of AI throughout the organization. Today, we welcome to the show the Executive Manager for AI at Mercedes-Benz, Alex Dogariu. Alex explains his role at the company, tells us how realistic chatbots need to be and how he and his team measure the accuracy of their AI programs, and shares why people should be given more access to AI and time to play around with it. Tune in for a breakdown of Alex's principles for the responsible use of AI. Key Points From This Episode:A warm welcome to the Executive Manager for AI at Mercedes-Benz, Alex Dogariu.Alex's professional background and how he ended up at Mercedes-Benz.When Mercedes-Benz decided that it needed a team dedicated to AI.An example of the output of descriptive analytics as a result of machine learning at Mercedes.Alex explains his role as Executive Manager for AI. How realistic chatbots need to be, according to Alex. The way he measures the accuracy of his AI programs. How Mercedes-Benz assigns AI teams to specific departments within the organization. Why it's important to give people access to AI technology and allow them to play with it. Using vendors versus doing everything in-house. Alex gives us a brief breakdown of his principles for the responsible use of AI.What he was trying to express and accomplish with his TEDx talk. Tweetables:“[Chatbots] are useful helpers, they're not replacing humans.” — Alex Dogariu [09:38]“This [AI] technology is so new that we really just have to give people access to it and let them play with it.” — Alex Dogariu [15:50]“I want to make people aware that AI has not only benefits but also downsides, and we should account for those. And also, that we use AI in a responsible way and manner.” — Alex Dogariu [25:12]“It's always a balancing act. It's the same with certification of AI models — you don't want to stifle innovation with legislation and laws and compliance rules but, to a certain extent, it's necessary, it makes sense.” — Alex Dogariu [26:14]“To all the AI enthusiasts out there, keep going, and let's make it a better world with this new technology.” — Alex Dogariu [27:00]Links Mentioned in Today's Episode:Alex Dogariu on LinkedInMercedes-Benz‘Principles for responsible use of AI | Alex Dogariu | TEDxWHU'How AI HappensSama
Tarun dives into the game-changing components of Watsonx, before delivering some noteworthy advice for those who are eager to forge a career in AI and machine learning. Key Points From This Episode:Introducing Tarun Chopra and a brief look at his professional background. His intellectual diet: what Tarun is consuming to stay up to date with technological trends. Common challenges in technology and AI that he encounters daily. The importance of fully understanding what problem you want your new technology to solve. IBM's role in AI and how the company is helping to accelerate change in the space.Exploring IBM's decision to remove facial recognition from its endeavors in biometrics. The development of IBM's Watsonx and how it's helping businesses tell their unique AI stories. Why IBM's consultative approach to introducing their customers to AI is so effective. Tarun's thoughts on computer power and all related costs. Diving deeper into the three components of Watsonx. Our guest's words of advice to those looking to forge a career in AI and ML. Tweetables:“One of the first things I tell clients is, ‘If you don't know what problems we are solving, then we're on the wrong path.'” — @tc20640n [05:14]“A lot of our customers have adopted AI – but if the workflow is, let's say 10 steps, they have applied AI to only one or two steps. They don't get to realize the full value of that innovation.” — @tc20640n [05:24]“Every client that I talk to, they're all looking to build their own unique story; their own unique point of view with their own unique data and their own unique customer pain points. So, I look at Watsonx as a vehicle to help customers build their own unique AI story.” — @tc20640n [14:16]“The most important thing you need is curiosity. [And] be strong-hearted, because this [industry] is not for the weak-hearted.” — @tc20640n [27:41]Links Mentioned in Today's Episode:Tarun ChopraTarun Chopra on LinkedInTarun Chopra on TwitterTarun Chopra on IBMIBMIBM WatsonHow AI HappensSama
Creating AI workflows can be a challenging process. And while purchasing these types of technologies may be straightforward, implementing them across multiple teams is often anything but. That's where a company like Veritone can offer unparalleled support. With over 400 AI engines on their platform, they've created a unique operating system that helps companies orchestrate AI workflows with ease and efficacy. Chris discusses the differences between legacy and generative AI, how LLMs have transformed chatbots, and what you can do to identify potential AI use cases within an organization. AI innovations are taking place at a remarkable pace, and companies are feeling the pressure to innovate or be left behind, so tune in to learn more about AI applications in business and how you can revolutionize your workflow!Key Points From This Episode:An introduction to Chris Doe, Product Management Leader at Veritone.How Veritone is helping clients orchestrate their AI workflows.The four verticals Chris oversees: media, entertainment, sports, and advertising.Building solutions that infuse AI from beginning to end.An overview of the type of AI that Veritone is infusing.How they are helping their clients navigate the expansive landscape of cognitive engines.Fine-tuning generative AI to be use-case-specific for their clients.Why now is the time to be testing and defining proof of concept for generative AI.How LLMs have transformed chatbots to be significantly more sophisticated.Creating bespoke chatbots for clients that can navigate complex enterprise applications.The most common challenges clients face when it comes to integrating AI applications.Chris's advice on taking stock of an organization and figuring out where to apply AI.Tips on how to identify potential AI use cases within an organization.Quotes:“Anybody who's writing text can leverage generative AI models to make their output better.” — @chris_doe [0:05:32]“With large language models, they've basically given these chatbots a whole new life.” — @chris_doe [0:12:38]“I can foresee a scenario where most enterprise applications will have an LLM-powered chatbot in their UI.” — @chris_doe [0:13:31]“It's easy to buy technology, it's hard to get it adopted across multiple teams that are all moving in different directions and speeds.” — @chris_doe [0:21:16]“People can start new companies and innovate very quickly these days. And the same has to be true for large companies. They can't just sit on their existing product set. They always have to be innovating.” — @chris_doe [0:23:05]“We just have to identify the most problematic part of that workflow and then solve it.” — @chris_doe [0:26:20]Links Mentioned in Today's Episode:Chris Doe on LinkedInChris Doe on XVeritoneHow AI HappensSama
AI is an incredible tool that has allowed us to evolve into more efficient human beings. But the lack of ethical and responsible design in AI can lead to a level of detachment from real people and authenticity. A wonderful technology strategist at Microsoft, Valeria Sadovykh, joins us today on How AI Happens. Valeria discusses why she is concerned about AI tools that assist users in decision-making, the responsibility she feels these companies hold, and the importance of innovation. We delve into common challenges these companies face in people, processes, and technology before exploring the effects of the democratization of AI. Finally, our guest shares her passion for emotional AI and tells us why that keeps her in the space. To hear it all, tune in now!Key Points From This Episode:An introduction to today's guest, Valeria Sadovykh. Valeria tells us about her studies at the University of Auckland and her Ph.D. The problems with using the internet to assist in decision-making. How ethical and responsible AI frames Valeria's career. What she is doing to encourage AI leaders to prioritize responsible design. The dangers of a lack of authenticity, creativity, and emotion in AI. Whether we need human interaction or not and if we want to preserve it. What responsibility companies developing this technology have, according to Valeria. She tells us about her job at Microsoft and what large organizations are doing to be ethical. What kinds of AI organizations need to be most conscious of ethics and responsible design.Other common challenges companies face when they plug in other technology.How those challenges show up in people, processes, and technology when deploying AI.Why Valeria expects some costs to decrease as AI technology democratizes over time.The importance of innovating and being prepared to (potentially) fail. Why the future of emotional AI and the ability to be authentic fascinates Valeria. Tweetables:“We have no opportunity to learn something new outside of our predetermined environment.” — @ValeriaSadovykh [0:07:07]“[Ethics] as a concept is very difficult to understand because what is ethical for me might not necessarily be ethical for you and vice versa.” — @ValeriaSadovykh [0:11:38]“Ethics should not come [in] place of innovation.” — @ValeriaSadovykh [0:20:13]“Not following up, not investing, not trying, [and] not failing is also preventing you from success.” — @ValeriaSadovykh [0:29:52]Links Mentioned in Today's Episode:Valeria Sadovykh on LinkedInValeria Sadovykh on InstagramValeria Sadovykh on TwitterHow AI HappensSama
Key Points From This Episode:Anna shares her professional journey that eventually led to the founding of Gradient Ventures.How Anna would contrast AI Winter with the standard hype cycles that exist.Her thoughts on how the web and mobile sectors were under-hyped.Those who decide if something falls out of favor, according to Anna.How Anna navigates hype cycles.Her process for evaluating early-stage AI companies. How to assess whether someone is a tourist or truly committed to something.Approaching problems and discerning whether AI is the right answer.Her thoughts on the best application for AI or ML technology. Anna shares why she is excited about large language models (LLMs).Thoughts on LLMs and whether we should, or even can, approach AGI.A discussion: do we limit machines when we teach them to speak the way we speak?Quality AI and navigating fairness: the concept of the Human in the Loop.Boring but essential data tasks: whose job is that?How she feels about sensationalism. What gets her fired up when it is time to support new companies. Advice to those forging careers in the AI and ML space. Tweetables:“When that hype cycle happens, where it is overhyped and falls out of favor, then generally that is – what is called a winter.” — @AnnapPatterson [0:03:28]“No matter how hyped you think AI is now, I think we are underestimating its change.” — @AnnapPatterson [0:04:06]“When there is a lot of hype and then not as many breakthroughs or not as many applications that people think are transformational, then it starts to go through a winter.” — @AnnapPatterson [0:04:47]Links Mentioned in Today's Episode:Anna Patterson on LinkedIn‘Eight critical approaches to LLMs'‘The next programming language is English'‘The Advice Taker'GradientHow AI HappensSama
Wayfair uses AI and machine learning (ML) technology to interpret what its customers want, connect them with products nearby, and ensure that the products they see online look and feel the same as the ones that ultimately arrive in their homes. With a background in engineering and a passion for all things STEM, Wayfair's Director of Machine Learning, Tulia Plumettaz, is an innate problem-solver. In this episode, she offers some insight into Wayfair's ML-driven decision-making processes, how they implement AI and ML for preventative problem-solving and predictive maintenance, and how they use data enrichment and customization to help customers navigate the inspirational (and sometimes overwhelming) world of home decor. We also discuss the culture of experimentation at Wayfair and Tulia's advice for those looking to build a career in machine learning.Key Points From This Episode:A look at Tulia's engineering background and how she ended up in this role at Wayfair.Defining operations research and examples of its real-life applications.What it means for something to be strategy-proof.Different ways that AI and ML are being integrated at Wayfair.The challenge of unstructured data and how Wayfair takes the onus off suppliers.Wayfair's North Star: detecting anomalies before they're exposed to customers.Preventative problem-solving and how Wayfair trains ML models to “see around corners.”Examples of nuanced outlier detection and whether or not ML applications would be suitable.Insight into Wayfair's bespoke search tool and how it interprets customers' needs.The exploit-and-explore model Wayfair uses to measure success and improve accordingly.Tulia's advice for those forging a career in machine learning: go back to first principles!Tweetables:“[Operations research is] a very broad field at the intersection between mathematics, computer science, and economics that [applies these toolkits] to solve real-life applications.” — Tulia Plumettaz [0:03:42]“All the decision making, from which channel should I bring you in [with] to how do I bring you back if you're taking your sweet time to make a decision to what we show you when you [visit our site], it's all [machine learning]-driven.” — Tulia Plumettaz [0:09:58]“We want to be in a place [where], as early as possible, before problems are even exposed to our customers, we're able to detect them.” — Tulia Plumettaz [0:18:26]“We have the challenge of making you buy something that you would traditionally feel, sit [on], and touch virtually, from the comfort of your sofa. How do we do that? [Through the] enrichment of information.” — Tulia Plumettaz [0:29:05]“We knew that making it easier to navigate this very inspirational space was going to require customization.” — Tulia Plumettaz [0:29:39]“At its core, it's an exploit-and-explore process with a lot of hypothesis testing. Testing is at the core of [Wayfair] being able to say: this new version is better than [the previous] version.” — Tulia Plumettaz [0:31:53]Links Mentioned in Today's Episode:Tulia Plumettaz on LinkedInWayfairHow AI HappensSama
Bob highlights the importance of building interdepartmental relationships and growing a talented team of problem solvers, as well as the key role of continuous education. He also offers some insight into the technical and not-so-technical skills of a “data science champion,” tips for building adaptable data infrastructures, and the best career advice he has ever received, plus so much more. For an insider's look at the data science operation at FreeWheel and valuable advice from an analytics leader with more than two decades of experience, be sure to tune in today!Key Points From This Episode:A high-level overview of FreeWheel, Bob's role there, and his career trajectory thus far.Important intersections between data science and the organization at large.Three indicators that FreeWheel is a data-driven company.Why continuous education is a key component for agile data science teams.The interplay between data science and the development of AI technology.Technical (and other) skills that Bob looks for when recruiting new talent to his team.Bob's perspective on the value of interdepartmental collaboration.Insight into what an adaptable data infrastructure looks like.The importance of asking yourself, “What more can we do?”Tweetables:“As a data science team, it's not enough to be able to solve quantitative problems. You have to establish connections to the company in a way that uncovers those problems to begin with.” — @Bob_Bress [0:06:42]“The more we can do to educate folks – on the type of work that the [data science] team does, the better the position we are in to tackle more interesting problems and innovate around new ideas and concepts.” — @Bob_Bress [0:09:49]“There are so many interactions and dependencies across any project of sufficient complexity that it's only through [collaboration] across teams that you're going to be able to hone in on the right answer.” — @Bob_Bress [0:17:34]“There is always more you can do to enhance the work you're doing, other questions you can ask, other ways you can go beyond just checking a box.” — @Bob_Bress [0:23:31]Links Mentioned in Today's Episode:Bob Bress on LinkedInBob Bress on TwitterFreeWheelHow AI HappensSama