Google just dropped like 3 years of AI updates on us in 3 hours.
“Edge AI is the ultimate and probably only scalable way to do AI in the real world—collecting, analyzing, and acting on data where it lives,” says Evgeni Gousev, Chair of the Edge AI Foundation and Senior Director at Qualcomm. In this episode, he talks about the transformative power of edge AI and why it's becoming a strategic priority for businesses across industries. He also shares use cases that are already delivering measurable impact and addresses the challenges of deploying edge AI at scale.
Patrick Cronin of University of Limerick spin-out Oscil was presented with the Big Ideas Award at Enterprise Ireland's Start-Up Day 2025 in the Aviva Stadium. Oscil was one of six investor-ready potential spin-outs that had three minutes to pitch their new technology solutions to a 600-strong audience drawn from the Irish start-up ecosystem, including VCs and other funders, State support agencies, and professional and financial services. The award was presented to Patrick Cronin for the outstanding pitch of the day.

Oscil, an Enterprise Ireland commercialisation-funded project approaching spin-out, is an innovative deep-tech venture operating at the intersection of edge AI and powder processing, initially targeting the pharmaceutical and dairy sectors. The team has developed a proprietary, ATEX-compliant sensor and edge-AI solution that enables real-time, in-line analysis of powder flow, addressing critical issues like sensor fouling, yield loss, and process downtime. As part of the Big Ideas award, Oscil will travel to the US to participate in UC Berkeley's Venture Connectivity Program. Tara Dalton of University of Limerick spin-out TANGO received the Big Ideas runner-up award on the day.

Start-Up Day 2025 hosted the 'Class of 2024' High Potential Start-Up companies that Enterprise Ireland invested in during 2024. The event also played host to technology-based companies with origins deep-rooted in groundbreaking research. In 2024, 34 companies were supported through the Enterprise Ireland Commercialisation Fund Programme, with 25 companies spun out of third-level institutions. Presenting the award, Michael Carey, Chairman, Enterprise Ireland, said, "The Big Ideas pitching element and awards at Start-Up Day provide a platform to showcase Enterprise Ireland's commercialisation-funded research approaching start-up status, with significant potential for success.
The event also highlights the accomplishments of our national technology transfer system, the high calibre of research commercialisation activity within Ireland, and the significant impact these companies will have in helping to solve huge global challenges. I wish to congratulate both Oscil and TANGO on their achievements to date and wish them every success for the future." See more stories here.

More about Irish Tech News: Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No. 1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming podcast, email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.
CeADAR, Ireland's Centre for AI, and quantum computing (QC) company Equal1 are combining their expertise to help give Ireland a competitive edge in the evolving world of AI-QC platforms. This week, they signed a memorandum of understanding (MOU) to work together on the establishment of a national Edge AI and Quantum Computing testbed to enable Ireland to develop and deploy AI-QC platforms and services. The collaboration aims to promote the importance of AI and QC in Ireland and Europe for matters of strategic importance, global competitiveness, and technical research and innovation. Quantum computing has the potential to solve complex problems that are beyond the abilities of traditional computers, but its development has faced challenges, including scalability and cost. However, Equal1 announced last month that it had built Bell-1 - Ireland's first quantum computer - using a silicon-based quantum server that eliminates cost and complexity barriers to the adoption of quantum computing. The company said Bell-1 marks the beginning of Quantum Computing 2.0 - the shift from experimental machines in research facilities to practical quantum solutions that businesses can harness to solve complex problems in far less time than traditional computing. As well as creating the AI-QC infrastructure needed to underpin the development of advanced technology platforms, CeADAR and Equal1 will jointly develop funded RDI proposals to progress the national Quantum-AI ecosystem. Quantum-AI is expected to lead to the development of new AI models that will have a transformative impact on multiple business sectors. CeADAR's CEO, Dr John Lonsdale, said: "CeADAR's remit is to work with Irish businesses to help them understand and leverage the benefits of AI and machine learning.
The fusion of AI and Quantum Computing will lead to transformative change across multiple sectors, and it is our mission to ensure that Irish businesses are well positioned to understand and adapt to the changes that are coming our way. By collaborating with Equal1, CeADAR will be able to create a framework and ecosystem for the development and adoption of Quantum AI." Equal1's CEO, Jason Lynch said: "Equal1 is excited to further strengthen our collaboration with CeADAR - we believe this partnership will position Ireland at the forefront of the intersection between Quantum and AI. By working closely with CeADAR, we aim to create a platform that makes quantum computing accessible for Irish businesses and researchers as they explore the potential of Quantum Computing and AI to have a transformative impact across a range of application use cases and industry sectors." See more stories here.
Welcome to IoT Coffee Talk #246, where we have a chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders, and technologists in a totally unscripted, organic format. Thanks for joining us. Sit back with a cup of Joe and enjoy the morning banter.

This week, Pete, Tom, David, Bill, Debbie, Rob, and Leonard jump on Web3 to talk about:
- THE WORST KARAOKE! "Any Way You Want It", Journey
- AI fatigue - too much DeepSeek nonsense!
- All Chinese tech denial leads to a whining road to D.C.
- The great AI hypocrisy
- How to build LLMs and 1.5-trillion-parameter MoEs out of coconuts
- The Week of DeepSeek - Dazed and Confused
- The red AI pill or the blue AI pill - utopia or dystopia?
- The 3 Laws of Edge AI
- What does safe, reliable, trustworthy Edge AI look like?
- How to make your content LLM copyright protected - the 80 percent nonsense rule
- Why IoT Coffee Talk doesn't fit in the attention span of 99.999 percent of humanity

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Thanks for listening to us! Watch episodes at http://iotcoffeetalk.com/. We support Elevate Our Kids to bridge the digital divide by bringing K-12 computing devices and connectivity to support kids' education in under-resourced communities. Please donate.
Is Perplexity going after... Siri? Talk about a hard pivot. OpenAI and Google are racing for users... who's winning? And will the U.S.'s effort on AI in education be too little, too late? We'll answer those questions and a ton more on our weekly news roundup show. Don't spend hours a day trying to keep up. Just join us (most) Mondays.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
- OpenAI Launches Lightweight Deep Research Tool
- DOJ Pushes Google Chrome Breakup
- Microsoft 365 Copilot Spring 2025 Update
- Adobe Firefly Supports Google, OpenAI Models
- MyPillow CEO's AI-Generated Legal Trouble
- OpenAI's Image Gen API for Developers
- U.S. President Signs AI Education Order
- Perplexity AI Challenges Siri with Voice Assistant

Timestamps:
00:00 AI in Education, Adobe Partnerships
05:52 "Premier Deep Research Tools: OpenAI & Google"
07:35 DOJ Proposes Google Chrome Sale
13:42 Adobe Expands AI Image Tools
17:08 AI Missteps: Lindell's Legal Trouble
18:43 Lindell's Legal AI Misstep
22:25 OpenAI ImageGen API Overview
28:27 AI Literacy Initiative Praised
31:29 Microsoft Launches Controversial Recall Feature
32:32 Improved Microsoft Search Enhancement Secured
38:30 "Perplexity: Contextual AI Assistant Edge"
41:22 "Perplexity's Crucial Pivot Needed"
43:18 OpenAI's New Reasoning Language Model
46:43 AI Usage Surges Amid OpenAI Speculation
50:54 Tech Updates: AI Expansions & Legal Issues

Keywords: OpenAI, lightweight deep research tool, ChatGPT, free users, paid users, deep research queries, o4-mini model, Google, Gemini 2.5, Perplexity, deep research, Adobe, Firefly AI, competitors, image generation, Microsoft, Copilot, AI features, MyPillow, legal citations, AI-generated court filings, ImageGen, API access, creative work, US President Trump, executive order, AI education, Microsoft 365, Recall feature, Copilot+ PCs, Perplexity AI, voice assistant, Siri competitor, open reasoning language model, Llama, open weights, Meta, text in/text out, DeepSeek, AI safety, Sam Altman, Edge AI, multimodal models.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info.) Ready for ROI on GenAI? Go to youreverydayai.com/partner
Step into the future of hearing technology as Dave Fabry and Holly Schissel unveil Starkey's groundbreaking Edge AI platform and the accessories that are revolutionizing the hearing experience. This fascinating deep dive showcases how cutting-edge innovation is removing barriers for people with hearing loss in ways previously unimaginable.

At the heart of this conversation is Starkey's Deep Neural Network technology, delivering up to a 13 dB improvement in signal-to-noise ratio while maintaining all-day battery life—essentially turning a noisy restaurant into a comfortable listening environment. The duo explores their newest accessories: the Table Microphone, with its sophisticated multi-array system for group settings, and the Remote Microphone+, which doubles as a hearing aid controller, eliminating the need for multiple devices.

Perhaps most exciting is the discussion around Auracast technology, the future of wireless audio connectivity. Imagine receiving airport announcements, museum audio tours, or television audio directly through your hearing aids without additional equipment. This Bluetooth-on-steroids technology will transform accessibility in public spaces, not just for hearing aid users but for anyone using compatible devices.

The conversation highlights Starkey's philosophy of fitting both the hearing loss and the lifestyle—recognizing that even the best hearing aids sometimes need accessories for optimal performance in challenging environments. With Edge Mode Automatic, users can engage enhanced settings for difficult listening situations that automatically adjust as environments change, providing a seamless, intuitive experience.

Ready to experience hearing technology that adapts to your life instead of the other way around? Discover how these innovations can transform connections to loved ones, workplace environments, and everyday experiences with unprecedented clarity and convenience.
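That 13 dB signal-to-noise figure is easier to appreciate in linear terms. A quick back-of-envelope conversion using the standard decibel formula (this sketch is ours, not Starkey's):

```python
# A gain in decibels relates to a linear power ratio by
# dB = 10 * log10(ratio), so ratio = 10 ** (dB / 10).
# A +13 dB SNR improvement therefore multiplies the
# signal-to-noise power ratio by roughly 20x.

def db_to_power_ratio(db: float) -> float:
    """Return the linear power ratio corresponding to a gain in decibels."""
    return 10 ** (db / 10)

if __name__ == "__main__":
    improvement_db = 13.0
    ratio = db_to_power_ratio(improvement_db)
    print(f"A {improvement_db:.0f} dB SNR improvement is a ~{ratio:.0f}x power ratio")
```

In other words, speech that was barely distinguishable from restaurant noise ends up roughly twenty times stronger relative to that noise, which is why the episode frames it as turning a noisy room into a comfortable one.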
Connect with the Hearing Matters Podcast Team
Email: hearingmatterspodcast@gmail.com
Instagram: @hearing_matters_podcast
Twitter: @hearing_mattas
Facebook: Hearing Matters Podcast
We've been talking about smarter devices for years - but what does progress actually look like today? In this episode of our spin-off series Edge of Tomorrow: The Edge AI Debate, created in collaboration with the @edgeaifoundation, host Tom White heads to NXP Semiconductors' Smart Home Lab to sit down with Davis Sawyer, AI Product Manager, and Anthony Huereca, Senior Embedded Systems Engineer. Together, they explore where Edge AI really stands in 2025, from smaller, more efficient models to the gap between what people want and what's actually being built. They dive into the current state of embedded intelligence, real-world applications in the smart home, technical and practical challenges, and what the future might look like, with a focus on what's actually working today. Expect thoughts on...
As the use of AI spreads outside the cloud and datacenter, data management and protection have never been more important. This episode of Utilizing Tech, sponsored by Solidigm, features Rick Vanover of Veeam discussing the importance of data protection with hosts Stephen Foskett and Scott Shadley. Data protection for backup and security is familiar in the datacenter but is not always considered at the edge. But as AI and sensors push data to the edge, we have to consider how to protect and manage it. Edge AI systems combine elements of endpoint, datacenter, and cloud technologies and often have limited or intermittent connectivity, creating a differentiated platform that challenges data protection software. Mixing these capabilities to build a useful platform is the key challenge for data protection at the edge. Data protection at the edge is also different in its priorities: continuity and recovery from outages and attacks matter more than long-term preservation of data, and IT pros are beginning to adjust their perspectives on the requirements for these systems as well.

Guest: Rick Vanover, Vice President of Product Strategy at Veeam
Hosts: Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series; Scott Shadley, Leadership Narrative Director and Evangelist at Solidigm

Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon. Visit the Tech Field Day website for more information on upcoming events. For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.
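The intermittent-connectivity point is worth making concrete: an edge node typically has to journal data locally and replay it when the uplink returns, rather than assuming a datacenter-style always-on backup target. A minimal sketch of that pattern (the names here are hypothetical illustrations, not Veeam's API):

```python
from collections import deque

class EdgeBuffer:
    """Journal readings locally; flush to the backup target when a link exists."""

    def __init__(self):
        self.pending = deque()  # local journal (a real system would persist this to disk)
        self.synced = []        # stand-in for the remote backup target

    def record(self, reading):
        # Recording always succeeds, even while offline.
        self.pending.append(reading)

    def flush(self, link_up: bool) -> int:
        """Replay journaled readings in order if the uplink is available."""
        if not link_up:
            return 0
        sent = 0
        while self.pending:
            self.synced.append(self.pending.popleft())
            sent += 1
        return sent

buf = EdgeBuffer()
for r in (1, 2, 3):
    buf.record(r)
buf.flush(link_up=False)  # outage: nothing leaves the device
buf.flush(link_up=True)   # link restored: journal replays in order
```

The design choice this illustrates is exactly the one discussed in the episode: at the edge, the first job of data protection is surviving the outage and recovering in order, not archiving for the long term.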
April 3, 2025 - Assemblymember Alex Bores has proposed safeguards on the most cutting-edge developments in artificial intelligence technology, but the tech industry is pushing back on this type of government regulation. We hear some of those concerns from Todd O'Boyle, vice president of technology policy at the Chamber of Progress.
Recorded at the Dutch Cloud Native Day, hosts Ronald Kers (CNCF Ambassador) and Jan Stomphorst (Solutions Architect at ACC ICT) dive into one of the most original edge computing use cases we've seen so far—running a full Kubernetes environment from the top of a tractor. They speak with Wieneke Keller, CTO at Aurea Imaging, and Sebastian Lenartowicz, Senior Software Engineer on the TreeScout project, about how Kubernetes—specifically K3s—is transforming apple and pear orchards across Europe.
In this episode, we sit down with robotics powerhouse Sue Keay - Chair of the Robotics Australia Board, Director of UNSW's AI Institute, and a long-time champion of Australia's robotics and AI industries. We dig into the reality behind Australia's slip in global robotics density rankings - and why that might be painting the wrong picture. From mining to logistics, agtech to space, Sue shares where Australia is quietly excelling and where the next wave of innovation is set to emerge. The conversation spans: Why Australia's strength in field robotics could shape the global future of automation How AI is breaking down barriers to robotics adoption for SMEs The real impact of automation on jobs - and how to navigate it The urgent need for a national AI strategy and infrastructure plan Practical steps for businesses to kickstart their automation journey (hint: it starts with a whiteboard, not a robot) Whether you're deep in industry or just automation-curious, Sue brings clarity, insight, and optimism to a space that's moving fast.
OpenMV has a new Kickstarter, so CEO Kwabena Agyeman chatted with us about more powerful (and smaller!) programmable cameras. See OpenMV's site for their existing cameras. See their (already funded!) Kickstarter page for the super powerful N6 and the ridiculously small AE3. Note that OpenMV is still committed to open source; see their GitHub if you want to know more. Edge AI is the idea of putting intelligence in the devices (instead of in the cloud). There is an advocacy and education foundation called the Edge AI Foundation; this organization was formerly the TinyML Foundation. Edge Impulse and Roboflow are companies that aid in creating and training AI models that can be put on devices. ARM talks about their Ethos-U55 NPU (and how to write software for it). Transcript
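The "intelligence in the devices" idea boils down to running the model where the data is captured and shipping only results upstream, instead of streaming raw frames to the cloud. A toy sketch of that loop (the threshold "model" here is a hand-rolled stand-in, not OpenMV's actual API):

```python
# Toy edge-AI loop: classify each sensor frame on-device and only
# report events, instead of streaming raw data to the cloud.

def classify(frame: list[float], threshold: float = 0.5) -> bool:
    """Stand-in for an on-device model: flag frames whose mean exceeds a threshold."""
    return sum(frame) / len(frame) > threshold

def edge_loop(frames) -> list[int]:
    """Run inference locally and collect only the indices of triggering frames."""
    events = []
    for i, frame in enumerate(frames):
        if classify(frame):       # inference happens on the device
            events.append(i)      # only the verdict would go upstream
    return events

frames = [[0.1, 0.2], [0.8, 0.9], [0.4, 0.3], [0.7, 0.6]]
print(edge_loop(frames))  # frames 1 and 3 trigger events
```

In a real OpenMV or TinyML deployment the `classify` step would be a trained neural network running on an NPU like the Ethos-U55 mentioned above, but the bandwidth and latency win is the same: verdicts are tiny compared to raw camera frames.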
In this episode we sit down with Alex Hawkinson, the visionary founder of SmartThings and CEO of Bright AI. Alex takes us on a journey from revolutionizing home automation to tackling the world's most outdated infrastructure. Discover how AI, IoT, and real-time automation are transforming our cities, utilities, and industries, and why blue-collar workers are the biggest beneficiaries of this technological revolution. Learn how Bright AI is turning outdated systems into intelligent networks that save money, protect the environment, and improve lives. If you're passionate about the future of technology and its potential to solve real-world problems, this episode is a must-listen.

What You'll Discover in This Episode:
- From Smart Homes to Smart Cities: How Alex Hawkinson's experience with SmartThings laid the groundwork for Bright AI.
- The Problem with Aging Infrastructure: Why our water, energy, and other critical systems are in dire need of modernization.
- The Power of Physical AI: How IoT, AI, and real-time automation are revolutionizing infrastructure.
- Autonomous Inspection and Maintenance: How robots and AI are inspecting and repairing pipelines and other critical systems.
- Augmenting Human Workers: How wearables and AI co-pilots are empowering blue-collar workers.
- The Economic and Environmental Impact: How these technologies are driving productivity, efficiency, and sustainability.
- The Future of Infrastructure: How Bright AI's platform is enabling the creation of a "physical graph" of intelligent systems.
- The Potential of Edge AI and Sensors: How devices can learn and adapt to their environments, leading to new and unexpected use cases.
Smart homes are getting smarter, but at what cost? Are we truly building intelligent, secure spaces, or are we creating a fragile ecosystem that could fail when we need it most?
The modular data center industry is undergoing a seismic shift in the age of AI, and few are as deeply embedded in this transformation as Andrew Lindsey, Co-Founder and CEO of Flexnode. In a recent episode of the Data Center Frontier Show podcast, Lindsey joined DCF Editor-in-Chief Matt Vincent and Senior Editor David Chernicoff to discuss the evolution of modular and edge data centers, the growing demand for high-density liquid-cooled solutions, and the industry factors driving this momentum.

A Background Rooted in Innovation

Lindsey's career has been defined by the intersection of technology and the built environment. Prior to launching Flexnode, he worked at Alpha Corporation, a top-100 engineering and construction management firm founded by his father in 1979. His early career involved spearheading technology adoption within the firm, with a focus on high-security infrastructure for both government and private clients. Recognizing a massive opportunity in the data center space, Lindsey saw a need for an innovative approach to infrastructure deployment. "The construction industry is relatively uninnovative," he explained, citing a McKinsey study that ranked construction as the second least-digitized industry—just above fishing and wildlife, which remains deliberately undigitized. Given the billions of square feet of data center infrastructure required in a relatively short timeframe, Lindsey set out to streamline and modernize the process. Founded four years ago, Flexnode delivers modular data centers with a fully integrated approach, handling everything from site selection to design, engineering, manufacturing, deployment, operations, and even end-of-life decommissioning. Their core mission is to provide an "easy button" for high-density computing solutions, including cloud and dedicated GPU infrastructure, allowing faster and more efficient deployment of modular data centers.
The Rising Momentum for Modular Data Centers

As Vincent noted, Data Center Frontier has closely tracked the increasing traction of modular infrastructure. Lindsey has been at the forefront of this shift, witnessing the market evolve significantly over the last five years. "Five years ago, we were looking at a graveyard of modular data center companies that leaned heavily on the edge," Lindsey recalled. Many early modular providers focused on latency-sensitive, interconnected solutions—such as base stations at 5G tower sites. However, the market proved premature, hindered by high costs and the scale of deployment within the telecommunications industry. Now, macroeconomic and technological factors have driven a fundamental shift toward modular data centers. One of the most significant drivers is the rapid evolution of chip design. "A traditional data center design cycle can take a year or 18 months," Lindsey explained. "But if we see radical Nvidia chip advancements every 12 months, your design could be obsolete before you even break ground." The need for embedded flexibility within data center design has made modular solutions an ideal fit.

Labor Scarcity and the Need for Efficiency

Another factor accelerating the adoption of modular infrastructure is the labor shortage in construction. "There just aren't enough people today to build the scale of infrastructure needed for data centers," Lindsey noted. Compounding the issue is an aging workforce, with many skilled professionals nearing retirement. "When they leave, they take decades of institutional knowledge with them." Modular construction mitigates this problem by shifting labor-intensive processes to manufacturing environments where technical expertise is concentrated. By centralizing production, modular providers can reduce reliance on dispersed construction labor while maintaining high precision and efficiency.
Liquid Cooling and the Future of High-Density Deployments

Flexnode is also a leader in the adoption of high-density liquid-cooled infrastructure. Lindsey attended the CoolerChips event last year and has been vocal about the advantages of liquid cooling for modern workloads. "More recently, modular is everywhere," he said. "We at Flexnode are seeing demand hand over fist for high-density liquid-cooled systems that integrate seamlessly with broader building designs." This demand underscores the shift from the speculative modular edge deployments of five years ago to today's high-performance, flexible data center solutions. "Modular is no longer just a niche," Lindsey concluded. "It's a critical strategy for meeting the growing demand for scalable, high-efficiency data center capacity." The realization that liquid cooling would become a building-wide challenge, rather than just an IT challenge, was a pivotal moment for Flexnode. "Four years ago, we recognized that liquid cooling, which had been around for 10 to 15 years in government and research, was now commercially viable. But very few data centers were truly equipped to deploy it to its full potential," Lindsey explained. Flexnode identified an opportunity to deliver integrated liquid-cooled modules that connect IT systems to building infrastructure through a fully embedded design. Rather than developing proprietary liquid cooling technology, Flexnode focuses on being "liquid neutral." "The liquid cooling market is advancing well on its own," Lindsey said. "We want to enable OEM-driven solutions like JetCool, Motivair, Isotope, and ZutaCore, ensuring they perform optimally in an environment designed to support them." Flexnode operates at the building scale, working on innovative heat management strategies that eliminate the need for external heat rejection. "We integrate heat rejection into the panelized construction of our modular data centers," Lindsey explained.
This approach pushes forward a broader, integrated building design suited for liquid cooling.

The Shift Toward Hybrid and Two-Phase Liquid Cooling

David Chernicoff asked Lindsey whether Flexnode leans toward specific liquid cooling methodologies, such as waterless, multi-phase, or single-phase solutions. Lindsey responded that their focus aligns with OEM and ODM preferences. "Right now, we're primarily working with direct-to-chip water-based single-phase cooling," Lindsey said. "But as part of our work with the Cooler Chips program, we're also developing a hybrid immersion approach with Isotope." This hybrid method integrates both direct-to-chip and immersion cooling. The industry is currently debating whether to move to a single-phase hybrid approach or leapfrog directly to two-phase cooling. "The big challenge with two-phase is the environmental impact of certain chemicals used in the process," Lindsey noted. While companies are actively working to address these concerns, two-phase cooling remains a complex consideration. Even Nvidia is leaning toward a two-phase future. "From what we've heard at CoolerChips, Nvidia sees the next generation as being two-phase oriented," Lindsey said. "But they can speak better to that." With liquid cooling now firmly part of the mainstream conversation, the challenge is not just about advancing the technology but also ensuring that the surrounding infrastructure evolves to support it. Flexnode's approach—integrating liquid cooling at the building level—positions them at the leading edge of this shift.

Customer Demands Drive Cooling Technology Choices

As the industry evolves, cooling technology decisions are increasingly shaped by customer preferences. "Right now, it's very much customer-driven for us," Lindsey explained. "We're working with sophisticated customers—hyperscalers and GPU-as-a-service providers—who already know what they want to deploy."
While some enterprises may still be evaluating their liquid cooling options, hyperscalers are looking beyond traditional single-phase approaches, including both dielectric and water-based cooling. However, Lindsey emphasized that many of these developments remain in the R&D phase. "We don't typically recommend one technology over another unless there's a clear drawback," he said. One challenge with direct-to-chip cooling, for example, is achieving full heat absorption into the liquid. "That's where hybrid approaches come in," Lindsey noted. He described hybrid designs that integrate both two-phase direct-to-chip cooling and immersion cooling, as seen in the CoolerChips program. "In some cases, direct-to-chip is single-phase, in others, it's two-phase. We're working as a category B provider, helping integrate these technologies at the building level." Lindsey also touched on sustainability concerns, particularly around immersion cooling. "Immersion is seen as the most sustainable in terms of energy efficiency, but there are still questions about how immersion fluids impact server longevity over time," he said. Factors like glue degradation and cable insulation breakdown raise questions about immersion cooling's long-term sustainability profile. Two-phase cooling also presents challenges. "There's an ongoing discussion about PFAS and finding non-toxic, non-carcinogenic alternatives," Lindsey explained. "Beyond that, two-phase cooling can create cavitational forces that affect motherboard and chip integrity over time. That's why many in the industry—including Nvidia—are still weighing the trade-offs." With liquid cooling now firmly in the mainstream, the industry's next challenge is integrating these technologies seamlessly into modular data centers. "It's not just about cooling IT gear anymore; it's about designing buildings that fully support liquid cooling at scale," Lindsey concluded. 
Flexnode's modular approach positions them at the forefront of this transformation.

Modular Configurations and Integrated Power Solutions

Finally, Flexnode's modular approach offers extreme configurability. "Our modules can be standalone data centers or integrated into powered shell facilities," Lindsey explained. "We configure everything from 2 MW to 20 MW standalone deployments, and we can scale up to 200 MW campuses." Beyond footprint flexibility, power integration is a growing focus. "On-prem generation is gaining traction, particularly with fuel-agnostic generators that can switch between natural gas, hydrogen, methane, and propane," Lindsey noted. Collaborating with partners like Hyliion, Flexnode is exploring adaptable power solutions, including fuel cells. Being behind the meter is another key driver. "Utilities are getting smarter about power allocation," Lindsey said. "In Europe, data centers are facing use-it-or-lose-it policies, and in the U.S., regions like Ohio are imposing tariffs on unused capacity." On-site power generation provides greater flexibility, helping data centers scale more efficiently and participate in curtailment programs that balance grid demand.

Looking Ahead

As modular data centers become a core part of the industry landscape, Flexnode is pushing the boundaries of what's possible. "We see modular as a natural extension of utilities—a distributed solution that enhances flexibility," Lindsey concluded. "And we're just getting started."
Edge AI is one of the most popular topics in the embedded community. It's the place where decisions are made, data is analyzed, and more. It's also a place where lots of confusion arises because of its complexity, especially from a development perspective. To get to the heart of where the challenges lie in this phase of design, I spoke with Jim Beneke, a Vice President at Tria Americas, an Avnet company, on this week's Embedded Executives podcast. Jim has a pretty deep history in the embedded space and served as a great resource on this topic.
This episode features an interview between Bill Pfeifer and Ananta Nair, Artificial Intelligence Engineer at Dell Technologies, where she leads AI and ML software initiatives for large enterprises. Ananta discusses differences between human learning and AI models, highlighting the complexities and limitations of current AI technologies. She also touches on the potential and challenges of AI in edge computing, emphasizing the importance of building efficient, scalable, and business-focused models.

--------
Key Quotes:
“It's very hard to take these AI structures and say that they can do all of these very complex things that humans can do, when they're architecturally designed very differently. I'm a big fan of biologically inspired, not biologically accurate.”
“Do you really need AGI for a lot of real world applications? No, you don't want that. Do you want some really complex system where you have no idea what you're doing, where you're pouring all this money in and, you know, you're not really getting the results that you want? No, you want something very simple: an Occam's razor approach. Make it as simple as possible, scalable, can adapt, can measure for all of your business metrics.”
“We have reached a point where you can do the most with AI models with minimal compute than ever. And so I think that is very exciting. I think we have reached a point where you have very capable models that you can deploy at the edge and I think there's a lot of stuff happening in that realm.”

--------
Timestamps:
(01:20) How Ananta got started in tech and neuroscience
(04:59) Human learning vs AI learning
(15:11) Explaining dynamical systems
(26:57) Exploring AI agents and human behavior
(30:43) Edge computing and AI models
(32:58) Advancements in AI model efficiency

--------
Sponsor:
Edge solutions are unlocking data-driven insights for leading organizations.
With Dell Technologies, you can capitalize on your edge by leveraging the broadest portfolio of purpose-built edge hardware, software, and services. Leverage AI where you need it; simplify your edge; and protect your edge to generate competitive advantage within your industry. Capitalize on your edge today with Dell Technologies.

Credits:
Over the Edge is hosted by Bill Pfeifer, and was created by Matt Trifiro and Ian Faison. Executive producers are Matt Trifiro, Ian Faison, Jon Libbey and Kyle Rusca. The show producer is Erin Stenhouse. The audio engineer is Brian Thomas. Additional production support from Elisabeth Plutko.

Links:
Follow Ananta on LinkedIn
Follow Bill on LinkedIn
From a developer's perspective, Edge AI is anything but simple. Any tools, software, etc., that can simplify the process of integrating this technology into a system are very welcome. That's where ClearBlade comes in, with a framework that developers "design to," using hardware, software, and even terminology that the design community is accustomed to. That process may seem vague and complex, but it needn't be, as Aaron Allsbrook, Co-Founder and CTO of ClearBlade, explains on this week's Embedded Executives podcast.
In this episode of the IoT For All Podcast, Fabrizio Del Maffeo, co-founder and CEO of Axelera AI, joins Ryan Chacon to discuss edge AI. The conversation covers the importance and benefits of edge AI, such as reduced latency, real-time decision-making, and enhanced privacy; optimizing algorithms and hardware design for edge devices; the potential of AI in various industries; the role of cloud computing; retrofitting existing solutions with AI; and the impact of generative AI.

Fabrizio Del Maffeo is co-founder and CEO of Axelera AI, a Netherlands-based startup building scalable hardware for AI at the edge. Fabrizio leads a world-class executive team, board of directors, and advisors from top AI Fortune 500 companies. Previously, Fabrizio was Vice President and Managing Director of AAEON Technology Europe, the AI and IoT computing company within the ASUS Group. Fabrizio graduated with a Master's degree in telecommunication engineering from Politecnico di Milano.

Axelera AI is on a mission to provide rapid access to advanced Edge AI-native hardware and software solutions for companies of all sizes across a range of market verticals, and to place AI in the hands of those who could not otherwise afford it. They do this by delivering faster, more efficient, and easy-to-use inference acceleration while minimizing power and cost.
To do this, their platform is purpose-built to support AI strategies across a wide range of industries while seamlessly integrating with existing technologies.

Discover more about IoT at https://www.iotforall.com
Find IoT solutions: https://marketplace.iotforall.com
More about Axelera AI: https://www.axelera.ai
Connect with Fabrizio: https://www.linkedin.com/in/delmaffeo/

(00:00) Intro
(00:10) Fabrizio Del Maffeo and Axelera AI
(01:20) What is edge AI?
(02:30) Benefits and challenges of edge computing
(05:17) Privacy and compliance in edge AI
(06:26) Future of edge computing and AI
(08:18) Retrofitting existing edge devices with AI
(11:02) Role of cloud computing
(12:26) Impact of generative AI
(15:24) Industry insights from recent events
(17:11) Learn more and follow up

Subscribe to the Channel: https://bit.ly/2NlcEwm
Join Our Newsletter: https://newsletter.iotforall.com
Follow Us on Social: https://linktr.ee/iot4all
How have data centers evolved and how will we adapt to meet modern data storage demands? In this episode Bill sits down with John Bonczek, Chief Revenue Officer of fifteenfortyseven Critical Systems Realty, a leading developer of highly interconnected, custom-designed data centers. They discuss the impact of AI on power and cooling requirements and the growing interest in edge deployments. John also highlights the importance of planning, customer requirements, and the challenges of building data centers to meet modern demands.

Key Quotes:
“And I believe everyone in the space, including AI and hyperscalers, are planning further ahead, as well. Partially because you have to, but you look at the consumption of some of these AI companies that are coming in and gobbling up all of the available inventory out there that meets their needs, it's just causing a lack of inventory to be available.”
“There's going to be a next wave from AI, more of the inference applications that are more of the edge applications that require lower latency and more real time compute and learning.”

Timestamps:
(01:45) How John got started in tech
(06:45) Data centers and edge deployments
(11:13) Challenges in modern data centers
(20:29) The role of AI
(31:01) Power and sustainability in data centers
(35:50) Nomad Futurists and the future workforce

Sponsor:
Edge solutions are unlocking data-driven insights for leading organizations. With Dell Technologies, you can capitalize on your edge by leveraging the broadest portfolio of purpose-built edge hardware, software and services. Leverage AI where you need it; simplify your edge; and protect your edge to generate competitive advantage within your industry. Capitalize on your edge today with Dell Technologies.

Credits:
Over the Edge is hosted by Bill Pfeifer, and was created by Matt Trifiro and Ian Faison. Executive producers are Matt Trifiro, Ian Faison, Jon Libbey and Kyle Rusca. The show producer is Erin Stenhouse.
The audio engineer is Brian Thomas. Additional production support from Elisabeth Plutko.

Links:
Follow John on LinkedIn
Follow Bill on LinkedIn
How does the Tour de France collect data and provide real-time stats to viewers? Shahid Ahmed, EVP for New Ventures and Innovations at NTT, argues that the work they do to track the Tour de France is the ultimate edge use case. He also dives into small AI models and localized data at the edge, as well as educating for the future of jobs in tech.

Key Quotes:
“I think [tracking the Tour de France] is one of the most fascinating edge use cases that I've seen in a while…. We've got cyclists going at north of 50 miles an hour, 60 miles an hour sometimes, and they're moving across a variety of different towns, villages, mountains, always a challenge from a network coverage perspective. It's a very difficult equation just from that movement part.”
“If you tune in to NBC, which covers the Tour de France, or USA Network, which is the sister network, you'll always see the live coverage of stats…. Live data is a must-have and it has to have very small latency, meaning it can't be five seconds later when the cyclist has gone down or finished across the line.”

Timestamps:
(01:32) How Shahid got started in tech
(04:55) Innovations in 5G
(09:06) Challenges and innovations in the Tour de France
(18:01) Edge AI and industrial applications
(27:38) Lessons from a global role and regulatory challenges
(34:03) Teaching Industrial IoT and preparing the workforce

Sponsor:
Edge solutions are unlocking data-driven insights for leading organizations. With Dell Technologies, you can capitalize on your edge by leveraging the broadest portfolio of purpose-built edge hardware, software and services. Leverage AI where you need it; simplify your edge; and protect your edge to generate competitive advantage within your industry. Capitalize on your edge today with Dell Technologies.

Credits:
Over the Edge is hosted by Bill Pfeifer, and was created by Matt Trifiro and Ian Faison. Executive producers are Matt Trifiro, Ian Faison, Jon Libbey and Kyle Rusca.
The show producer is Erin Stenhouse. The audio engineer is Brian Thomas. Additional production support from Elisabeth Plutko.

Links:
Follow Shahid on LinkedIn
Follow Bill on LinkedIn
In this episode of Arm Viewpoints, host Brian Fuller chats with SpaceTech CEO Sean Ding about the transformative role of Edge AI in smart buildings. Sean shares insights from his journey at Intel, Huawei and Alibaba, leading to his work at Vanke's SpaceTech, where he's driving the digitization of 1 billion square meters of real estate in China (four times the size of Washington, D.C.). They discuss the integration of Arm-based edge servers, AI for energy efficiency and the future of smart building technology. Tune in to explore how innovation is reshaping property management and advancing green building initiatives.
What will AI look like in 2025?
How are AI and edge computing revolutionizing the retail industry? In this episode, Bill sits down with Omar El Gohary, Vice President and General Manager of IoT Solutions at ServiceNow. Omar shares how AI and edge computing are transforming retail by optimizing operational efficiencies and reshaping customer experiences. He also delves into the challenges of managing distributed data, the impact of GenAI, and the opportunities for innovation in today's fast-paced retail environment.

Key Quotes:
“Technology that is use case driven, that brings you benefit, that's where retailers have to go and that's where the benefits lie.”
“You do see that retailers that have more robust processes, that have integrated their entire value chain, that are able to act on data and be proactive and not reactive, will definitely have an advantage.”
“One of the key words is how do you reduce the barrier to the adoption of the use cases, right? And that is by having the correct edge device, by having the right data approach and bringing the silos together, and having the right AI components.”

Timestamps:
(04:27) Evolving with technology in retail
(06:33) Challenges in digitizing the retail industry
(14:38) Technology adoption and its impact on retail
(18:00) Breaking down data silos
(27:03) Managing distributed data at the edge
(39:03) GenAI's role in transforming retail

Sponsor:
Edge solutions are unlocking data-driven insights for leading organizations. With Dell Technologies, you can capitalize on your edge by leveraging the broadest portfolio of purpose-built edge hardware, software and services. Leverage AI where you need it; simplify your edge; and protect your edge to generate competitive advantage within your industry. Capitalize on your edge today with Dell Technologies.

Credits:
Over the Edge is hosted by Bill Pfeifer, and was created by Matt Trifiro and Ian Faison. Executive producers are Matt Trifiro, Ian Faison, Jon Libbey and Kyle Rusca.
The show producer is Erin Stenhouse. The audio engineer is Brian Thomas. Additional production support from Elisabeth Plutko.

Links:
Follow Omar on LinkedIn
Follow Bill on LinkedIn
Reimagining Intelligent Modern Retail: Redefining the Retail Experience for Tomorrow
Learn more about ServiceNow:
https://www.servicenow.com/industries/retail.html
https://www.servicenow.com/products/retail-operations.html
https://www.servicenow.com/company/media/press-room/now-assist-ai-industry-solutions.html
Send Everyday AI and Jordan a text message

Did Google dethrone OpenAI in December? Could Claude finally catch up? Or will ChatGPT reign supreme in the LLM race of 2025? We break down the LLM landscape in 2025.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. OpenAI's Dominance in LLM Space
2. Shift Towards Smaller Language Models
3. LLMs and Internet Connectivity
4. OpenAI's Focus and Competition
5. Popularity of Different AI Systems
6. LLM Predictions and Outcomes

Timestamps:
00:00 Discussing future competition in AI development
02:20 Daily AI news
08:49 Importance of large language models in daily life
10:31 Who will win AI race: OpenAI, Google, others?
16:30 Focus will shift to business value in 2025
17:20 New AI models surpass human intelligence benchmarks
23:13 ChatGPT search frequently stuck in loop
26:06 OpenAI loses money, key staff, needs resources
27:51 Listen to WorkLab podcast for AI insights
30:54 ChatGPT dominates due to user interest
34:46 Google hid AI tech, affecting market value
38:03 ChatGPT's interface outpaces stagnant competitors' features
42:49 Locally run massive models, GPT advancements impressive
45:09 OpenAI excels in small language models locally
48:33 Google's groundbreaking December 2024 in AI technology

Keywords:
OpenAI, ChatGPT, Google Trends, AI systems, Gemini, Perplexity, Claude, large language models, front-end dominance, internet connectivity, ChatGPT search, financial challenges, competition, AI strengths, user data, Microsoft WorkLab Podcast, model parameters, Meta, GPT-4, NVIDIA, small language models, Edge AI, AI race, Everyday AI podcast, Microsoft investment in India, Google DeepMind, NVIDIA RTX 50 series GPU, artificial intelligence, AI knowledge,
reasoning models. Learn how work is changing on WorkLab, available wherever you get your podcasts.
In today's episode, Miguel Muñoz and Eugenio Garibay analyze the companies that have started 2025 strong, exploring cases such as the collaboration between Synaptics and Google on Edge AI, Nvidia's "nirvana moment," the impact of the economic slowdown on Chinese stocks, and the resurgence of cryptocurrency-related stocks. They also highlight significant improvements at companies like Cloudflare and RTX, as well as strategic moves on Goldman Sachs' buy list. Simplifying complex terms and offering historical context, they present a clear view of current economic and technological trends, helping listeners understand how these events could influence the market this year.
Send Everyday AI and Jordan a text message

Google just dropped its 'Flash Thinking' reasoning model. Is it better than o1? ↳ Why is NVIDIA going small? ↳ And OpenAI announced its o3 model. Why did it skip o2? ↳ ChatGPT's Advanced Voice Mode gets a ton of updates. What do they do? Here's this week's AI News That Matters!

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Google AI Models and Updates
2. ChatGPT Updates
3. ChatGPT Advanced Voice Mode
4. OpenAI's New Reasoning Model
5. Salesforce AI Updates
6. NVIDIA Jetson Orin Nano
7. Meta's Ray-Ban Smart Glasses

Timestamps:
00:00 Daily AI news podcast and newsletter subscription
05:06 Gemini 2.0 Flash tops in LLM rankings
07:36 Ray-Ban Meta Glasses: AI video, translation available
11:17 Salesforce hires humans to sell AI product
14:26 NVIDIA's Nano Super boosts AI performance, affordability
18:29 Veo 2 excels with 4K quality physics videos
23:17 AI models can deceive by faking alignment
24:50 Anthropic study highlights AI system behavior variability
29:40 Google previews AI-mode search with chatbot features
32:38 Big publishers block access; tech must adapt
37:32 ChatGPT updates improve app integration functionality
40:40 o3 models enhance AI task adoption
44:09 Current AI hasn't achieved AGI, needs tool use
45:52 o3 model may achieve AGI, costly access
48:17 Share our AI content; support appreciated

Keywords:
Google AI Mode, Gemini AI chatbot, refining searches, ChatGPT updates, OpenAI, AI integration in search engines, Salesforce AgentForce 2.0, Capgemini survey, AI security risks, NVIDIA Jetson Orin Nano, Edge AI, Google Veo 2, video generation model, YouTube Shorts, AI alignment faking, Anthropic research, Google's Gemini 2.0 Flash
Thinking, multimodal reasoning, Meta's Ray-Ban Smart Glasses, real-time language translation, Shazam integration, OpenAI o3 reasoning model, artificial general intelligence, ARC AGI benchmark, AI capabilities, high costs of AI, Google updates, Meta updates, Salesforce updates, NVIDIA updates.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all our LS supporters who helped fund the venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage, interviewing selected papers (as we have since also done for ICLR and ICML); however, we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

Since Nathan Lambert (Interconnects) joined us for the hit RLHF 201 episode at the start of this year, it is hard to overstate how much open models have exploded this past year. In 2023, only five names were playing in the top LLM ranks: Mistral, Mosaic's MPT, TII UAE's Falcon, Yi from Kai-Fu Lee's 01.ai, and of course Meta's Llama 1 and 2. This year a whole cast of new open models have burst on the scene, from Google's Gemma and Cohere's Command R, to Alibaba's Qwen and DeepSeek models, to LLM360 and DCLM, and of course to the Allen Institute's OLMo, OLMoE, Pixmo, Molmo, and Olmo 2 models. We were honored to host Luca Soldaini, one of the research leads on the Olmo series of models at AI2.

Pursuing open model research comes with a lot of challenges beyond just funding and access to GPUs and datasets, particularly the regulatory debates this year across Europe, California, and the White House.
We were also honored to hear from Sophia Yang, head of devrel at Mistral, who also presented a great session at the AI Engineer World's Fair Open Models track!

Full Talk on YouTube
Please like and subscribe!

Timestamps
* 00:00 Welcome to Latent Space Live
* 00:12 Recap of 2024: Best Moments and Keynotes
* 01:22 Explosive Growth of Open Models in 2024
* 02:04 Challenges in Open Model Research
* 02:38 Keynote by Luca Soldaini: State of Open Models
* 07:23 Significance of Open Source AI Licenses
* 11:31 Research Constraints and Compute Challenges
* 13:46 Fully Open Models: A New Trend
* 27:46 Mistral's Journey and Innovations
* 32:57 Interactive Demo: Le Chat Capabilities
* 36:50 Closing Remarks and Networking

Transcript

[00:00:00] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. As a special treat this week, we're recapping the best of 2024, going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space network to cover each field.

[00:00:28] AI Charlie: 200 of you joined us in person throughout the day, with over 2,200 watching live online. Our next keynote covers the state of open models in 2024, with Luca Soldaini and Nathan Lambert of the Allen Institute for AI, with a special appearance from Dr. Sophia Yang of Mistral. Our first hit episode of 2024 was with Nathan Lambert on RLHF 201 back in January,

[00:00:57] AI Charlie: where he discussed both reinforcement learning for language [00:01:00] models and the growing post-training and mid-training stack, with hot takes on everything from constitutional AI to DPO to rejection sampling, and also previewed the sea change coming to the Allen Institute.
And to Interconnects, his incredible Substack on the technical aspects of state-of-the-art AI training.

[00:01:18] AI Charlie: We highly recommend subscribing to get access to his Discord as well. It is hard to overstate how much open models have exploded this past year. In 2023, only five names were playing in the top LLM ranks: Mistral, Mosaic's MPT, TII UAE's Falcon, Yi from Kai-Fu Lee's 01.ai, and of course, Meta's Llama 1 and 2.

[00:01:43] AI Charlie: This year, a whole cast of new open models have burst on the scene, from Google's Gemma and Cohere's Command R, to Alibaba's Qwen and DeepSeek models, to LLM360 and DCLM, and of course, to the Allen Institute's OLMo, [00:02:00] OLMoE, Pixmo, Molmo, and Olmo 2 models. Pursuing open model research comes with a lot of challenges beyond just funding and access to GPUs and datasets, particularly the regulatory debates this year across Europe,

[00:02:14] AI Charlie: California, and the White House. We also were honored to hear from Mistral, who also presented a great session at the AI Engineer World's Fair Open Models track. As always, don't forget to check the show notes for the YouTube link to their talk, as well as their slides. Watch out and take care.

[00:02:35] Luca Intro

[00:02:35] Luca Soldaini: Cool. Yeah, thanks for having me over. I'm Luca. I'm a research scientist at the Allen Institute for AI. I threw together a few slides on sort of a recap of interesting themes in open models for 2024. I have about maybe 20, 25 minutes of slides, and then we can chat if there are any questions.

[00:02:57] Luca Soldaini: If I can advance to the next slide. [00:03:00] Okay, cool. So I did a quick check, to sort of get a sense of how much 2024 was different from 2023.
So I went on Hugging Face and tried to get a picture of what kind of models were released in 2023 and what we got in 2024.

[00:03:16] Luca Soldaini: In 2023 we got things like both Llama 1 and 2, we got Mistral, we got MPT, Falcon models, and I think the Yi model came in at the tail end of the year. It was a pretty good year. But then I did the same for 2024, and it's actually quite a stark difference. You have models that are, you know, rivaling the frontier-level

[00:03:38] Luca Soldaini: performance of what you can get from closed models, from like Qwen, from DeepSeek. We got Llama 3. We got all sorts of different models. I added our own Olmo at the bottom. There's this growing group of fully open models that I'm going to touch on a little bit later. But you know, just looking at the slides, it feels like 2024 [00:04:00] was just smooth sailing, happy days, much better than the previous year.

[00:04:04] Luca Soldaini: And you know, you can pick your favorite benchmark, or least favorite, I don't know, depending on what point you're trying to make, and plot, you know, your closed model, your open model, and sort of spin it in ways that show that, oh, you know, open models are much closer to where closed models are today, versus last year, where the gap was fairly significant.

[00:04:29] Luca Soldaini: So one thing that, I don't know if I have to convince people in this room, but usually when I give these talks about open models, there is always this background question in people's minds of: why should we use open models? The API argument, you know, it's just an HTTP request to get output from one of the best models out there.

[00:04:53] Luca Soldaini: Why do I have to set up infra and use local models? And there are really two answers. There is the more [00:05:00] researchy answer, which is where my background lies, which is just research.
If you want to do research on language models, research thrives on open models. There is a large swath of research on modeling, on how these models behave, on evaluation and inference, on mechanistic interpretability, that could not happen at all if you didn't have open models. And for AI builders, there are also

[00:05:30] Luca Soldaini: good use cases for using local models. You know, this is a very non-comprehensive slide, but there are some applications where local models just blow closed models out of the water. So like retrieval, it's a very clear example. We might have constraints like Edge AI applications where it makes sense.

[00:05:51] Luca Soldaini: But even just in terms of stability, being able to say this model is not changing under the hood. There's plenty of good cases for [00:06:00] open models. And the community is not just models. I stole this slide from one of the Qwen2 announcement blog posts, but it's super cool to see how much tech exists around open models, on serving them, on making them efficient, and hosting them.

[00:06:18] Luca Soldaini: It's pretty cool. And so, if you think about where the term open comes from, it comes from open source. Really, open models meet the core tenets of open source, specifically when it comes to collaboration: there is truly a spirit, like, through these open models, you can build on top of other people's

[00:06:41] Luca Soldaini: innovation. We see a lot of this even in our own work. You know, as we iterate on the various versions of Olmo, it's not like every time we collect all the data from scratch.
No, the first step is like, okay, what are the cool data sources and datasets people have put [00:07:00] together for language model training?

[00:07:01] Luca Soldaini: Or when it comes to our post-training pipeline, one of the steps is you want to do some DPO, and you use a lot of outputs of other models to improve your preference model. So having an open sort of ecosystem really benefits and accelerates the development of open models.

[00:07:23] The Definition of Open Models

[00:07:23] Luca Soldaini: One thing that we got in 2024, which is not a specific model, but I thought it was really significant, is we got our first open source AI definition. This is from the Open Source Initiative; they've generally been the steward of a lot of the open source licenses when it comes to software, and so they embarked on this journey in trying to figure out, okay, what does an open source license for a model look like?

[00:07:52] Luca Soldaini: The majority of the work is very dry, because licenses are dry. So I'm not going to walk through the license step by [00:08:00] step, but I'm just going to pick out one aspect that is very good, and then one aspect that personally feels like it needs improvement. On the good side, this open source AI license is actually

[00:08:13] Luca Soldaini: very intuitive. If you've ever built open source software and you have some expectations around what open source looks like for software, for AI it sort of matches your intuition. So, the weights need to be freely available, the code must be released with an open source license, and there shouldn't be license clauses that block specific use cases.

[00:08:39] Luca Soldaini: So, under this definition, for example, Llama or some of the Qwen models are not open source, because the license says you can't use this model for this, or it says if you use this model you have to name the output this way, or a derivative needs to be named that way.
Those clauses don't meet the open source [00:09:00] definition, and so those models will not be covered.

[00:09:02] Luca Soldaini: The Llama license will not be covered under the open source definition. It's not perfect. One of the things that, internally, in discussion with OSI, we were sort of disappointed by is the language around data. So you might imagine that an open source AI model means a model where the data is freely available.

[00:09:26] Luca Soldaini: There were discussions around that, but at the end of the day, they decided to go with a softened stance, where they say a model is open source if you provide sufficiently detailed information on how to replicate the data pipeline, so you have an equivalent system.

[00:09:46] Luca Soldaini: It's very fuzzy; I don't like that. An equivalent system is also very fuzzy. And this doesn't take into account the accessibility of the process, right? It might be that you provide enough [00:10:00] information, but this process costs, I don't know, 10 million to do. Now, the open source definition, like any open source license, has never been about accessibility; how accessible software is has never been a factor in open source software.

[00:10:14] Luca Soldaini: I can make a piece of open source software, put it on my hard drive, and never access it. That software is still open source; the fact that it's not widely distributed doesn't change the license. But practically, there are expectations of what we want good open source to be. So it's kind of sad to see that the data component in this license is not as open as some of us would like it to be.

[00:10:40] Challenges for Open Models

[00:10:40] Luca Soldaini: And I linked a blog post that Nathan wrote on the topic that is less rambly and easier to follow through.
One thing that in general, I think it's fair to say about the state of open models in 2024, is that we know a lot more than what we knew in [00:11:00] 2023. Both on the training data, the pre-training data you curate, and on how to do all the post-training, especially on the RL side.

[00:11:10] Luca Soldaini: You know, 2023 was a lot of throwing random darts at the board. In 2024, we have clear recipes that, okay, don't get the same results as a closed lab, because there is a cost in actually matching what they do, but at least we have a good sense of, okay, this is the path to get a state-of-the-art language model.

[00:11:31] Luca Soldaini: I think that one downside of 2024 is that I think we are more research constrained than in 2023. It feels that, you know, the barrier for compute that you need to move innovation along is just rising and rising. So if you go back to this slide, there is now this cluster of models that are sort of released by the

[00:11:57] Luca Soldaini: compute-rich club. Membership is [00:12:00] hotly debated. You know, some people don't want to be called the rich, because it comes with expectations. Some people want to be called rich, but I don't know, there's debate. But these are players that have, you know, 10,000 to 50,000 GPUs at minimum. And so they can do a lot of work and a lot of exploration on improving models that is not very accessible.

[00:12:21] Luca Soldaini: To give you a sense of how I personally think about the research budget for each part of the language model pipeline: on the pre-training side, you can maybe do something with a thousand GPUs; really you want 10,000. And if you want real state of the art, you know, your DeepSeek minimum is like 50,000, and you can scale to infinity.

[00:12:44] Luca Soldaini: The more you have, the better it gets.
Everyone on that side still complains that they don't have enough GPUs. Post-training is a super wide spectrum. You can do as little as with eight GPUs; as long as you're able to [00:13:00] run, you know, a good version of, say, a Llama model, you can do a lot of work there.

[00:13:05] Luca Soldaini: You can scale a lot of the methodology; it just scales with compute, right? If you're interested in, you know, your open replication of what OpenAI's o1 is, you're going to be on the 10K spectrum of GPUs. Inference, you can do a lot with very few resources. Evaluation, you can do a lot with, well, I should say, at least one GPU if you want to evaluate

[00:13:30] Luca Soldaini: open models. But in general, if you care a lot about interventions to do on these models, which is my preferred area of research, then, you know, the resources that you need are quite significant. Yeah. One other trend that has emerged in 2024 is this cluster of fully open models.

[00:13:54] Luca Soldaini: So Olmo, the model that we built at AI2, being one of them. And, you know, it's nice [00:14:00] that it's not just us; there's a cluster of other, mostly research efforts, who are working on this. So it's good to give you a primer of what fully open means. Fully open, the easy way to think about it, is instead of just releasing a model checkpoint that you run, you release a full recipe, so that other people

[00:14:24] Luca Soldaini: working in that space can pick and choose whatever they want from your recipe and create their own model, or improve on top of your model. You're giving out the full pipeline and all the details there, instead of just the end output. So I pulled up a screenshot from our recent MoE model.

[00:14:43] Luca Soldaini: And for this model, for example, we released the model itself.
The data it was trained on, the code for both training and inference, all the logs that we got through the training run, as well as every intermediate checkpoint. And the fact that you release different parts of the pipeline [00:15:00] allows others to do really cool things.

[00:15:02] Luca Soldaini: So for example, this tweet from early this year, from folks at Nous Research: they used our pre-training data to do a replication of the BitNet paper in the open. So they took just the initial part of our pipeline and then did their thing on top of it. It goes both ways. [00:15:21] For example, for the OLMo 2 model, a lot of our pre-training data for the first stage of pre-training was from this DCLM initiative, which was led by folks from a variety of institutions. It was a really nice group effort. And, you know, for us it was nice to be able to say, okay, the state of the art in terms of what is done in the open has improved.

[00:15:46] AI2 Models - OLMo, Molmo, Pixmo etc

[00:15:46] Luca Soldaini: We don't have to do all this work from scratch to catch up to the state of the art; we can just take it directly, integrate it, and do our own improvements on top of that. I'm going to spend a few minutes doing a [00:16:00] shameless plug for some of our fully open recipes, so indulge me in this. [00:16:05] A few things that we released this year: as I was mentioning, there's the OLMoE model, which I think still is the state-of-the-art MoE model in its size class, and it's also fully open, so every component of this model is available. We released a multimodal model called Molmo. Molmo is not just a model; it's a full recipe of how you go from a text-only model to a multimodal model, and we applied this recipe on top of Qwen checkpoints, on top of OLMo checkpoints, as well as on top of OLMoE. [00:16:37] And I think there's been a replication doing that on top of Mistral as well.
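The components listed for a "fully open" release can be summarized as a simple checklist. The sketch below is purely illustrative (the field names are hypothetical, not AI2's actual release tooling); it just makes the definition from the talk concrete: a weights-only drop is "open weights", while "fully open" means every artifact in the pipeline is public.

```python
# Illustrative checklist of what "fully open" means per the talk:
# weights, training data, code, logs, and intermediate checkpoints.
# Field names are hypothetical, not any real release tooling.
from dataclasses import dataclass, fields

@dataclass
class ReleaseManifest:
    weights: bool = False                   # final model checkpoint
    training_data: bool = False             # the data the model was trained on
    training_code: bool = False             # code for training
    inference_code: bool = False            # code for inference
    training_logs: bool = False             # logs from the training run
    intermediate_checkpoints: bool = False  # every intermediate checkpoint

def is_fully_open(m: ReleaseManifest) -> bool:
    """A release is 'fully open' only if every artifact is public."""
    return all(getattr(m, f.name) for f in fields(m))

weights_only = ReleaseManifest(weights=True)
fully_open = ReleaseManifest(**{f.name: True for f in fields(ReleaseManifest)})

print(is_fully_open(weights_only))  # → False (just a checkpoint)
print(is_fully_open(fully_open))    # → True (the whole recipe)
```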
On the post-training side, we recently released Tülu 3. Same story: this is a recipe for how you go from a base model to a [00:17:00] state-of-the-art post-trained model. We used the Tülu recipe on top of OLMo, on top of Llama, and then there's been an open replication effort to do that on top of Qwen as well. [00:17:02] It's really nice to see that, when your recipe is kind of turnkey, you can apply it to different models and it kind of just works. And finally, the last thing we released this year was OLMo 2, which so far is the best state-of-the-art fully open language model. It sort of combines aspects from all three of these previous models: [00:17:22] what we learned on the data side from OLMoE, and what we learned about making models that are easy to adapt from the Molmo project and the Tülu project.

I will close with a little bit of reflection on the ways this ecosystem of open models is not all roses, not all happy. It feels like, day to day, it's always in peril. [00:17:44] And, you know, I talked a little bit about the compute issues that come with it, but it's really not just compute. One thing that is on top of my mind is that, due to the environment and, you know, growing feelings about how AI is treated, [00:18:00] it's actually harder to get access to a lot of the data that was used to train a lot of the models up to last year.

[00:18:06] Luca Soldaini: So this is a screenshot from really fabulous work from Shayne Longpre, who I think is in Europe, about diminishing access to data for language model pre-training. What they did is they went through every snapshot of Common Crawl. Common Crawl is this publicly available scrape of a subset of the internet. [00:18:29] And they looked at, for any given website, whether a website that was accessible in, say, 2017 was still accessible or not in 2024.
And what they found is that, as a reaction to the existence of closed models like ChatGPT or Claude, a lot of content owners have blanket-blocked any type of crawling of their websites. [00:18:57] And this is something that we see also internally at [00:19:00] AI2. One project that we started this year: we wanted to understand, if you're a good citizen of the internet, and you crawl following the norms and policies that have been established in the last 25 years, what can you crawl? [00:19:17] And we found that there are a lot of websites where the norms of how you express a preference for whether to crawl your data or not are broken. A lot of people will block a lot of crawling but do not advertise that in robots.txt; you can only tell that they're blocking you from crawling when you try doing it. [00:19:37] Sometimes you can't even crawl the robots.txt to check whether you're allowed or not. And then, for a lot of websites, there are all these technologies that have historically existed to make website serving easier, such as Cloudflare or DNS providers, that are now being repurposed for blocking AI or any type of crawling [00:20:00] in a way that is very opaque to the content owners themselves. [00:20:04] So, you know, you go to these websites, you try to access them, and they're not available, and you get a feeling like, oh, something changed on the DNS side that is blocking this, and likely the content owner has no idea. They're just using Cloudflare for better, you know, load balancing. [00:20:25] And this is something that was sort of sprung on them with very little notice. And I think the problem is that this blocking really impacts people in different ways.
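The "good citizen" crawling norm being discussed here is the Robots Exclusion Protocol: before fetching a page, a crawler checks the site's robots.txt for a rule matching its user agent. A minimal sketch using only Python's standard-library parser (the robots.txt body and bot names below are made up for illustration):

```python
# Sketch of the "good citizen" check: parse a robots.txt and ask whether a
# given user agent may fetch a given URL. Uses only the Python stdlib.
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt body; real sites serve this at /robots.txt
# (when they serve it at all -- the talk notes many sites block crawlers
# without ever advertising it here).
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

User-agent: GPTBot
Disallow: /
"""

def is_allowed(robots_body: str, user_agent: str, url: str) -> bool:
    """Return True if `user_agent` may fetch `url` under this robots.txt."""
    parser = RobotFileParser()
    parser.parse(robots_body.splitlines())
    return parser.can_fetch(user_agent, url)

print(is_allowed(ROBOTS_TXT, "MyResearchBot", "https://example.com/blog"))       # → True
print(is_allowed(ROBOTS_TXT, "MyResearchBot", "https://example.com/private/x"))  # → False
print(is_allowed(ROBOTS_TXT, "GPTBot", "https://example.com/blog"))              # → False
```

The failure modes Luca describes are exactly the cases this check cannot see: a permissive (or unreachable) robots.txt combined with IP- or CDN-level blocking that only shows up when you actually attempt the fetch.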
It disproportionately helps companies that have a head start, which are usually the closed labs, and it hurts incoming newcomer players, who either have to do things in a sketchy way, or are never going to get content that the closed labs might have. [00:20:54] So there was a lot of coverage of this. I'm going to plug Nathan's blog post again; I think [00:21:00] the title of this one is very succinct: before thinking about running out of training data, we're actually running out of open training data. And so, if we want better open models, this should be on top of our mind.

[00:21:13] Regulation and Lobbying

[00:21:13] Luca Soldaini: The other thing that has emerged is that there are strong lobbying efforts to define any kind of AI as new and extremely risky. And I want to be precise here. The problem is not considering the risks of this technology; every technology has risks that should always be considered. [00:21:37] The thing that is, to me, disingenuous, is just putting this AI on a pedestal and calling it an unknown alien technology that has new and undiscovered potential to destroy humanity. When in reality, all the dangers, I think, are rooted in [00:22:00] dangers that we know from the existing software industry, or existing issues that come up when using software in a lot of sensitive domains, like medical areas. [00:22:13] And I also noticed a lot of efforts that have been going on in trying to make these open models safe. I pasted one here from AI2, but there's actually a lot of work that has been going on on, okay, if you're distributing this model openly, how do you make it safe? [00:22:31] What's the right balance between accessibility of open models and safety?
And then there's also this annoying brushing under the rug of concerns that are then proved to be unfounded. You know, if you remember the beginning of this year, it was all about the bio risk of these open models. [00:22:48] The whole thing fizzled because, finally, there's been rigorous research, not just this paper from the Cohere folks, but rigorous research showing [00:23:00] that this is really not a concern that we should be worried about. Again, there are a lot of dangerous uses of AI applications, but this one was just a lobbying ploy to make things sound scarier than they actually are. [00:23:15]

So I've got to preface this part: this is my personal opinion, not my employer's. But I look at things like SB 1047 from California, and I think we kind of dodged a bullet on this legislation. You know, the open source community, a lot of the community, came together at sort of the last minute and made a very good effort to explain all the negative impacts of this bill. [00:23:43] I feel like there's a lot of excitement around building these open models, or researching these open models. And lobbying is not sexy, it's kind of boring, but it's sort of necessary to make sure that this ecosystem can really [00:24:00] thrive. That's the end of the presentation. I have some links and emails, the standard thing, in case anyone wants to reach out, and if folks have questions or anything they want to discuss.

[00:24:13] Luca Soldaini: Is there an open floor? I think we have Sophia.

[00:24:16] swyx: One very important open model that we haven't covered is Mistral, so we'll ask her about it. Yeah, yeah. Well, it's nice to have the Mistral person recap the year in Mistral.
But while Sophia gets set up, does anyone have thoughts or questions about the progress in this space?

[00:24:32] Questions - Incentive Alignment

[00:24:32] swyx: Do you all have questions?

[00:24:34] Question: I'm very curious how we should build incentives to build open models, things like François Chollet's ARC Prize and other initiatives like that. What is your opinion on how we should better align incentives in the community so that open models stay open?

[00:24:49] Luca Soldaini: The incentive bit is really hard. [00:24:51] It's something that we actually think a lot about internally, because building open models is risky. [00:25:00] It's very expensive, and so people don't want to take risky bets. I think challenges like that are very valid approaches to it. [00:25:13] And in general, for any kind of effort to participate in those challenges, if we can promote doing that on top of open models, and really lean into this multiplier effect, I think that is a good way to go. It would help if there were more money for [00:25:35] research efforts around open models. There's a lot of investment in companies that, at the moment, are releasing their models in the open, which is really cool, but it's usually more because of commercial interest than wanting to support these open models in the long term. It's a really hard problem, because everyone is operating [00:26:00] at their local maximum, right? In ways that really optimize their position on the market. The global maximum is harder to achieve.

[00:26:11] Question 2: Can I ask one question?
No.

[00:26:12] Luca Soldaini: Yeah.

[00:26:13] Question 2: So I think one of the gaps between the closed and open source models is multilinguality. The closed source models like ChatGPT work pretty well on low-resource languages, which is not the same for the open source models, right? [00:26:27] So is it in your plan to improve on that?

[00:26:32] Luca Soldaini: I think, in general, yes. I think we'll see a lot of improvements there in 2025. There are groups on the smaller side that are already working on better crawl support and multilingual support. But what I'm trying to say here is that you really want experts [00:26:54] who are actually in those countries, who speak those languages, to [00:27:00] participate in the international community. To give you a very easy example: I'm originally from Italy, and I think I'm terribly equipped to build a model that works well in Italian, because one of the things you need is the knowledge of, okay, how do I access libraries, or content that is from this region, that covers this language? [00:27:23] I've been in the US long enough that I no longer know. So I think the efforts that folks in Central Europe, for example, are doing, tapping into regional communities to bring in collaborators from those areas, are going to be very crucial for getting there.

[00:27:46] Mistral intro

[00:27:46] Sophia Yang: Hi everyone. Yeah, I'm super excited to be here to talk to you guys about Mistral. A really short and quick recap of what we have done, what kind of models and products we have released in the [00:28:00] past year and a half.
Most of you already know that we are a small startup, founded about a year and a half ago in Paris, in May 2023, by our three co-founders. In September 2023 we released our first open source model, Mistral 7B. How many of you have used or heard about Mistral 7B?

[00:28:24] Sophia Yang: Hey, pretty much everyone. Thank you. Yeah, it's pretty popular, and our community really loved this model. In December 2023, we released another popular model with the MoE architecture: Mixtral 8x7B. [00:28:46] Going into this year, you can see we have released a lot of things. First of all, in February 2024, we released Mistral Small, Mistral Large, and Le Chat, which is our chat interface; I will show it to you in a little bit. We released an embedding model for [00:29:00] converting your text into embedding vectors, and all of our models are available on the big cloud providers: you can use our models on Google Cloud, AWS, Azure, Snowflake, IBM. [00:29:16] So it's very useful for enterprises who want to use our models through the cloud. And in April and May this year, we released another powerful open source MoE model, Mixtral 8x22B. We also released our first code model, Codestral, which is amazing at 80-plus languages. And then we provided a fine-tuning service for customization, [00:29:41] because we know the community loves to fine-tune our models, so we provide a very nice and easy option for you to fine-tune our models on our platform. We also released our fine-tuning code base, called mistral-finetune. It's open source, so feel free to take a look. And [00:29:58] more models: [00:30:00] from July to November this year, we released many, many other models. First of all, two new best-in-class small models.
We have Ministral 3B, great for deploying on edge devices, and we have Ministral 8B; if you used to use Mistral 7B, Ministral 8B is a great replacement, with much stronger performance than Mistral 7B. [00:30:25] We also collaborated with NVIDIA and open-sourced another model, Mistral NeMo 12B, another great model. And just a few weeks ago, we updated Mistral Large to version 2, with updated state-of-the-art features and really great function calling capabilities; it supports function calling natively. [00:30:45] And we released two multimodal models: Pixtral 12B, which is open source, and Pixtral Large, an amazing model for not only understanding images, but also great at text understanding. [00:31:00] A lot of image models are not so good at textual understanding, but Pixtral Large and Pixtral 12B are good at both image understanding and textual understanding. [00:31:09] And of course, we have models for research: Codestral Mamba, which is built on the Mamba architecture, and Mathstral, great for working with math problems. So yeah, that's another model.

[00:31:29] Sophia Yang: Here's another view of our model lineup. We have several premier models, which means these models are mostly available through our API. I mean, all of the models are available through our API, except for Ministral 3B. But the premier models have a special license, the Mistral Research License: you can use them for free for exploration, but if you want to use them for enterprise or production use, you will need to purchase a license [00:32:00] from us. [00:32:00] So on the top row here, we have Ministral 3B and 8B as premier models; Mistral Small, best for low-latency use cases; Mistral Large, great for your most sophisticated use cases; Pixtral Large, the frontier-class multimodal model.
And, and we have Coastral for great for coding and then again, MrEmbedding model.[00:32:22] Sophia Yang: And The bottom, the bottom of the slides here, we have several Apache 2. 0 licensed open way models. Free for the community to use, and also if you want to fine tune it, use it for customization, production, feel free to do so. The latest, we have Pixtros 3 12b. We also have Mr. Nemo mum, Coastal Mamba and Mastro, as I mentioned, and we have three legacy models that we don't update anymore.[00:32:49] Sophia Yang: So we recommend you to move to our newer models if you are still using them. And then, just a few weeks ago, [00:33:00] we did a lot of, uh, improvements to our code interface, Lachette. How many of you have used Lachette? Oh, no. Only a few. Okay. I highly recommend Lachette. It's chat. mistral. ai. It's free to use.[00:33:16] Sophia Yang: It has all the amazing capabilities I'm going to show you right now. But before that, Lachette in French means cat. So this is actually a cat logo. If you You can tell this is the cat eyes. Yeah. So first of all, I want to show you something Maybe let's, let's take a look at image understanding.[00:33:36] Sophia Yang: So here I have a receipts and I want to ask, just going to get the prompts. Cool. So basically I have a receipt and I said I ordered I don't know. Coffee and the sausage. How much do I owe? Add a 18 percent tip. So hopefully it was able to get the cost of the coffee and the [00:34:00] sausage and ignore the other things.[00:34:03] Sophia Yang: And yeah, I don't really understand this, but I think this is coffee. It's yeah. Nine, eight. And then cost of the sausage, we have 22 here. And then it was able to add the cost, calculate the tip, and all that. Great. So, it's great at image understanding, it's great at OCR tasks. So, if you have OCR tasks, please use it.[00:34:28] Sophia Yang: It's free on the chat. It's also available through our API. And also I want to show you a Canvas example. 
A lot of you may have used canvas with other tools before, but with Le Chat it's completely free. Here, I'm asking it to create a canvas that uses PyScript to execute Python in my browser. [00:34:51] Let's see if it works. Import this. Okay, so basically it's executing [00:35:00] Python here, exactly what we wanted. And the other day, I was trying to ask Le Chat to create a game for me. Let's see if we can make it work. Yeah, the Tetris game. Yep. Let's just get one row. Maybe. Oh no. Okay. All right. You get the idea. I failed my mission. Okay. Here we go. Yay! Cool. Yeah. So as you can see, Le Chat can write the code for a simple game pretty easily, and you can ask Le Chat to explain the code, or make updates however you like. Another example: there is a bar here I want to move. [00:35:48] Okay, great, okay. And let's go back to another one. Yeah, we also have web search capabilities: you can [00:36:00] ask, what's the latest AI news? Image generation is pretty cool: generate an image about researchers. Okay. In Vancouver? Yeah, it's Black Forest Labs' FLUX Pro. Again, this is free. Oh, cool. [00:36:19] I guess researchers here are mostly from the University of British Columbia. That's smart. Yeah. So this is Le Chat. Please feel free to use it, and let me know if you have any feedback. We're always looking for improvements, and we're going to release a lot more powerful features in the coming years. [00:36:37] Thank you.

Get full access to Latent Space at www.latent.space/subscribe
In this episode, Bill sits down with Paul Savill, Global Practice Leader of Networking and Edge Compute at Kyndryl, a company that builds and modernizes the world's critical technology systems. Paul provides insights on the cultural challenges between IT and OT teams, the implementation of edge computing across industries, and innovative use cases including robotics in retail, AI deployment, and private 5G networks.---------Key Quotes:“To really leverage AI, you have to start with a data foundation. Are you collecting enough information? Is the information that you're collecting correct? Have you got good data integrity? And do you have enough of it over time?” “The value of private 5G networks lies in empowering enterprises to innovate within their own environments.” “The biggest cultural challenge is bridging the gap between IT and OT—it's a dynamic of trust and risk.”--------Timestamps: (01:11) How Paul got started in tech (05:43) Challenges in IT and OT integration (15:08) Industry use cases and innovations (16:45) Bitcoin mining and edge computing (21:10) AI models and custom solutions (26:33) Global trends and private 5G--------Sponsor: Edge solutions are unlocking data-driven insights for leading organizations. With Dell Technologies, you can capitalize on your edge by leveraging the broadest portfolio of purpose-built edge hardware, software and services. Leverage AI where you need it; simplify your edge; and protect your edge to generate competitive advantage within your industry. Capitalize on your edge today with Dell Technologies.--------Credits: Over the Edge is hosted by Bill Pfeifer, and was created by Matt Trifiro and Ian Faison. Executive producers are Matt Trifiro, Ian Faison, Jon Libbey and Kyle Rusca. The show producer is Erin Stenhouse. The audio engineer is Brian Thomas. Additional production support from Elisabeth Plutko.--------Links: Follow Paul on LinkedIn. Follow Bill on LinkedIn.
In this episode, we talked with Chris Baird, CEO of OptConnect, about the evolving IoT landscape and the strategies driving enterprise adoption. Our conversation explored key growth drivers in IoT, the challenges of scaling connectivity, and the technologies poised to redefine the industry, including Edge AI and the promise of 6G. Chris shared insights into how regulatory and environmental factors are accelerating IoT adoption, the impact of turnkey solutions on reducing total cost of ownership, and the pivotal role of simplicity in scaling IoT deployments. Key Insights: What drives IoT growth? Regulatory shifts, environmental needs, and cellular ubiquity fuel adoption. How do you cut IoT costs? Simplified plug-and-play solutions reduce complexity and hidden expenses. Why has 5G underdelivered? 6G offers transformative potential in cost, latency, and scalability. What's next for IoT? Edge AI enables smarter systems and expands use cases like healthcare and security. Where is IoT thriving? Retail, water management, and medical devices show rapid adoption. IoT ONE Database: https://www.iotone.com/case-studies The Industrial IoT Spotlight podcast is produced by Asia Growth Partners (AGP): https://asiagrowthpartners.com/
My guest in this episode is Elizabeth Samara-Rubio, Chief Business Officer of SiMa.ai, which offers an MLSoC and platform for Edge AI and is one of the ten hottest semiconductor startups of 2024. In our conversation, we dive into how to help customers build business cases for GenAI, along with use cases and business models. In her previous role, Elizabeth led GTM and business development at Amazon for the worldwide AI specialist organization in AWS, so we talk about how Amazon's famous Working Backwards methodology applies to go-to-market activities. Elizabeth started her career at HP as a product manager in the Hardware Services and Support Division, and she shares some interesting lessons from that time about aligning with stakeholders. We also talk about the market and industry transformation she drove as Managing Director for Strategy and Consulting at Accenture. This episode is short and packed with insights, which I know is what you will love. If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcast app or on YouTube. It's the best way to not miss future episodes, and it helps the podcast a lot. Chapters: 01:35 Introduction to AI and Generative AI at AWS 04:42 Leading Customer Conversations on Generative AI 07:26 Working Backwards: Amazon's Approach 10:43 Generative AI: Use Cases and Business Models 13:36 Investment Cases for Generative AI 16:28 Clock Speed: Understanding Stakeholder Dynamics 19:46 Transitioning from HP to Accenture: Lessons Learned 22:39 Defining Market Attractiveness and Pockets of Possibility 25:32 Machine Learning at the Edge: Applications and Implications 28:21 Personal Insights: Influences and High Standards
Lattice Developers Conference Key Insights! What's the latest in Edge AI innovation from Lattice Semiconductor? Hosts Daniel Newman and Patrick Moorhead are joined by Lattice Semiconductor's CEO, Ford Tamer, Chief Strategy and Marketing Officer, Esam Elashmawi, and CVP of Product Marketing and Planning, Dan Mansur, for a conversation on Lattice Semiconductor's strategic growth, key announcements, and unique market position in the Edge AI sector on this episode of Six Five On The Road at Lattice DevCon 2024. Highlights include ⤵️ Ford Tamer's insights and experiences since joining Lattice. Lattice's growth strategy and the strengths driving this growth. Key product announcements and solutions introduced at DevCon. The unique position of Lattice FPGAs in the burgeoning Edge AI market. An introduction to Nexus 2 and continued investment in small FPGAs.
Edge AI is revolutionizing the way we interact with technology in everyday life. By allowing devices such as smartphones, security cameras, and even autonomous cars to process data locally, without relying on cloud servers, this innovation delivers faster responses and greater privacy. To talk about this subject, I welcome to the Canaltech Podcast Daniel Vettorazi, an artificial intelligence specialist. Also in this episode: Senac RJ and AWS open 25 spots in a free cloud computing course; the end of Windows 10 could move R$74 billion and transform Brazil's PC market; Pix overtakes cash as the most-used payment method in Brazil; Xiaomi outsells rivals with the best-selling flagships of the new generation; a Cybertruck accident leaves three dead and one injured in the US. Visit the Canaltech website. Get Canaltech news on WhatsApp. Follow Canaltech on all social networks at @Canaltech. Contact us by email: podcast@canaltech.com.br. Check out Canaltech Ofertas. Subscribe to the Canaltech newsletter. This episode was written and presented by Gustavo Minari, with reporting by Paulo Amaral, Emanuele Almeida, Jones Oliveira, Wendel Martins, and Bruno De Blasi. Editing by Jully Cruz. The soundtrack is by Guilherme Zomer, and this episode's cover art is by Erick Teixeira. See omnystudio.com/listener for privacy information.
Broadcom's Chief Architect for SD-WAN and SASE, Guru Belur, discusses how edge computing and AI are revolutionizing network infrastructure, transforming traditional connectivity paradigms, and how intelligent networking can bridge AI models across diverse environments. Can SD-WAN evolve to support next-generation AI applications? In this Executives at the Edge episode, host Pascal Menezes explores these topics... Read More The post Edge AI Networking: SD-WAN in the AI Age appeared first on MEF.
This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to know more. In this episode of the Eye on AI podcast, we explore the cutting-edge world of semiconductor innovation and its role in the future of artificial intelligence with Kai Beckmann, CEO of Merck KGaA. Kai takes us on a journey into the heart of semiconductor manufacturing, revealing how next-generation chips are driving the AI revolution. From the complex process of creating advanced chips to the increasing demands of AI on semiconductor technology, Kai shares how Merck is pioneering materials science to unlock unprecedented levels of computational power. Throughout the conversation, Kai explains how AI's growth is reshaping the semiconductor industry, with innovations like edge AI, heterogeneous integration, and 3D chip architectures pushing the boundaries of performance. He highlights how Merck is using artificial intelligence to accelerate material discovery, reduce experimentation cycles, and create smarter, more efficient processes for the chips that power everything from smartphones to data centers. Kai also delves into the global landscape of semiconductor manufacturing, discussing the challenges of supply chains, the cyclical nature of the industry, and the rapid technological advancements needed to meet AI's demands. He explains why the semiconductor sector is entering the "Age of Materials," where breakthroughs in materials science are enabling the next wave of AI-driven devices. Like, subscribe, and hit the notification bell to stay tuned for more episodes! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. 
Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction (02:48) Merck KGaA (05:21) Foundations of Semiconductor Manufacturing (07:57) How Chips Are Made (09:24) Exploring Materials Science (13:59) Growth and Trends in the Semiconductor Industry (15:44) Semiconductor Manufacturing (17:34) AI's Growing Demands on Semiconductor Tech (20:34) The Future of Edge AI (22:10) Using AI to Disrupt Material Discovery (24:58) How AI Accelerates Innovation in Semiconductors (27:32) Evolution of Semiconductor Fabrication Processes (30:08) Advanced Techniques: Chiplets, 3D Stacking, and Beyond (32:29) Merck's Role in Global Semiconductor Innovation (34:03) Major Markets for Semiconductor Manufacturing (37:18) Challenges in Reducing Latency and Energy Consumption (40:21) Exploring New Conductive Materials for Efficiency
Can you trick the AI model running locally on a security camera into thinking you're a bird (and not a burglar)? We sat down with Kasimir Schulz, principal security researcher at HiddenLayer, to discuss Edge AI, and to learn how AI running on your device (at the "edge" of the network) can be compromised with something like a QR code. Learn more about your ad choices. Visit podcastchoices.com/adchoices
This episode is sponsored by Shopify. Shopify is a commerce platform that allows anyone to set up an online store and sell their products. Whether you're selling online, on social media, or in person, Shopify has you covered on every base. With Shopify you can sell physical and digital products. You can sell services, memberships, ticketed events, rentals and even classes and lessons. Sign up for a $1 per month trial period at http://shopify.com/eyeonai

In this episode of the Eye on AI podcast, Andrew D. Feldman, Co-Founder and CEO of Cerebras Systems, unveils how Cerebras is disrupting AI inference and high-performance computing. Andrew joins Craig Smith to discuss the groundbreaking wafer-scale engine, Cerebras' record-breaking inference speeds, and the future of AI in enterprise workflows. From designing the fastest inference platform to simplifying AI deployment with an API-driven cloud service, Cerebras is setting new standards in AI hardware innovation. We explore the shift from GPUs to custom architectures, the rise of large language models like Llama and GPT, and how AI is driving enterprise transformation. Andrew also dives into the debate over open-source vs. proprietary models, AI's role in climate mitigation, and Cerebras' partnerships with global supercomputing centers and industry leaders. Discover how Cerebras is shaping the future of AI inference and why speed and scalability are redefining what's possible in computing. Don't miss this deep dive into AI's next frontier with Andrew Feldman. Like, subscribe, and hit the notification bell for more episodes!

Stay Updated:
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

(00:00) Intro to Andrew Feldman & Cerebras Systems
(00:43) The rise of AI inference
(03:16) Cerebras' API-powered cloud
(04:48) Competing with NVIDIA's CUDA
(06:52) The rise of Llama and LLMs
(07:40) OpenAI's hardware strategy
(10:06) Shifting focus from training to inference
(13:28) Open-source vs proprietary AI
(15:00) AI's role in enterprise workflows
(17:42) Edge computing vs cloud AI
(19:08) Edge AI for consumer apps
(20:51) Machine-to-machine AI inference
(24:20) Managing uncertainty with models
(27:24) Impact of U.S.–China export rules
(30:29) U.S. innovation policy challenges
(33:31) Developing wafer-scale engines
(34:45) Cerebras' fast inference service
(37:40) Global partnerships in AI
(38:14) AI in climate & energy solutions
(39:58) Training and inference cycles
(41:33) AI training market competition
Ericsson held its Analyst Conference on Nov. 7, 2024, at its D-15 Labs Innovation Center in Santa Clara, CA. It was a packed event with many interesting sessions featuring speakers from Ericsson and scores of its partners, including Verizon, AT&T, T-Mobile, Meta, AWS, Google Cloud, Dell, Intel, and others. The theme of the event was "Programmable High Performance Networks." As expected, the emphasis was on programmability: scaling the network up and down to fit the use cases, network scenarios, apps, and service needs. In this episode, Leonard Lee of Next Curve and I discuss Network APIs, monetization, mechanisms that distribute the "value" across the value chain, and how Ericsson is accelerating it through its JV with operators. We also delve into the role of AI, Edge AI, rApps, and Ericsson's pilot of making cellular the primary connectivity for enterprises, eliminating wired connectivity.
My podcast guests this week are Jack Ferrari and Johanna Pingel from MathWorks! We discuss the trends and technologies driving the adoption of edge AI applications, the common challenges associated with edge AI, and the roles that machine learning model maintenance, over-the-air updates, and on-device training will play in the future of edge AI applications.
My guests today are Andrew Homan and Chris Miller. Andrew has spent two decades at Maverick Capital and is a managing partner at Maverick Silicon, where he leads the firm's technology investments. Chris is a professor at Tufts and the author of the New York Times best-selling book "Chip War," which details the geopolitical battle to control the semiconductor industry. Together we get into a comprehensive discussion on the semiconductor ecosystem and the silicon backbone of our digital age. Andrew and Chris share insights on how venture capital is navigating this complex industry and what it means for the future of computing. We discuss the AI-driven revolution in chip demand, the geopolitics of semiconductor manufacturing, and the next wave of innovation beyond NVIDIA. Please enjoy my conversation with Andrew Homan and Chris Miller.

For the full show notes, transcript, and links to mentioned content, check out the episode page here.

-----

This episode is brought to you by Ramp. Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Ramp is the fastest growing FinTech company in history and it's backed by more of my favorite past guests (at least 16 of them!) than probably any other company I'm aware of. It's also notable that many best-in-class businesses use Ramp—companies like Airbnb, Anduril, and Shopify, as well as investors like Sequoia Capital and Vista Equity. They use Ramp to manage their spending, automate tedious financial processes, and reinvest saved dollars and hours into growth. At Colossus and Positive Sum, we use Ramp for exactly the same reason. Go to Ramp.com/invest to sign up for free and get a $250 welcome bonus.

-----

This episode is brought to you by Tegus, where we're changing the game in investment research.
Step away from outdated, inefficient methods and into the future with our platform, proudly hosting over 100,000 transcripts – with over 25,000 transcripts added just this year alone. Our platform grows eight times faster and adds twice as much monthly content as our competitors, putting us at the forefront of the industry. Plus, with 75% of private market transcripts available exclusively on Tegus, we offer insights you simply can't find elsewhere. See the difference a vast, quality-driven transcript library makes. Unlock your free trial at tegus.com/patrick.

-----

Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes. Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here. Follow us on Twitter: @patrick_oshag | @JoinColossus

Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com).

Show Notes:
(00:00:00) Welcome to Invest Like the Best
(00:06:28) Intel's Historical Success and Current Challenges
(00:08:22) The Paradigm Shift in Technology
(00:11:44) AI and the Future of Semiconductors
(00:19:02) Political and Economic Considerations in Chip Manufacturing
(00:29:28) Investment Perspectives and Market Dynamics
(00:45:46) The Mobile Paradigm Shift: Apple vs. AT&T
(00:46:49) Corporate Strategies in the AI Transition
(00:48:02) NVIDIA's Dominance and Potential Vulnerabilities
(00:51:27) The Future of Edge AI
(00:57:02) Powering the Data Centers of Tomorrow
(00:59:42) The Semiconductor Startup Ecosystem
(01:05:08) The Role of Government and Global Dynamics
(01:07:28) Investment Strategies and Market Dynamics
(01:10:57) The Future of the Semiconductor Industry
(01:25:52) The Kindest Thing Anyone Has Ever Done For Chris And Andrew
Send us a text

Brandon Sawalich, President and CEO of Starkey, joins us to share his incredible journey both on the professional front and in his personal life. The conversation brings to light the pride he feels for his son William's progress in the NASCAR Xfinity Series, a journey fueled by discipline and dedication. Brandon also opens up about the profound impact of Starkey's founder, Bill Austin, whose mentorship continues to guide him in leading the company while preserving Bill's remarkable legacy.

Moving beyond personal stories, we explore Starkey's transformation into a forward-thinking tech company. Brandon details how groundbreaking innovations like Edge AI, featuring AI and sensors, are redefining hearing aid technology. With advancements such as accelerometers for fall detection and vestibular diagnostics for fall prevention, the importance of educating audiologists and hearing professionals becomes paramount. Despite some initial skepticism, these pioneering features bring hope for significantly improved patient outcomes, emphasizing the potential of technology to enhance lives.

We tackle the evolving landscape of the hearing aid industry, focusing on the rise of over-the-counter (OTC) hearing aids. Brandon reflects on how OTC products have reshaped public perception and awareness. He candidly discusses the critical role of professional service, fitting, and maintenance in achieving optimal hearing outcomes, while cautioning against misleading marketing tactics. This engaging discussion highlights how innovation, mentorship, and patient care remain central to navigating the challenges and opportunities within the hearing industry.

While we know all hearing aids amplify sounds to help you hear them, Starkey Genesis AI uses cutting-edge technology designed to help you understand them, too. Click here to find a provider near you and test drive Starkey Genesis AI!
Support the show

Connect with the Hearing Matters Podcast Team
Email: hearingmatterspodcast@gmail.com
Instagram: @hearing_matters_podcast
Twitter: @hearing_mattas
Facebook: Hearing Matters Podcast
How is AI transforming security at the edge? In this episode, Bill sits down with Inna Ushakova, Co-Founder and CEO at AI EdgeLabs, to explore the evolving landscape of edge security and its unique challenges compared to cloud and data center security. They discuss the constraints and risks specific to edge environments, the need for lightweight, AI-driven security solutions, and the current state of edge security across various industries. Together, they offer insights into how proactive and automated security measures are reshaping security practices at the edge.

Key Quotes:
"If you talk about adopting AI and machine learning, me personally, I don't really consider this as a huge revolution because it's already a must-have for us. It's not something that we have the privilege to implement or not to implement. It should be already done. You should have it already."
"There should be some shift from reactive to proactive security."

Timestamps:
(01:54) Inna's journey into tech
(02:17) Understanding security across environments
(04:48) Challenges in edge security
(06:45) Current threats and vulnerabilities
(09:04) Implementing effective edge security
(14:05) AI-driven cybersecurity
(27:48) Security regulations at the edge
(32:40) Edge security across industries
(38:59) Future of edge security

Sponsor:
Over the Edge is brought to you by Dell Technologies to unlock the potential of your infrastructure with edge solutions. From hardware and software to data and operations, across your entire multi-cloud environment, we're here to help you simplify your edge so you can generate more value. Learn more by visiting dell.com/edge for more information or click on the link in the show notes.

Credits:
Over the Edge is hosted by Bill Pfeifer, and was created by Matt Trifiro and Ian Faison. Executive producers are Matt Trifiro, Ian Faison, Jon Libbey and Kyle Rusca. The show producer is Erin Stenhouse. The audio engineer is Brian Thomas.
Additional production support from Elisabeth Plutko.

Links:
Follow Bill on LinkedIn
Follow Inna on LinkedIn
Send us a text

Discover the transformative power of Starkey Edge AI hearing aids, as audiology veteran Dr. Douglas L. Beck joins us to share his revolutionary experience with this cutting-edge technology. With over four decades in the field, Dr. Beck's firsthand account of how these hearing aids leverage deep neural networks to enhance sound clarity, especially in noisy environments, offers a unique insight into the future of auditory technology. From bustling football stadiums to intimate dinners, Dr. Beck's journey reveals the profound impact of sound clarity over mere loudness, highlighting the advanced signal-to-noise processing capabilities of Starkey Edge AI.

Join us as Dr. Beck uncovers how his experiences with these innovative hearing aids have reshaped his auditory world, offering seamless Bluetooth connectivity and exceptional noise reduction. Fitted by Bill Austin, Dr. Beck explores the intricacies of his unique hearing profile and the importance of precise fitting. This episode is packed with Dr. Beck's compelling insights into how Starkey Edge AI is setting a new standard in the auditory experience, making it a must-listen for anyone interested in the future of hearing technology.

Support the show

Connect with the Hearing Matters Podcast Team
Email: hearingmatterspodcast@gmail.com
Instagram: @hearing_matters_podcast
Twitter: @hearing_mattas
Facebook: Hearing Matters Podcast
Starkey's latest hearing technology has arrived! Chief Technology Officer and EVP of Engineering Achin Bhowmik, Ph.D., sits down with Dave to share what excites him most about Edge AI. The highlights include: 30% more accuracy at identifying speech, 6 dB additional reduction in low-level noise, and 100 times the processing power, all while maintaining Starkey's industry-leading 51 hours of battery life. Tune in for a deep dive that explains why this technology gives patients and professionals the edge they've been looking for. To learn more about Edge AI, visit StarkeyPro.com

Link to full transcript
Today, we're joined by Philippe De Ridder, founder of Board of Innovation and a seasoned entrepreneur renowned for his forward-thinking approach to business and innovation.

In today's discussion, Philippe shares insights from his extensive experience working with global giants like Microsoft and Amazon. We delve into how AI is reshaping the creative landscape and what it means for future entrepreneurs and innovators. Philippe also touches on the importance of maintaining a human-centric approach in an increasingly automated world.

Key Points We'll Cover Include:
Understanding AI's Role: How AI is becoming an integral part of creative processes and innovation strategies.
Human Creativity vs. AI: Philippe's perspective on the unique value of human creativity in the age of AI.
Navigating the Future: Strategies for entrepreneurs and businesses to stay relevant and competitive as technology evolves.

Join us as Philippe De Ridder unpacks the complexities of integrating AI into creative endeavors and offers a blueprint for thriving in this new digital renaissance.

Don't forget to subscribe for more insights and discussions on the latest in technology and innovation. Check out Philippe's work and contributions to the field to stay ahead in the evolving world of AI-driven creativity.
The advance of human capabilities has been led by people who were able to imagine things that were not possible and then muster the willpower to build them. That requires a combination of seeing the world a bit differently and being willing to take risks and push the limits to create something new.

Eric Adolphe, CEO of Forward Edge-AI, Inc., is a technology-savvy executive with over thirty years of success building high-growth firms focused on mission impact, revenue, and margin attainment, primarily in the national security sector. But he isn't interested in technology for technology's own sake. Instead, he believes that it has the potential to generate enormous benefits for all of humanity.

In this classic Supply Chain Now episode, sponsored by Microsoft, Eric joins co-hosts Scott Luton and Kevin L. Jackson to talk about his passion for innovation and willingness to go where no man has gone before:
Eric's opportunities to work first with NASA and now with the United States Space Force
Why innovators must dedicate themselves to building responsible and more inclusive AI capabilities than we have today
How AI is acting as an accelerant in manufacturing, making it possible to bring designs to market faster than ever before

Additional Links & Resources:
Connect with Eric: https://www.linkedin.com/in/ericadolphe/
Learn more about our hosts: https://supplychainnow.com/about
Learn more about Supply Chain Now: https://supplychainnow.com
Watch and listen to more Supply Chain Now episodes here: https://supplychainnow.com/program/supply-chain-now
Subscribe to Supply Chain Now on your favorite platform: https://supplychainnow.com/join
Work with us!
Download Supply Chain Now's NEW Media Kit: https://bit.ly/3XH6OVk
WEBINAR - Mastering Shipping: Insider Tips for Reliable and Cost-Effective Deliveries: https://bit.ly/3XdC3t5
WEBINAR - Creating the Unified Supply Chain Through the Symbiosis of People and Technology: https://bit.ly/3XDtrej
WEBINAR - Defending Your Business from Ransomware and Cyber Threats: https://bit.ly/4d0VGcf
WEBINAR - End-to-End Excellence: Integrating Final Mile Logistics: https://bit.ly/3ZlpE7U
WEBINAR - AI for SMBs: Unlocking Growth with Netstock's Benchmark Report: https://bit.ly/3AWtoCD

This episode is hosted by Scott Luton and Kevin L. Jackson. For additional information, please visit our dedicated show page at: https://supplychainnow.com/supply-chain-now-classic-nothing-impossible-1325
Send Everyday AI and Jordan a text message

Win a free year of ChatGPT or other prizes! Find out how.

Did Apple fail at AI? Or will some of their new AI announcements from yesterday change how we do business? We'll tackle those questions and also highlight 5 AI-powered features from Apple that can change the way your biz does its biz.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on Apple
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Apple's AI Strategy
2. iPhone 16 and AI Features
3. Improved Siri Features
4. Impact on Apple Ecosystem
5. Business Implications for Apple's AI

Timestamps:
02:05 Daily AI news
06:30 Recap of Apple AI announcements
08:25 Apple urges upgrades to boost iPhone sales
12:51 Apple stock stagnant after iPhone 16 announcement
13:35 Apple's stock unimpressive despite new AI features
18:23 Apple's writing tools enhance third-party app compatibility
20:26 Apple delayed text improvements despite possible earlier release
23:06 Apple Intelligence features boost photo and video indexing
27:20 Small feature, big impact for businesses
31:53 Apple updates rolling out slowly until 2025
32:31 Improved Siri enhances business integration and assistance
37:26 Apple's new feature calls out ChatGPT, Google
39:03 Real-time interaction with world via phone
44:57 Apple Intelligence to reach billions, altering work

Keywords: Jordan Wilson, new technology, business use cases, Apple iPhone sales, personal AI usage, Apple Intelligence, artificial intelligence, generative AI, future of work, Thanks a Million giveaway, everydayai.com, Everyday AI Podcast, AI trends, business growth, AI news updates, Stanford Study, Large language models, James Earl Jones, AI voice cloning, Responsible Military AI Blueprint, Apple iPhone 16, AI features, business implications, Apple ecosystem, visual AI, Focused Summaries, Apple Intelligence, stock market reaction, Edge AI advances, Microsoft WorkLab Podcast.
Send Everyday AI and Jordan a text message

Win a free year of ChatGPT or other prizes! Find out how.

The future of communication might look a whole lot different than it does today. Why? More devices. Faster responses. AI-powered knowledge. NVIDIA's Senior Vice President of Telecom, Ronnie Vasishta, joins us to give us the scoop.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan and Ronnie questions on 6G and AI
Related Episodes:
EP 184: On-Device AI: What it is and do we need it? What no one's talking about
EP 264: AI-Powered Devices: Do we actually need them?
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Edge AI and on-device large language models
2. Intersection of AI and 6G
3. Role of NVIDIA in 6G and AI
4. Learning opportunities and the value of 6G
5. Future communication possibilities

Timestamps:
01:35 Daily AI news
04:30 About Ronnie, NVIDIA senior VP leading telecom tech initiatives
08:12 5G brings faster communication speeds, enabling various applications
12:39 AI essential for seamless network and services
14:27 Evolution of 5G and AI in networks
17:38 Telecom operators investing in AI for services
26:05 Concerns about technology capabilities and network challenges
29:31 Advances in 5G network capabilities and monetization
32:10 AI will transform telecommunications for businesses

Keywords: Ronnie Vasishta, Edge AI, Large Language Models, Communication, Work, Prime Prompt Polish Chat GPT Course, On-device Language Translation, Visual Collaboration, Autonomous Vehicles, Network Capacity, Technological Demands, Internet-connected Large Language Models, Humanoid Robots, Computer Vision, Everyday AI Podcast, 6G, AI News Updates, AI Industry, NVIDIA, Telecom Infrastructure, Telecommunications Network, 5G, Improved Coverage, Infrastructure Changes, Intelligent Connectivity, Spectral Efficiency, Bottleneck Period, Generative AI, Business Models, Innovative Services.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/