Podcasts about ui ux

  • 904 PODCASTS
  • 1,556 EPISODES
  • 38m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Sep 16, 2025 LATEST

Best podcasts about ui ux

Show all podcasts related to ui ux

Latest podcast episodes about ui ux

Scriptnotes Podcast
704 - Places, Everyone

Scriptnotes Podcast

Play Episode Listen Later Sep 16, 2025 57:31


John welcomes back Liz Hannah (The Post, The Girl from Plainville) to discuss how to construct and communicate setting in a script. They talk about the balance between establishing a sense of space and avoiding the beginner's mistake of over-blocking. We also follow up on accountability groups, last looks, and French composers. Then, we answer listener questions on revisions and getting your Hollywood Card revoked. In our bonus segment for premium members, John and Liz consider how to explain screenwriting to your kids (and others).

LINKS:
Bob The Musical has a Director!
Download Weekend Read on the App Store
Oxo Carbon Steel Pan
Dehv Candle Company
Apply for a student license for Highland Pro!
Interested in being our new UI/UX designer? More info here!
Preorder the Scriptnotes Book!
Get a Scriptnotes T-shirt!
Check out the Inneresting Newsletter
Become a Scriptnotes Premium member, or gift a subscription
Subscribe to Scriptnotes on YouTube
Scriptnotes on Instagram
John August on Bluesky and Instagram
Liz Hannah on Instagram
Outro by Spencer Lackey (send us yours!)

Scriptnotes is produced by Drew Marquardt and edited by Matthew Chilelli. Email us at ask@johnaugust.com. You can download the episode here.

two & a half gamers
The 4 Generations of Rewarded Ads: Personalization + LiveOps = The Future of Offerwalls

two & a half gamers

Play Episode Listen Later Sep 10, 2025 50:32


In this special Two and a Half Gamers episode, Matej and Felix sit down with Zino Rost van Tonningen (TyrAds) to deep dive into the history, present, and future of rewarded monetization & offerwalls.

Key insights:

History of rewarded ads:
  • Early “incent installs” → rank-boosting campaigns (TapJoy, FreeMyApps).
  • Multi-reward systems (TapJoy, IronSource, Fyber).
  • Misplay breakthrough: timer-based playtime rewards + personalization.
  • AdJoe & others scaled Misplay's model into SDK solutions.

The shift to personalization:
  • Old offerwalls = one-size-fits-all.
  • New generation = hyper-personalized rewards per user.
  • Use of media source data (Google, Unity, AppLovin, etc.) to adapt rewards based on traffic quality.

Publisher perspective:
  • Offerwalls can contribute 5–30% of game revenue depending on genre.
  • Biggest impact: retaining non-payers & dolphins by giving them an alternative to IAP.
  • Integration fights today echo old mediation wars (bonuses, rev guarantees, exclusivity deals).

Best practices for choosing an offerwall partner:
  • LiveOps environment — events, hot deals, timed offers.
  • Transparency — explain revenue spikes/drops & media source impact.
  • Personalization — reward scaling, segmentation by user type & UA source.
  • UI/UX — aesthetics matter; no more “Windows 95” offerwalls.

Zino's TyrAds SDK v3.0:
  • One-time integration, no updates needed.
  • Customizable design to match game branding.
  • Hyper-personalized rewards, dynamic leveling systems.
  • LiveOps events triggered per user (push, in-app messages).

Takeaway: Rewarded monetization has entered its 4th generation: hyper-personalized, data-driven, and LiveOps-powered.

https://tyrads.com/

Get our MERCH NOW: 25gamers.com/shop
---------------------------------------
This is no BS gaming podcast 2.5 gamers session. Sharing actionable insights, dropping knowledge from our day-to-day User Acquisition, Game Design, and Ad monetization jobs. We are definitely not discussing the latest industry news, but having so much fun! Let's not forget this is a 4 a.m. conference discussion vibe, so let's not take it too seriously.

Panelists: Felix Braberg, Matej Lancaric
Special guest: Zino Rost van Tonningen
https://www.linkedin.com/in/rovato/
zino@tyrads.com

Join our slack channel here: https://join.slack.com/t/two-and-half-gamers/shared_invite/zt-2um8eguhf-c~H9idcxM271mnPzdWbipg

Chapters
00:00 Introduction to Rewarded User Acquisition
04:30 The Evolution of Rewarded Monetization
07:10 The Shift from Incentivized Installs to Quality KPIs
09:51 Innovations in Rewarded Advertising: Multi-Reward and Playtime Solutions
12:33 The Role of Personalization in Rewarded Monetization
14:56 Challenges in Current Rewarded Solutions
17:48 Evaluating Monetization Solutions: Key Considerations
20:15 The Importance of LiveOps in Engagement
23:06 Transparency and Optimization in Offer Walls
28:34 Differentiating Offer Walls for Monetization
30:44 The Importance of Data in Monetization Solutions
31:28 Personalization and User Engagement in Offer Walls
33:18 SDK Evolution: From Version 1 to Hyper-Personalization
36:14 Leveraging Machine Learning for Offer Wall Optimization
40:14 Engaging Users with LiveOps and Hot Deals
44:01 Dynamic Leveling Systems for Enhanced User Experience
46:58 Criteria for Effective Offer Wall Implementation
48:47 Revenue Impact and Client Engagement
---------------------------------------
Matej Lancaric
User Acquisition & Creatives Consultant
https://lancaric.me

Felix Braberg
Ad monetization consultant
https://www.felixbraberg.com

Zino Rost van Tonningen
CEO of TyrAds
https://www.linkedin.com/in/rovato/
zino@tyrads.com
---------------------------------------
Please share the podcast with your industry friends, dogs & cats. Especially cats! They love it!
Hit the Subscribe button on YouTube, Spotify, and Apple!
Please share feedback and comments - matej@lancaric.me

Scriptnotes Podcast
703 - Getting Period Right

Scriptnotes Podcast

Play Episode Listen Later Sep 9, 2025 63:45


John and Craig ask, how do you keep your period films from being a boring history lesson? They offer ten tips on keeping your setting from taking over your story, defining the scope of your research, and balancing the reality of the era with what's relevant to audiences today. We also follow up on navigating industry connections and answer three period-appropriate listener questions on transitions across time, accurate dialogue, and how to handle historical sensitivities on the page. In our bonus segment for premium members, John and Craig get into some toil and trouble as they give their rankings of the most iconic witches.

Links:
Apply for a student license for Highland Pro!
Interested in being our new UI/UX designer? More info here!
Mel Gibson rant at Joe Eszterhas caught on tape by Ben Child for The Guardian
Episode 129: The One with the Guys from Final Draft
Episode 314: Unforgiven
Episode 674: The One vvith Robert Eggers
Marielle Heller's Episode 212 and Episode 394
Standard Thermal (And a summary by CEO Austin Vernon)
Panthalassa
The Concept C Is the All-Electric Sports Car Kick-Starting Audi's Design Future by Jason Barlow for Wired
Countdown on Amazon Prime Video
Rank these iconic witches!
Preorder the Scriptnotes Book!
Get a Scriptnotes T-shirt!
Check out the Inneresting Newsletter
Become a Scriptnotes Premium member, or gift a subscription
Subscribe to Scriptnotes on YouTube
Scriptnotes on Instagram
John August on Bluesky and Instagram
Outro by Whit Morliss (send us yours!)

Scriptnotes is produced by Drew Marquardt and edited by Matthew Chilelli. Email us at ask@johnaugust.com. You can download the episode here.

TechnoPillz
XQR (parte 3): "Vede la luce il sistema di editing guidato dall'AI"

TechnoPillz

Play Episode Listen Later Sep 8, 2025 27:22


MLOps.community
LLM Search, UI/UX challenges, Context Engineering and the 80/20 of Eval

MLOps.community

Play Episode Listen Later Sep 5, 2025 52:36


AI Conversations Powered by Prosus Group

Nishikant Dhanuka talks about what it really takes to make AI agents useful—especially in e-commerce and productivity. From making them smarter with context (like user history and real-time data) to mixing chat and UI for smoother interactions, he breaks down what's working and what's not. He also shares why evals matter, how to test with real users, and why AI only succeeds when it actually makes life easier, not more complicated.

Guest speaker: Nishikant Dhanuka - Senior Director of AI at Prosus Group
Host: Demetrios Brinkmann - Founder of MLOps Community

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter (@mlopscommunity, https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/

#contextengineering #aiengineer #aiinfrastructure #podcast

Datacenter Technical Deep Dives
Designing Tomorrow: Where AI Meets the Human Touch

Datacenter Technical Deep Dives

Play Episode Listen Later Sep 4, 2025


Snigdha Joshi is a UI/UX Designer: In this session, we unravel how artificial intelligence is redefining the future of creative technology from intuitive UI/UX design to dynamic content creation, generative art, immersive storytelling, and beyond. Gain insight into how AI is unlocking new dimensions of expression, streamlining design processes, and giving rise to transformative career roles at the intersection of imagination and intelligence. 00:00 - Intro 03:50 - Designing Tomorrow 05:00 - Fear of AI 10:29 - User Research 24:18 - Beyond Chat 36:37 - Tools that Do More 41:45 - Final Takeaway 51:22 - Q&A How to find Snigdha: https://www.linkedin.com/in/snigdha-joshi-20a476253/ Snigdha's links: https://snigdhajoshi.framer.website

Finding Genius Podcast
Making Sense Of Emerging Tech: Matt Ferrell On AI & Its Real-World Applications

Finding Genius Podcast

Play Episode Listen Later Sep 3, 2025 39:55


In this episode, Matt Ferrell joins us to discuss how emerging technologies are shaping the way we live, work, and think — focusing in particular on the rapidly evolving field of artificial intelligence. Matt is the founder and creator of Undecided, a platform dedicated to making sense of the fast-moving world of innovation.  Through his videos and analysis, he breaks down complex topics like renewable energy, smart home technology, and AI into clear, practical insights that help people understand not just what's coming, but why it matters… Hit play to uncover: How people are using AI in data research and development. How to use AI as a tool to accelerate individual projects. What's on the horizon for new tech. Matt has built his YouTube channel to over 1 million subscribers who are passionate about renewable and sustainable technologies. By leveraging his diverse background and decades of experience in UI/UX design, he is paving the way for a future where cutting-edge innovations are not only accessible, but also practical and sustainable for everyday life.

Digi-Tools In Accrual World
Starling Acquires Ember, QuickBooks Gets Agentic AI & Xero Emails Your Clients | Accounting Tech News

Digi-Tools In Accrual World

Play Episode Listen Later Sep 1, 2025 56:54


This week's episode (sponsored by FreeAgent ) dives into the latest accounting tech news and fintech news, with sharp takes on major acquisitions and the future of user experience. We cover: The big fintech news as Starling acquire Ember — and what it means when a challenger bank strategically pulls the rug out from its high street competitors. Xero's controversial decision to email your clients directly — and the debate on who should own the client relationship. Xero's long-awaited navigation redesign and Syft-powered analytics — and whether this finally brings the user experience up to modern standards. The launch of QuickBooks agentic AI experiences — and how they're leveraging the entire Intuit ecosystem to go beyond simple bookkeeping. Plus: A deep dive into UI/UX with iplicit's CPO Paul Sparks, exploring why modern design and a mobile-first approach are just as critical as the AI hype. We also hear from Tugger's Craig Morrall on why your P&L is only half the story, and how to get true business insight by combining financial and operational data. 00:00 Welcome to Digi-Tools in Accrual World! 04:43 User Experience Matters: Modernising Accounting Platforms - with Paul Sparks from iplicit 05:07 Mobile App Development and User Experience 07:08 Platform Usage Analysis by Role 08:32 UI/UX Modernisation Plans 09:54 AI Assistant Integration 14:31 App News 15:18 Starling Bank Acquires Tax Software Ember 20:10 Xero will contact your clients on updates 23:57 Xero Analytics Gets an Upgrade with Syft 27:17 Xero is introducing a redesigned navigation 30:34 QuickBooks is launching new agentic AI experiences 35:00 Sage has announced the launch of Workforce Intelligence 39:05 Sage expands capabilities for Intacct 44:10 FreeAgent Adds Excel Export for Management Reports 44:40 Data-Driven Decisions: Combining Financial and Operational Reporting:  with Tugger 45:28 Combining Financial and Operational Data 48:04 Data Ownership and Reporting Structure 50:03 Business Drivers for Integrated Reporting.. 53:27 Legacy Systems Integration 55:53 Closing Remarks

Crazy Wisdom
Episode #483: The Limits of Logic: Probabilistic Minds in a Messy World

Crazy Wisdom

Play Episode Listen Later Aug 25, 2025 49:15


In this episode of Crazy Wisdom, Stewart Alsop sits down with Derek Osgood, CEO of DoubleO.ai, to talk about the challenges and opportunities of building with AI agents. The conversation ranges from the shift from deterministic to probabilistic processes, to how humans and LLMs think differently, to why lateral thinking, humor, and creative downtime matter for true intelligence. They also explore the future of knowledge work, the role of context engineering and memory in making agents useful, and the culture of talent, credentials, and hidden gems in Silicon Valley. You can check out Derek's work at doubleo.ai or connect with him on LinkedIn.

Check out this GPT we trained on the conversation

Timestamps
00:00 Derek Osgood explains what AI agents are, the challenge of reliability and repeatability, and the difference between chat-based and process-based agents.
05:00 Conversation shifts to probabilistic vs deterministic systems, with examples of agents handling messy data like LinkedIn profiles.
10:00 Stewart Alsop and Derek discuss how humans reason compared to LLMs, token vs word prediction, and how language shapes action.
15:00 They question whether chat interfaces are the right UX for AI, weighing structure, consistency, and the persistence of buttons in knowledge work.
20:00 Voice interaction comes up, its sci-fi allure, and why unstructured speech makes it hard without stronger memory and higher-level reasoning.
25:00 Derek unpacks OpenAI's approach to memory as active context retrieval, context engineering, and why vector databases aren't the full answer.
30:00 They examine talent wars in AI, credentialism, signaling, and the difference between PhD-level model work and product design for agents.
35:00 Leisure and creativity surface, linking downtime, fantasy, and imagination to better lateral thinking in knowledge work.
40:00 Discussion of asynchronous AI reasoning, longer time horizons, and why extending “thinking time” could change agent behavior.
45:00 Derek shares how Double O orchestrates knowledge work with natural language workflows, making agents act like teammates.
50:00 They close with reflections on re-skilling, learning to work with LLMs, BS detection, and the future of critical thinking with AI.

Key Insights
One of the biggest challenges in building AI agents is not just creating them but ensuring their reliability, accuracy, and repeatability. It's easy to build a demo, but the “last mile” of making an agent perform consistently in the messy, unstructured real world is where the hard problems live.
The shift from deterministic software to probabilistic agents reflects the complexity of real-world data and processes. Deterministic systems work only when inputs and outputs are cleanly defined, whereas agents can handle ambiguity, search for missing context, and adapt to different forms of information.
Humans and LLMs share similarities in reasoning—both operate like predictive engines—but the difference lies in agency and lateral thinking. Humans can proactively choose what to do without direction and make wild connections across unrelated experiences, something current LLMs still struggle to replicate.
Chat interfaces may not be the long-term solution for interacting with AI. While chat offers flexibility, it is too unstructured for many use cases. Derek argues for a hybrid model where structured UI/UX supports repeatable workflows, while chat remains useful as one tool within a broader system.
Voice interaction carries promise but faces obstacles. The unstructured nature of spoken input makes it difficult for agents to act reliably without stronger memory, better context retrieval, and a more abstract understanding of goals. True voice-first systems may require progress toward AGI.
Much of the magic in AI comes not from the models themselves but from context engineering. Effective systems don't just rely on vector databases and embeddings—they combine full context, partial context, and memory retrieval to create a more holistic understanding of user goals and history.
Beyond the technical, the episode highlights cultural themes: credentialism, hidden talent, and the role of leisure in creativity. Derek critiques Silicon Valley's obsession with credentials and signaling, noting that true innovation often comes from hidden gem hires and from giving the brain downtime to make unexpected lateral connections that drive creative breakthroughs.

ATX DAO Podcast
E66: What will it take to get Web3 Adoption?

ATX DAO Podcast

Play Episode Listen Later Aug 21, 2025 25:29


Summary
Web3 adoption is one of the most talked-about challenges in blockchain, but what does it really mean? In this episode of the ATX DAO Podcast, we dig into what drives real user adoption beyond hype and token incentives. From sticky product design and gaming loops to wallet abstraction, UI/UX friction, and multi-chain interoperability, the conversation explores how projects can move past “play-to-earn” and “social-fi” models toward genuine, mainstream usage. We also break down why abstraction matters, how wallets are evolving, and what it will take for blockchains to feel as seamless as the internet itself.

Chapters
00:00 Intro
03:05 What Web3 Adoption Means
05:20 Gaming, Finance, and Sticky Loops
07:45 Tokens, Incentives & Product Merit
09:44 UI/UX and Product Design
12:26 Design Choices and User Experience
14:47 Seed Phrases & Abstraction
17:07 Wallets and Multi-Chain Friction
19:31 Chain Tribalism & Interoperability
21:50 From DApps to Apps
24:18 Wallet Innovations & Gas Abstraction
26:40 Closing Thoughts

Check out our friends at Tequila 512:
Website: https://www.tequila512.com
Socials: X (Twitter) | Instagram | TikTok | Facebook

To learn more about ATX DAO:
Check out the ATX DAO website
Follow @ATXDAO on X (Twitter)
Subscribe to our newsletter
Connect with us on LinkedIn
Join the community in the ATX DAO Discord

Connect with the ATX DAO Podcast team on X (Twitter):
Ash: @ashinthewild
Luke: @Luke152

Support the Podcast:
If you enjoyed this episode, please leave us a review and share it with your network. Subscribe for more insights, interviews, and deep dives into the world of Web 3.

Tools & Resources We Love
Podcast Recording & Editing - Riverside FM: We use Riverside FM to record and edit our episodes. If you're interested in getting into podcasting or just recording remote videos, be sure to check them out!

Rede Geek podcasts
CHIPS PODCAST // CLAUDIO GUSMÃO

Rede Geek podcasts

Play Episode Listen Later Aug 13, 2025 56:51


In the twenty-fifth episode we have the pleasure of talking with Claudio Gusmão, a reference in Game UX, games research, and games education in Latin America. A Master in Design with an MBA and more than 15 years working as a professor, he is also the founder of UX4INDIE, a pioneering startup that offers UX playtests and QA for independent studios. Claudio coordinates the Spgame Lab, the first public game-testing and QA laboratory in Brazil, and is part of the postgraduate faculty in Game Design and UI/UX at IED São Paulo. Gamescom Latam 2025. An unmissable conversation about player-centered design, usability testing, and how to strengthen the indie ecosystem.

Design for Change
315_How to crack ATS Resumes UIUX Designers _24 Hours Results

Design for Change

Play Episode Listen Later Aug 13, 2025 13:34


Join my upcoming webinar
My Free Resume calculator
For FREE 100 AI Tools & 250 Magical prompts sheet please Click here to download FREE

Design for Change
Ep 314- What next skill to learn as a UIUX deigner_Confusion clarified

Design for Change

Play Episode Listen Later Aug 12, 2025 13:14


Join my upcoming webinar
For FREE 100 AI Tools & 250 Magical prompts sheet please Click here to download FREE

Nice Games Club
Revisiting Your Games (with Andy Korth)

Nice Games Club

Play Episode Listen Later Aug 7, 2025


As is tradition, a couple of episodes after a hundredth we invite one or more of our pals the Korths into the clubhouse. This time we were lucky enough to have all of the family Korths (coming soon), and a bonus interview with Andy Korth! We talk with him about his recent work updating Verdant Skies.

Revisiting Your Games
Game Design
Production
UI / UX
Verdant Skies - Howling Moon Software
Previously we had Beth Korth on to talk about Narrative Design, in general and related to Verdant Skies.
Narrative Design
We had the whole Korth family on in this episode.
Families in Games
Andy and Beth were both on in a deep dive into the code of Verdant Skies.
Code Comment: "Verdant Skies"
And here's the episode that started the tradition!
Steam Greenlight
Gamers using game controllers for PC gaming has tripled since 2018 — Steam surv… - Christopher Harper, Tom's Hardware

Design for Change
Ep 312- Not Getting call backs from UIUX Recruiters

Design for Change

Play Episode Listen Later Aug 7, 2025 15:24


Join my upcoming webinar
For FREE 100 AI Tools & 250 Magical prompts sheet please Click here to download FREE

Design for Change
Ep 311- Reducing 90 Days Notice period getting hired as a UIUX Designer

Design for Change

Play Episode Listen Later Aug 5, 2025 14:18


Join my upcoming webinar
For FREE 100 AI Tools & 250 Magical prompts sheet please Click here to download FREE

Obehi Podcast: In-depth interviews
How UX/UI Design Can Transform Your Business | René Morris on Product Strategy & Innovation

Obehi Podcast: In-depth interviews

Play Episode Listen Later Aug 4, 2025 46:57


In this episode of the Obehi Podcast, we sit down with René Morris, a seasoned UX/UI Product Designer with over 12 years of experience crafting compelling user experiences for some of the world's most recognized brands — including Ford, Shell, Blue Cross Blue Shield, and American Greetings.

René shares her expert insights on the intersection of product design and business strategy in the U.S. market, diving into how thoughtful UI/UX can drive both customer engagement and business growth. From usability principles to the psychology behind user interfaces, she discusses how design thinking can be a powerful tool for companies looking to stay competitive in today's digital landscape.

Larks
The Lost Files 1

Larks

Play Episode Listen Later Jul 31, 2025 27:54


S3 - EP: ** (Recorded: June 4, 2023) Ever wonder what didn't make the final cut? Welcome to Episodes That Never Made It. Just a raw & unfiltered dive into literally nothing. These were either too real, too random, or just too risky to air…until now.

Follow Emmy:
www.instagram.com/officialemmydrake/
www.twitter.com/EmaazDr

Make sure you rate, share and leave a comment wherever you listen to the podcast telling me how much you enjoyed the episode! Better ratings mean even more people find out about the podcast! Don't forget to use the hashtag #larkspodcast

This episode is sponsored by Duowork, a software agency that helps businesses, startups and brands achieve success through customised software solutions. Their services include websites, web apps, mobile apps, e-commerce, UI/UX designs and other related software services. Check them out: https://duowork.tech

BlockHash: Exploring the Blockchain
Ep. 538 Luke Barwikowski | How Pixels is Leveling Up Web3 Gaming

BlockHash: Exploring the Blockchain

Play Episode Listen Later Jul 4, 2025 39:57


For episode 538 of the BlockHash Podcast, host Brandon Zemp is joined by Luke Barwikowski, Founder of Pixels, while at Permissionless 4.

Pixels is a captivating, open-ended world of farming and exploration, built one pixel at a time. Players gather resources, advance skills, and build relationships while exploring the story and quests woven throughout the Pixels Universe.

⏳ Timestamps:
0:00 | Introduction
1:16 | Who is Luke Barwikowski?
8:04 | Future of Web3 Gaming
17:12 | How does Pixels work?
21:07 | UI/UX issues in Web3
26:42 | Pixels at Permissionless
32:34 | Pixels website, socials & community
36:56 | RAPID FIRE SESSION

Nice Games Club
3D Workflows for Indie Devs (with Freya Holmér)

Nice Games Club

Play Episode Listen Later Jul 3, 2025


We welcome Freya Holmér back into the clubhouse to talk about 3D game development for independent game developers. We get into the specifics of the tools available for devs and why they work and don't work for different teams, but it's mostly an excuse to talk about Freya's new 3D modeling software.

Half-Edge - Freya Holmér
Shader Forge - Freya Holmér, GitHub
Shapes - Freya Holmér, Unity Asset Store

3D Workflows for Indie Devs
Game Design
Tools
UI / UX
Valve Hammer Editor - Valve Developer Community
PicoCAD - Johan Peitz, itch.io
TrenchBroom - GitHub
Crocotile 3D
What's the difference between OBJ and FBX? And when to use what - Reddit

Freya Holmér
Guest
Freya Holmér is a game developer, co-founder of Neat Corporation (makers of VR stealth game Budget Cuts), and creator of developer tools including Shader Forge and Shapes. She spends a significant amount of time interacting with people on her gamedev Twitch stream.

External links:
Freya on Twitch
Freya on Twitter @FreyaHolmer
Freya on YouTube
Freya on Instagram @freya_holmer
Neat Corporation's Website
Neat Corp on Twitter @neatcorp
Budget Cuts on Steam
Budget Cuts on Oculus
Shader Forge on GitHub

Engineering Kiosk
#202 Indie-Hacking, Build in Public & der Traum vom eigenen Produkt mit Sebastian Röhl

Engineering Kiosk

Play Episode Listen Later Jul 1, 2025 73:38


Build an app, enjoy your freedom, and make a living from it? It sounds like the ultimate developer fantasy, but it's also a seriously hard challenge. What really lies behind the buzzwords indie hacking, bootstrapping, and build in public is the subject of this episode.

We talk with Sebastian Röhl: ex-software developer, indie maker, and creator of Habit Kit, one of the most popular habit-tracking apps in the app stores. Sebastian takes us along on his honest journey: quitting his job, a year of full risk, setbacks, doubts, and then, step by step, the breakthrough. We learn why a side project never just codes itself after hours, what community really delivers, how building in public and fast feedback help, and why App Store Optimization (ASO) became his most important marketing channel. Along the way we also learn why pricing tricks with lifetime and subscription models are such a hot topic, and how Sebastian manages his tech stack with Flutter for both platforms.

Plain talk, inspiration, and a deep dive: what works, what can go badly wrong, which hurdles indie app development really brings with it, and what the BEST next step is if you want to get started yourself.

Bonus: developers apparently really do love GitHub-style heatmaps.

You can find our current advertising partners at https://engineeringkiosk.dev/partners
Quick feedback on the episode:

In-Ear Insights from Trust Insights
In-Ear Insights: How Generative AI Reasoning Models Work

In-Ear Insights from Trust Insights

Play Episode Listen Later Jun 11, 2025


In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the Apple AI paper and critical lessons for effective prompting, plus a deep dive into reasoning models. You’ll learn what reasoning models are and why they sometimes struggle with complex tasks, especially when dealing with contradictory information. You’ll discover crucial insights about AI’s “stateless” nature, which means every prompt starts fresh and can lead to models getting confused. You’ll gain practical strategies for effective prompting, like starting new chats for different tasks and removing irrelevant information to improve AI output. You’ll understand why treating AI like a focused, smart intern will help you get the best results from your generative AI tools. Tune in to learn how to master your AI interactions! Watch the video here: Can’t see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-how-generative-ai-reasoning-models-work.mp3 Download the MP3 audio here. Need help with your company’s data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! [podcastsponsor] Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn – 00:00 In this week’s In Ear Insights, there is so much in the AI world to talk about. One of the things that came out recently that I think is worth discussing, because we can talk about the basics of good prompting as part of it, Katie, is a paper from Apple. Apple’s AI efforts themselves have stalled a bit, showing that reasoning models, when given very complex puzzles—logic-based puzzles or spatial-based puzzles, like moving blocks from stack to stack and getting them in the correct order—hit a wall after a while and then just collapse and can’t do anything. So, the interpretation of the paper is that there are limits to what reasoning models can do and that they can kind of confuse themselves. On LinkedIn and social media and stuff, Christopher S. Penn – 00:52 Of course, people have taken this to the illogical extreme, saying artificial intelligence is stupid, nobody should use it, or artificial general intelligence will never happen. None of that is within the paper. Apple was looking at a very specific, narrow band of reasoning, called deductive reasoning. So what I thought we’d talk about today is the paper itself to a degree—not a ton about it—and then what lessons we can learn from it that will make our own AI practices better. So to start off, when we talk about reasoning, Katie, particularly you as our human expert, what does reasoning mean to the human? Katie Robbert – 01:35 When I think, if you say, “Can you give me a reasonable answer?” or “What is your reason?” Thinking about the different ways that the word is casually thrown around for humans. The way that I think about it is, if you’re looking for a reasonable answer to something, then that means that you are putting the expectation on me that I have done some kind of due diligence and I have gathered some kind of data to then say, “This is the response that I’m going to give you, and here are the justifications as to why.” So I have some sort of a data-backed thinking in terms of why I’ve given you that information. 
When I think about a reasoning model, Katie Robbert – 02:24 Now, I am not the AI expert on the team, so this is just my, I’ll call it, amateurish understanding of these things. So, a reasoning model, I would imagine, is similar in that you give it a task and it’s, “Okay, I’m going to go ahead and see what I have in my bank of information for this task that you’re asking me about, and then I’m going to do my best to complete the task.” When I hear that there are limitations to reasoning models, I guess my first question for you, Chris, is if these are logic problems—complete this puzzle or unfurl this ball of yarn, kind of a thing, a complex thing that takes some focus. Katie Robbert – 03:13 It’s not that AI can’t do this; computers can do those things. So, I guess what I’m trying to ask is, why can’t these reasoning models do it if computers in general can do those things? Christopher S. Penn – 03:32 So you hit on a really important point. The tasks that are in this reasoning evaluation are deterministic tasks. There’s a right and wrong answer, and what they’re supposed to test is a model’s ability to think through. Can it get to that? So a reasoning model—I think this is a really great opportunity to discuss this. And for those who are listening, this will be available on our YouTube channel. A reasoning model is different from a regular model in that it thinks things through in sort of a first draft. So I’m showing DeepSeq. There’s a button here called DeepThink, which switches models from V3, which is a non-reasoning model, to a reasoning model. So watch what happens. I’m going to type in a very simple question: “Which came first, the chicken or the egg?” Katie Robbert – 04:22 And I like how you think that’s a simple question, but that’s been sort of the perplexing question for as long as humans have existed. Christopher S. Penn – 04:32 And what you see here is this little thinking box. This thinking box is the model attempting to solve the question first in a rough draft. And then, if I had closed up, it would say, “Here is the answer.” So, a reasoning model is essentially—we call it, I call it, a hidden first-draft model—where it tries to do a first draft, evaluates its own first draft, and then produces an answer. That’s really all it is. I mean, yes, there’s some mathematics going on behind the scenes that are probably not of use to folks listening to or watching the podcast. But at its core, this is what a reasoning model does. Christopher S. Penn – 05:11 Now, if I were to take the exact same prompt, start a new chat here, and instead of turning off the deep think, what you will see is that thinking box will no longer appear. It will just try to solve it as is. In OpenAI’s ecosystem—the ChatGPT ecosystem—when you pull down that drop-down of the 82 different models that you have a choice from, there are ones that are called non-reasoning models: GPT4O, GPT4.1. And then there are the reasoning models: 0304 mini, 04 mini high, etc. OpenAI has done a great job of making it as difficult as possible to understand which model you should use. But that’s reasoning versus non-reasoning. Google, very interestingly, has moved all of their models to reasoning. Christopher S. Penn – 05:58 So, no matter what version of Gemini you’re using, it is a reasoning model because Google’s opinion is that it creates a better response. 
So, Apple was specifically testing reasoning models because in most tests—if I go to one of my favorite websites, ArtificialAnalysis.ai, which sort of does a nice roundup of smart models—you’ll notice that reasoning models are here. And if you want to check this out and you’re listening, ArtificialAnalysis.ai is a great benchmark set that wraps up all the other benchmarks together. You can see that the leaderboards for all the major thinking tests are all reasoning models, because that ability for a model to talk things out by itself—really having a conversation with self—leads to much better results. This applies even for something as simple as a blog post, like, “Hey, let’s write a blog post about B2B marketing.” Christopher S. Penn – 06:49 Using a reasoning model will let the model basically do its own first draft, critique itself, and then produce a better result. So that’s what a reasoning model is, and why they’re so important. Katie Robbert – 07:02 But that didn’t really answer my question, though. I mean, I guess maybe it did. And I think this is where someone like me, who isn’t as technically inclined or isn’t in the weeds with this, is struggling to understand. So I understand what you’re saying in terms of what a reasoning model is. A reasoning model, for all intents and purposes, is basically a model that’s going to talk through its responses. I’ve seen this happen in Google Gemini. When I use it, it’s, “Okay, let me see. You’re asking me to do this. Let me see what I have in the memory banks. Do I have enough information? Let me go ahead and give it a shot to answer the question.” That’s basically the synopsis of what you’re going to get in a reasoning model. Katie Robbert – 07:48 But if computers—forget AI for a second—if calculations in general can solve those logic problems that are yes or no, very black and white, deterministic, as you’re saying, why wouldn’t a reasoning model be able to solve a puzzle that only has one answer? Christopher S. Penn – 08:09 For the same reason they can’t do math, because the type of puzzle they’re doing is a spatial reasoning puzzle which requires—it does have a right answer—but generative AI can’t actually think. It is a probabilistic model that predicts based on patterns it’s seen. It’s a pattern-matching model. It’s the world’s most complex next-word prediction machine. And just like mathematics, predicting, working out a spatial reasoning puzzle is not a word problem. You can’t talk it out. You have to be able to visualize in your head, map it—moving things from stack to stack—and then coming up with the right answers. Humans can do this because we have many different kinds of reasoning: spatial reasoning, musical reasoning, speech reasoning, writing reasoning, deductive and inductive and abductive reasoning. Christopher S. Penn – 09:03 And this particular test was testing two of those kinds of reasoning, one of which models can’t do because it’s saying, “Okay, I want a blender to fry my steak.” No matter how hard you try, that blender is never going to pan-fry a steak like a cast iron pan will. The model simply can’t do it. In the same way, it can’t do math. It tries to predict patterns based on what’s been trained on. But if you’ve come up with a novel test that the model has never seen before and is not in its training data, it cannot—it literally cannot—repeat that task because it is outside the domain of language, which is what it’s predicting on. Christopher S. 
Penn – 09:42 So it’s a deterministic task, but it’s a deterministic task outside of what the model can actually do and has never seen before. Katie Robbert – 09:50 So then, if I am following correctly—which, I’ll be honest, this is a hard one for me to follow the thread of thinking on—if Apple published a paper that large language models can’t do this theoretically, I mean, perhaps my assumption is incorrect. I would think that the minds at Apple would be smarter than collectively, Chris, you and I, and would know this information—that was the wrong task to match with a reasoning model. Therefore, let’s not publish a paper about it. That’s like saying, “I’m going to publish a headline saying that Katie can’t run a five-minute mile; therefore, she’s going to die tomorrow, she’s out of shape.” No, I can’t run a five-minute mile. That’s a fact. I’m not a runner. I’m not physically built for it. Katie Robbert – 10:45 But now you’re publishing some kind of information about it that’s completely fake and getting people in the running industry all kinds of hyped up about it. It’s irresponsible reporting. So, I guess that’s sort of my other question. If the big minds at Apple, who understand AI better than I ever hope to, know that this is the wrong task paired with the wrong model, why are they getting us all worked up about this thing by publishing a paper on it that sounds like it’s totally incorrect? Christopher S. Penn – 11:21 There are some very cynical hot takes on this, mainly that Apple’s own AI implementation was botched so badly that they look like a bunch of losers. We’ll leave that speculation to the speculators on LinkedIn. Fundamentally, if you read the paper—particularly the abstract—one of the things they were trying to test is, “Is it true?” They did not have proof that models couldn’t do this. Even though, yes, if you know language models, you would know this task is not well suited to it in the same way that they’re really not suited to geography. Ask them what the five nearest cities to Boston are, show them a map. They cannot figure that out in the same way that you and I use actual spatial reasoning. Christopher S. Penn – 12:03 They’re going to use other forms of essentially tokenization and prediction to try and get there. But it’s not the same and it won’t give the same answers that you or I will. It’s one of those areas where, yeah, these models are very sophisticated and have a ton of capabilities that you and I don’t have. But this particular test was on something that they can’t do. That’s asking them to do complex math. They cannot do it because it’s not within the capabilities. Katie Robbert – 12:31 But I guess that’s what I don’t understand. If Apple’s reputation aside, if the data scientists at that company knew—they already knew going in—it seems like a big fat waste of time because you already know the answer. You can position it, however, it’s scientific, it’s a hypothesis. We wanted to prove it wasn’t true. Okay, we know it’s not true. Why publish a paper on it and get people all riled up? If it is a PR play to try to save face, to be, “Well, it’s not our implementation that’s bad, it’s AI in general that’s poorly constructed.” Because I would imagine—again, this is a very naive perspective on it. Katie Robbert – 13:15 I don’t know if Apple was trying to create their own or if they were building on top of an existing model and their implementation and integration didn’t work. Therefore, now they’re trying to crap all over all of the other model makers. 
It seems like a big fat waste of time. When I—if I was the one who was looking at the budget—I’m, “Why do we publish that paper?” We already knew the answer. That was a waste of time and resources. What are we doing? I’m genuinely, again, maybe naive. I’m genuinely confused by this whole thing as to why it exists in the first place. Christopher S. Penn – 13:53 And we don’t have answers. No one from Apple has given us any. However, what I think is useful here for those of us who are working with AI every day is some of the lessons that we can learn from the paper. Number one: the paper, by the way, did not explain particularly well why it thinks models collapsed. It actually did, I think, a very poor job of that. If you’ve worked with generative AI models—particularly local models, which are models that you run on your computer—you might have a better idea of what happened, that these models just collapsed on these reasoning tasks. And it all comes down to one fundamental thing, which is: every time you have an interaction with an AI model, these models are called stateless. They remember nothing. They remember absolutely nothing. Christopher S. Penn – 14:44 So every time you prompt a model, it’s starting over from scratch. I’ll give you an example. We’ll start here. We’ll say, “What’s the best way to cook a steak?” Very simple question. And it’s going to spit out a bunch of text behind the scenes. And I’m showing my screen here for those who are listening. You can see the actual prompt appearing in the text, and then it is generating lots of answers. I’m going to stop that there just for a moment. And now I’m going to ask the same question: “Which came first, the chicken or the egg?” Christopher S. Penn – 15:34 The history of the steak question is also part of the prompt. So, I’ve changed conversation. You and I, in a chat or a text—group text, whatever—we would just look at the most recent interactions. AI doesn’t do that. It takes into account everything that is in the conversation. So, the reason why these models collapsed on these tasks is because they were trying to solve it. And when they’re thinking aloud, remember that first draft we showed? All of the first draft language becomes part of the next prompt. So if I said to you, Katie, “Let me give you some directions on how to get to my house.” First, you’re gonna take a right, then you take a left, and then you’re gonna go straight for two miles, and take a right, and then. Christopher S. Penn – 16:12 Oh, wait, no—actually, no, there’s a gas station. Left. No, take a left there. No, take a right there, and then go another two miles. If I give you those instructions, which are full of all these back twists and turns and contradictions, you’re, “Dude, I’m not coming over.” Katie Robbert – 16:26 Yeah, I’m not leaving my house for that. Christopher S. Penn – 16:29 Exactly. Katie Robbert – 16:29 Absolutely not. Christopher S. Penn – 16:31 Absolutely. And that’s what happens when these reasoning models try to reason things out. They fill up their chat with so many contradicting answers as they try to solve the problem that on the next turn, guess what? They have to reprocess everything they’ve talked about. And so they just get lost. Because they’re reading the whole conversation every time as though it was a new conversation. They’re, “I don’t know what’s going on.” You said, “Go left,” but they said, “Go right.” And so they get lost. 
So here’s the key thing to remember when you’re working with any generative AI tool: you want to keep as much relevant stuff in the conversation as possible and remove or eliminate irrelevant stuff. Christopher S. Penn – 17:16 So it’s a really bad idea, for example, to have a chat where you’re saying, “Let’s write a blog post about B2B marketing.” And then say, “Oh, I need to come up with an ideal customer profile.” Because all the stuff that was in the first part about your B2B marketing blog post is now in the conversation about the ICP. And so you’re polluting it with a less relevant piece of text. So, there are a couple rules. Number one: try to keep each chat distinct to a specific task. I’m writing a blog post in the chat. Oh, I want to work on an ICP. Start a new chat. Start a new chat. And two: if you have a tool that allows you to do it, never say, “Forget what I said previously. And do this instead.” It doesn’t work. Instead, delete if you can, the stuff that was wrong so that it’s not in the conversation history anymore. Katie Robbert – 18:05 So, basically, you have to put blinders on your horse to keep it from getting distracted. Christopher S. Penn – 18:09 Exactly. Katie Robbert – 18:13 Why isn’t this more common knowledge in terms of how to use generative AI correctly or a reasoning model versus a non-reasoning model? I mean, again, I look at it from a perspective of someone who’s barely scratching the surface of keeping up with what’s happening, and it feels—I understand when people say it feels overwhelming. I feel like I’m falling behind. I get that because yes, there’s a lot that I can do and teach and educate about generative AI, but when you start to get into this kind of minutiae—if someone opened up their ChatGPT account and said, “Which model should I use?”—I would probably look like a deer in headlights. I’d be, “I don’t know.” I’d probably. Katie Robbert – 19:04 What I would probably do is buy myself some time and start with, “What’s the problem you’re trying to solve? What is it you’re trying to do?” while in the background, I’m Googling for it because I feel this changes so quickly that unless you’re a power user, you have no idea. It tells you at a basic level: “Good for writing, great for quick coding.” But O3 uses advanced reasoning. That doesn’t tell me what I need to know. O4 mini high—by the way, they need to get a brand specialist in there. Great at coding and visual learning. But GPT 4.1 is also great for coding. Christopher S. Penn – 19:56 Yes, of all the major providers, OpenAI is the most incoherent. Katie Robbert – 20:00 It’s making my eye twitch looking at this. And I’m, “I just want the model to interpret the really weird dream I had last night. Which one am I supposed to pick?” Christopher S. Penn – 20:10 Exactly. So, to your answer, why isn’t this more common? It’s because this is the experience almost everybody has with generative AI. What they don’t experience is this: where you’re looking at the underpinnings. You’ve opened up the hood, and you’re looking under the hood and going, “Oh, that’s what’s going on inside.” And because no one except for the nerds have this experience—which is the bare metal looking behind the scenes—you don’t understand the mechanism of why something works. 
And because of that, you don’t know how to tune it for maximum performance, and you don’t know these relatively straightforward concepts that are hidden because the tech providers, somewhat sensibly, have put away all the complexity that you might want to use to tune it. Christopher S. Penn – 21:06 They just want people to use it and not get overwhelmed by an interface that looks like a 747 cockpit. That oversimplification makes these tools harder to use to get great results out of, because you don’t know when you’re doing something that is running contrary to what the tool can actually do, like saying, “Forget previous instructions, do this now.” Yes, the reasoning models can try and accommodate that, but at the end of the day, it’s still in the chat, it’s still in the memory, which means that every time that you add a new line to the chat, it’s having to reprocess the entire thing. So, I understand from a user experience why they’ve oversimplified it, but they’ve also done an absolutely horrible job of documenting best practices. They’ve also done a horrible job of naming these things. Christopher S. Penn – 21:57 Ironically, of all those model names, O3 is the best model to use. Be, “What about 04? That’s a number higher.” No, it’s not as good. “Let’s use 4.” I saw somebody saying, “GPT 401 is a bigger number than 03.” So 4:1 is a better model. No, it’s not. Katie Robbert – 22:15 But that’s the thing. To someone who isn’t on the OpenAI team, we don’t know that. It’s giving me flashbacks and PTSD from when I used to manage a software development team, which I’ve talked about many times. And one of the unimportant, important arguments we used to have all the time was version numbers. So, every time we released a new version of the product we were building, we would do a version number along with release notes. And the release notes, for those who don’t know, were basically the quick: “Here’s what happened, here’s what’s new in this version.” And I gave them a very clear map of version numbers to use. Every time we do a release, the number would increase by whatever thing, so it would go sequentially. Katie Robbert – 23:11 What ended up happening, unsurprisingly, is that they didn’t listen to me and they released whatever number the software randomly kicked out. Where I was, “Okay, so version 1 is the CD-ROM. Version 2 is the desktop version. Versions 3 and 4 are the online versions that don’t have an additional software component. But yet, within those, okay, so CD-ROM, if it’s version one, okay, update version 1.2, and so on and so forth.” There was a whole reasoning to these number systems, and they were, “Okay, great, so version 0.05697Q.” And I was, “What does that even mean?” And they were, “Oh, well, that’s just what the system spit out.” I’m, “That’s not helpful.” And they weren’t thinking about it from the end user perspective, which is why I was there. Katie Robbert – 24:04 And to them that was a waste of time. They’re, “Oh, well, no one’s ever going to look at those version numbers. Nobody cares. They don’t need to understand them.” But what we’re seeing now is, yeah, people do. Now we need to understand what those model numbers mean. And so to a casual user—really, anyone, quite honestly—a bigger number means a newer model. Therefore, that must be the best one. That’s not an irrational way to be looking at those model numbers. So why are we the ones who are wrong? 
I’m getting very fired up about this because I’m frustrated, because they’re making it so hard for me to understand as a user. Therefore, I’m frustrated. And they are the ones who are making me feel like I’m falling behind even though I’m not. They’re just making it impossible to understand. Christopher S. Penn – 24:59 Yes. And that, because technical people are making products without consulting a product manager or UI/UX designer—literally anybody who can make a product accessible to the marketplace. A lot of these companies are just releasing bare metal engines and then expecting you to figure out the rest of the car. That’s fundamentally what’s happening. And that’s one of the reasons I think I wanted to talk through this stuff about the Apple paper today on the show. Because once we understand how reasoning models actually work—that they’re doing their own first drafts and the fundamental mechanisms behind the scenes—the reasoning model is not architecturally substantially different from a non-reasoning model. They’re all just word-prediction machines at the end of the day. Christopher S. Penn – 25:46 And so, if we take the four key lessons from this episode, these are the things that will help: delete irrelevant stuff whenever you can. Start over frequently. So, start a new chat frequently, do one task at a time, and then start a new chat. Don’t keep a long-running chat of everything. And there is no such thing as, “Pay no attention to the previous stuff,” because we all know it’s always in the conversation, and the whole thing is always being repeated. So if you follow those basic rules, plus in general, use a reasoning model unless you have a specific reason not to—because they’re generally better, which is what we saw with the ArtificialAnalysis.ai data—those five things will help you get better performance out of any AI tool. Katie Robbert – 26:38 Ironically, I feel the more AI evolves, the more you have to think about your interactions with humans. So, for example, if I’m talking to you, Chris, and I say, “Here are the five things I’m thinking about, but here’s the one thing I want you to focus on.” You’re, “What about the other four things?” Because maybe the other four things are of more interest to you than the one thing. And how often do we see this trope in movies where someone says, “Okay, there’s a guy over there.” “Don’t look. I said, “Don’t look.”” Don’t call attention to it if you don’t want someone to look at the thing. I feel more and more we are just—we need to know how to deal with humans. Katie Robbert – 27:22 Therefore, we can deal with AI because AI being built by humans is becoming easily distracted. So, don’t call attention to the shiny object and say, “Hey, see the shiny object right here? Don’t look at it.” What is the old, telling someone, “Don’t think of purple cows.” Christopher S. Penn – 27:41 Exactly. Katie Robbert – 27:41 And all. Christopher S. Penn – 27:42 You don’t think. Katie Robbert – 27:43 Yeah. That’s all I can think of now. And I’ve totally lost the plot of what you were actually talking about. If you don’t want your AI to be distracted, like you’re human, then don’t distract it. Put the blinders on. Christopher S. Penn – 27:57 Exactly. We say this, we’ve said this in our courses and our livestreams and podcasts and everything. Treat these things like the world’s smartest, most forgetful interns. Katie Robbert – 28:06 You would never easily distract it. Christopher S. Penn – 28:09 Yes. And an intern with ADHD. 
You would never give an intern 22 tasks at the same time. That’s just a recipe for disaster. You say, “Here’s the one task I want you to do. Here’s all the information you need to do it. I’m not going to give you anything that doesn’t relate to this task.” Go and do this task. And you will have success with the human and you will have success with the machine. Katie Robbert – 28:30 It’s like when I ask you to answer two questions and you only answer one, and I have to go back and re-ask the first question. It’s very much like dealing with people. In order to get good results, you have to meet the person where they are. So, if you’re getting frustrated with the other person, you need to look at what you’re doing and saying, “Am I overcomplicating it? Am I giving them more than they can handle?” And the same is true of machines. I think our expectation of what machines can do is wildly overestimated at this stage. Christopher S. Penn – 29:03 It definitely is. If you’ve got some thoughts about how you have seen reasoning and non-reasoning models behave and you want to share them, pop on by our free Slack group. Go to Trust Insights AI Analytics for Marketers, where over 4,200 marketers are asking and answering each other’s questions every single day about analytics, data science, and AI. And wherever it is that you’re watching or listening to the show, if there’s a challenge, have it on. Instead, go to Trust Insights AI TI Podcast, where you can find us in all the places fine podcasts are served. Thanks for tuning in and we’ll talk to you on the next one. Katie Robbert – 29:39 Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI. Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies. Katie Robbert – 30:32 Trust Insights also offers expert guidance on social media analytics, marketing technology, and Martech selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, DALL-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members such as CMOs or data scientists to augment existing teams. Beyond client work, Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the In-Ear Insights Podcast, the Inbox Insights newsletter, the “So What?” Livestream webinars, and keynote speaking. What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights are adept at leveraging cutting-edge generative AI techniques like large language models and diffusion models, yet they excel at explaining complex concepts clearly through compelling narratives and visualizations. Katie Robbert – 31:37 Data storytelling. 
This commitment to clarity and accessibility extends to Trust Insights’ educational resources, which empower marketers to become more data-driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI.
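The context-management lessons in the transcript above (trim irrelevant context, start a fresh chat per task, one task at a time) translate directly into how you call a chat model from code. The sketch below is a minimal illustration of "one task, one fresh conversation"; it assumes an OpenAI-style chat-completions client, and the model name is illustrative rather than anything recommended in the episode.

```python
# Minimal sketch: one task per fresh conversation, with only the context that task needs.
# Assumptions: the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name is illustrative, not a recommendation from the episode.
from openai import OpenAI

client = OpenAI()

def run_task(task: str, context_snippets: list[str], model: str = "gpt-4o-mini") -> str:
    """Start a brand-new conversation for a single task.

    Instead of appending to one long-running chat, build a short message list
    containing only the instructions and the context this task actually needs.
    """
    context = "\n\n".join(context_snippets)  # keep this small: delete anything irrelevant
    messages = [
        {"role": "system", "content": "You are a careful assistant. Do exactly one task."},
        {"role": "user", "content": f"Task: {task}\n\nRelevant context:\n{context}"},
    ]
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content

# Each task gets its own fresh call; nothing from task 1 leaks into task 2.
summary = run_task("Summarize the findings in two sentences.", ["<paste only the findings here>"])
rewrite = run_task("Rewrite the summary for a non-technical executive.", [summary])
```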

Dynamic Women®
EnhanceMe - Everything Begins with Relationships with Mariana Caldas and Sheyla Amaral (DW316)

Dynamic Women®

Play Episode Listen Later Jun 10, 2025 17:14


There's nothing more inspiring than meeting two female founders of a business. You're going to want to listen to this episode to know how everything begins with relationships. Our host, Diane Rolston, interviews Sheyla and Mariana, co-founders of EnhanceMe, about how they turned their personal dating struggles into a revolutionary digital platform that combines psychology, technology, and personal branding to enhance human connections.

Listen to learn these key takeaways:
- How to identify authentic business opportunities by solving problems you've personally experienced
- How to create genuine dating profiles that showcase your authentic self rather than superficial attractions
- The importance of self-reflection and understanding what you truly want in relationships before seeking connections
- Insights into building a tech startup without prior business knowledge by seeking mentorship and taking action despite fear
- Practical strategies for expanding from one market to broader business applications
- How to leverage AI and technology to create more personalized and efficient services
- The value of receiving feedback without taking it personally in both business and dating
- Gain courage to pursue your business idea even when you don't have all the answers from the start

Mariana and Sheyla's Bio:
Mariana Caldas: With a background in digital product strategy and a passion for creative writing, Mariana spent the last decade blending storytelling, technology, and emotional insight to create products that feel personal and purposeful. At EnhanceMe, Mariana acts as our Digital Product Specialist, shaping our marketing strategy, content, and leading our team of counsellors.
Sheyla Amaral: A graphic designer with 14 years of experience as a product designer (UI/UX design), going from publishers (Vogue, GQ, Glamour) to fintech, education, and now EnhanceMe. At EnhanceMe, she oversees social media, photography, and visual assets.

Social Media links:
Facebook: https://www.facebook.com/enhanceme.consulting
LinkedIn: https://www.linkedin.com/company/enhanceme-consulting/
Instagram: https://www.instagram.com/enhanceme.consulting/

Do you prefer reading blogs or watching videos? Watch Diane's YouTube videos here https://www.youtube.com/@CoachDianeRolston or read her blogs here https://www.dianerolston.com/blog

Join us at the launch party of the Dynamic Women® Leadership Secrets happening in June!

Lean Built: Manufacturing Freedom
Constraints Give You More Freedom | Lean Built - Manufacturing Freedom E97

Lean Built: Manufacturing Freedom

Play Episode Listen Later Jun 2, 2025 51:52


Andrew and Jay cover everything from the real meaning of 5S (and why people forget “shine”) to the subtle art of good UI/UX—both in software and hardware. Along the way, they share lessons learned from designing better shop tools, organizing workspaces, and reducing friction in daily tasks.

Andrew recounts how a Memorial Day spent reorganizing led to a color-coded system for trash bins, while Jay explains how constraints—not extra time—often lead to better results. They also discuss bringing on high school interns, building culture through curiosity, and how thoughtful design—like a volume knob that just feels right—makes work smoother and more human.

Code for Thought
[EN] Let's Go Atomic (with Design) - Brad Frost

Code for Thought

Play Episode Listen Later May 5, 2025 41:41


Brad Frost is the author of a book called Atomic Design, in which he tries to help UI/UX designers improve their workflow and approach and create digital apps. We also talk about the changing roles of designers and design in a world where we get new gadgets all the time. As Brad says in his book: let's go atomic.

Links:
https://bradfrost.com Brad's home page
https://atomicdesign.bradfrost.com the book
https://www.webcomponents.org
https://www.w3.org/History.html a bit of history re the WWW
https://www.w3.org/History/1989/proposal.html

Get in touch
Thank you for listening! Merci de votre écoute! Vielen Dank für´s Zuhören!

Contact Details / Coordonnées / Kontakt:
Email: peter@code4thought.org
UK RSE Slack (ukrse.slack.com): @code4thought or @piddie
US RSE Slack (usrse.slack.com): @Peter Schmidt
Mastodon: https://fosstodon.org/@code4thought or @code4thought@fosstodon.org
Bluesky: https://bsky.app/profile/code4thought.bsky.social
LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal profile)
LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought profile)

This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/

B&D Lab / 設計系·大學生
【From CMYK to Figma - Going to the US to Study UI/UX】

B&D Lab / 設計系·大學生

Play Episode Listen Later May 3, 2025 40:26


Support this show: https://open.firstory.me/user/cka82dsbmko7u0873o0b8ij9b Leave a comment and share your thoughts: https://open.firstory.me/user/cka82dsbmko7u0873o0b8ij9b/comments

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)

In this conversation, Jesse Flores discusses the evolution of web development in the context of AI, emphasizing the need for websites to cater to both humans and AI agents. He introduces the concept of 'smart sites' designed for AI interactions, explores the technical aspects of AI agent communication, and discusses the future of websites as AI agents become more prevalent. The conversation also touches on the role of APIs and the handling of unstructured data in AI interactions. In this conversation, Krish Palaniappan and Jesse Flores discuss the evolution of databases, the importance of choosing the right database for specific tasks, and the impact of AI on development. They explore how UI/UX design must adapt to accommodate conversational interfaces and the disparities in technology adoption across the globe. Jesse shares insights on the future of development, emphasizing the need for a philosophical approach to technology, and concludes with a personal touch about his favorite foods.
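The "smart site" idea described here (a site that serves humans on one path and AI agents on another) can be pictured as a small structured endpoint published alongside the human-facing pages. The sketch below is only an illustration of that idea: the framework choice (FastAPI), route names, and fields are all assumptions, not anything Jesse Flores specifies in the episode.

```python
# Hypothetical sketch of a "smart site": HTML pages stay for humans, while AI
# agents get the same facts as structured JSON they can parse reliably.
# FastAPI is an assumed framework choice; routes and fields are illustrative.
from fastapi import FastAPI

app = FastAPI(title="Example smart site")

# In a real site this would come from the same CMS/database that renders the HTML.
STORE_INFO = {
    "name": "Example Cafe",
    "hours": {"mon-fri": "07:00-18:00", "sat-sun": "08:00-16:00"},
    "services": ["dine-in", "pickup", "delivery"],
}

@app.get("/")
def homepage() -> dict:
    # Placeholder for the human-facing page (normally rendered HTML).
    return {"message": "Welcome! Agents can read /agent/info for structured details."}

@app.get("/agent/info")
def agent_info() -> dict:
    # Structured, stable facts for AI agents: no layout or marketing copy to parse.
    return STORE_INFO
```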

Türkiye'de Dijital Pazarlama
Are You a Business Graduate or a Developer? Which Majors Are Taking Digital Marketing by Storm?

Türkiye'de Dijital Pazarlama

Play Episode Listen Later Apr 29, 2025 15:34


While the digital marketing field in Turkey grows rapidly, the number of young people heading into the sector increases every day. So which university programs did the people working in this field graduate from? Does digital marketing require special training, or can anyone find a place in this world?

In this episode, we lay out a comprehensive roadmap for young people who want to build a digital marketing career in Turkey. Which graduates are preferred most, why is this field chosen, and who is suited to a digital marketing career? All the details are here.

Which Majors Choose Digital Marketing?
1. Business Administration (22%): A large share of digital marketing professionals are business graduates. The broad perspective the program offers (marketing, management, finance, entrepreneurship) helps them stand out in strategic campaign management, media buying, and performance analysis.
2. Communications, Public Relations, and Advertising (18%): Communications graduates are heavily represented in creative areas such as content production, social media management, copywriting, and brand communication. They have a strong foundation for shaping a brand's voice and story on digital platforms.
3. Software and Computer Engineering (12%): People with a software background take on important roles in the technical side of digital marketing, such as SEO, SEM, web analytics, technical automation, and CRM systems. They are also highly effective in data-driven areas such as growth hacking.
4. Graphic Design, Visual Communication, New Media (10%): These graduates run the visual side of digital marketing, standing out in roles that manage the visual touchpoints with users: social media design, video content, and UI/UX design.
5. Psychology and Sociology (8%): These graduates make serious contributions to understanding user behavior, neuromarketing techniques, UX writing, and persona analysis. They are preferred for work that focuses on the consumer's mental processes.
The remaining 30% come from a wide range of fields: law, literature, engineering, health programs, and more.

Why Do These Graduates Choose Digital Marketing?
The appealing sides of digital marketing attract graduates from different disciplines:
• A rapidly growing sector: As of 2023, e-commerce volume in Turkey exceeded 1.85 trillion TL. That growth has made digital marketing an indispensable part of the business world.
• Flexible working options: Remote work models, freelance projects, and digital agency culture offer freedom in how you work.
• Open to career changers: Because the skills can be learned quickly, people can move into digital marketing from other industries with relative ease.
• Creativity + technology: One of the rare sectors that lets you think creatively while also working with technical tools.
• A constantly updated field: New algorithms, ad models, tools, and AI integrations every month keep the sector dynamic.

Who Should Pursue a Digital Marketing Career?
Having the following traits is a big advantage for success in this sector:
Analytical thinkers: People curious about Google Analytics, ad dashboard analysis, and conversion rate tracking can be effective on the performance side.
Creative minds: Creative people stand out in work such as copywriting, campaign concepting, and content production.
Strong communicators: Communication skills matter in customer relationship management, giving and receiving briefs, and coordinating within a team.
Lifelong learners: People who can follow constantly changing trends and adapt to new tools are the ones who last in this sector.

Business, software, communications, graphic design, or psychology: whatever your degree, it is possible to chart a path in digital marketing. With both its technical and creative sides, the sector offers young people a very wide range of opportunities. On top of that, educational resources have never been easier to reach. All you need to do is start learning.

Voices of Search // A Search Engine Optimization (SEO) & Content Marketing Podcast

SEO job seekers face intense competition in today's market. Nick LeRoy, founder of SEOjobs.com, shares insights from his comprehensive industry report on the most in-demand SEO skills. Data analytics, technical SEO, and stakeholder management correlate most strongly with higher salaries, while AI expertise and UI/UX knowledge are increasingly sought after by employers. LeRoy recommends candidates develop a clear point of view on AI implementation and focus on building skills they can control to stand out in application processes.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Coffee Sometimes
SEO for Coffee Companies | Website Changes You Need to Make Today – 04/09/2025

Coffee Sometimes

Play Episode Listen Later Apr 14, 2025 71:59


JOIN THE VALOR COFFEE COMMUNITY – Courses, Exclusive Videos, PDFs, Spreadsheets and more:⁠ https://community.valor.coffee/landing-page?utm_source=spotify&utm_campaign=ep138Thanks for listening, following/subscribing, giving us a good review, and sharing with your friends on social media. It goes a long way!We had the pleasure of sitting down again with our buddy Kent from LocalEyes, our go-to SEO expert and a wizard behind the curtain when it comes to how coffee companies can actually win on Google. Whether you're running a cart, a café, or selling beans online, this episode is for you. We define key SEO terms (H1s, meta descriptions, and more), walk through real listener-submitted websites, and share instant improvements that can help your pages rank better today. Kent also gives us a sneak peek into the brand new SEO course he filmed for our Valor Coffee Community. It's packed with insight, as we roast some websites (gently) and break down exactly what coffee companies are doing right and wrong. Whether you're DIY-ing your Squarespace site or working with an agency, this is one you don't want to miss.Kent's Socials: https://taplink.cc/kent.fyi*If you purchase something through one of our links, we may be entitled to a share of the sale*Buy Valor Coffee: ⁠https://valor.coffee/shop⁠Watch on Youtube: ⁠https://youtube.com/valorcoffee16⁠Want to become a Wholesale Partner? Email us at wholesale@valor.coffee to set up an account!Want to send us coffee? Have a question you want to answer on the show? Send us an email to info@valor.coffeeWe're partnered with Clive to bring you sweet deal at a discounted rate! Use Discount Code VALOR5 at checkout for 5% off Mahlkonig, Anfim and Eureka products!Shop Clive products here: ⁠https://clivecoffee.com?sca_ref=5315485.6axWuRlcEr⁠Want to get your business in front of more people? We partnered with Local Eyes Growth to grow our business through SEO and the results have been incredible. Local Eyes is offering a FREE backlink ($300 value) to Valor Coffee Podcast listeners who partner through our exclusive link. Visit ⁠https://localeyesgrowth.com/valor⁠ to get the ball rolling!Follow the Valor Coffee Podcast on Instagram: ⁠http://instagram.com/valorcoffeepod⁠Follow Valor on Instagram: ⁠http://instagram.com/valor.coffee⁠Subscribe to Riley's YouTube Channel: ⁠https://youtube.com/@rileywestbrook⁠Follow Riley:  ⁠https://instagram.com/rileywestbrook⁠Follow Ross: ⁠https://instagram.com/rosswalters⁠Follow Ethan's Parody Account:  ⁠https://instagram.com/ethanrivers77700:00:00 Start00:00:33 Intro 00:06:30 What is SEO? 00:10:10 H1? 00:14:19 UI/UX? 00:17:13 3 Things that make your website rank higher 00:27:07 How much is SEO Set It and Forget It? 00:29:08 Local vs E-Comerce SEO 00:31:21 Toxic Back-links?!?!?!?!?!?!? 00:32:43 Website SHRED (buckle up) 00:40:00 Social Proof 00:45:23 Google Busniess Profile 00:46:22 The PERFECT Google Review 00:48:16 Website #2 00:59:54 Website 3 (the reckoning)

佐編茶水間
#322 #HexSchool | Build Websites or Do Social Media? How a Complete Beginner Can Quickly Learn Website-Building Logic and Techniques in 30 Days

佐編茶水間

Play Episode Listen Later Apr 13, 2025 48:26


Your Drone Questions. Answered.
YDQA: Ep 100- "How Are Drone Tethers Being Used?"

Your Drone Questions. Answered.

Play Episode Listen Later Apr 10, 2025 15:46


Welcome back to Your Drone Questions. Answered. In this episode, we're diving into the exciting world of tethered drones—how they're being used today and where they're headed in the future.

We cover:
- What tethered drones are and how they differ from traditional UAVs
- Legal and regulatory considerations in the U.S. as of Q1 2025
- The main benefits of tethering (continuous power, data streaming, mobility)
- Real-world use cases: public safety, temporary security, event coverage, and broadcast
- Freefly's innovative Flying Sun product for mobile aerial lighting
- Challenges and technical lessons from building tethered drone systems
- The future of tether tech, including UI/UX advancements and industry-specific solutions

The Effective Statistician - in association with PSI
R-shiny - how to set it up effectively and avoid common mistakes

The Effective Statistician - in association with PSI

Play Episode Listen Later Mar 31, 2025 26:34


In this episode, I'm once again joined by Daniel Sabanés Bové for a deep dive into one of the most impactful tools for statisticians working with data visualization—R-Shiny. We explore how interactive data visualizations can help you iterate faster, collaborate better across functions, and focus more on the actual scientific questions rather than just coding. Daniel shares some excellent examples from clinical trials and gives practical tips on how to avoid common pitfalls when building Shiny apps. Whether you're designing your first app or maintaining a more complex one, you'll find plenty of value in this conversation—from best practices around UI/UX design to strategies for modular development and testing.
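The episode discusses R Shiny, but the setup advice is easier to see in a skeleton: keep the UI definition and the server logic cleanly separated so the reactive code stays testable and can grow module by module. Since the sketches on this page use Python, the minimal skeleton below uses Shiny for Python; treat it as an assumed analogue of the R structure discussed, not code from the episode.

```python
# Minimal Shiny for Python skeleton (assumed analogue of the R Shiny structure
# discussed in the episode): the UI definition and the server logic are kept
# separate, which makes the app easier to test and to extend module by module.
from shiny import App, render, ui

app_ui = ui.page_fluid(
    ui.h2("Trial overview"),
    ui.input_slider("n", "Number of simulated patients", min=10, max=500, value=100),
    ui.output_text_verbatim("summary"),
)

def server(input, output, session):
    @output
    @render.text
    def summary():
        # Server logic reads inputs reactively; heavy computation would live in
        # plain, separately testable functions rather than inside the app itself.
        return f"Simulating {input.n()} patients."

app = App(app_ui, server)  # run with: shiny run app.py
```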

THE CREATIVE SHIT SHOW
Ep 37: Thriving or Surviving?

THE CREATIVE SHIT SHOW

Play Episode Listen Later Mar 30, 2025 0:54


In this episode of The Creative Shit Show, we dive deep into surviving—and thriving—in today's wild economic ride. From the feast-or-famine freelancing grind and pricing pressures to standing out with a killer portfolio, we've got you covered. We unpack how to niche down, turn past clients into a referral goldmine, and position your work to show real value. Plus, we tackle AI's big shake-up—Midjourney, Canva, and beyond—and how designers can lean into strategy, intuition, and storytelling to stay ahead. Tune in for raw talk on networking, portfolio power moves, and the skills (hello, UI/UX and brand strategy) that'll keep you in demand. Chaos? Sure. Opportunity? Hell yes. Listen now.

You Know I'm Right
You Know I'm Right, Episode 329: Harlem Globetrotter TNT Lister

You Know I'm Right

Play Episode Listen Later Mar 26, 2025 21:22


On the 329th episode of You Know I'm Right, Nick Durst and Joe Calabrese are joined by Harlem Globetrotter TNT Lister to discuss:
- Growing up in Colorado Springs
- Starting basketball in middle school while also excelling in volleyball and track and field, and setting Colorado state records in both the long jump and triple jump
- Starting her collegiate career at New Mexico before transferring to Temple, coached by the legendary Dawn Staley; what is she like?
- Becoming, in 2011, the first female Harlem Globetrotter since 1993; what was the process of becoming a Globetrotter?
- Side hobbies including art and UI/UX design
- What were her original career aspirations?
- Globetrotters are constantly performing all over the country and world; how does traveling affect personal lives?
- Favorite cities she has traveled to
- Favorite people she had the chance to meet
- Fun experiences made possible by being a Harlem Globetrotter
- You Know I'm Right moment

Product-Led Podcast
Top Lessons from 12 years of Building and Selling a 7-figure PLG Business

Product-Led Podcast

Play Episode Listen Later Mar 19, 2025 30:37


Discover the secrets to building and scaling a bootstrapped product-led business to seven-figure success. Our guest is Elie Khoury, the founder and CEO of Woopra, a company specializing in customer journey and product analytics.  Elie shares the nuances of when a product-led model makes sense (and when it doesn't), emphasizing the importance of swiftly establishing the perception of value. He also sheds light on AI's role in driving product-led growth strategies. Key Takeaways: [01:40] Strategic product-led evolution [07:50] AI Integration for quick value [15:00] AI as the next UI/UX iteration [18:00] Data control and AI [20:20] Reflecting on mistakes and learnings [28:45] Future of AI and product-led businesses About Elie Khoury: Elie is a product-first CEO who believes innovation and user experience are, above all else, the keys to creating great companies. Sales, marketing, recruiting, and every other aspect of a company is meaningless without an exceptional product. He co-founded Woopra, Inc., which Appier acquired in 2022.  Links:  Elie Khoury | LinkedIn

Web and Mobile App Development (Language Agnostic, and Based on Real-life experience!)

In this conversation, Krish Palaniappan reviews the Panera Bread mobile app, focusing on its user interface, experience, and functionality. He discusses the importance of user loyalty and how it affects app usage. Throughout the review, he identifies various bugs, issues with the app's workflow, and areas for improvement, particularly in terms of speed and user interaction. The conversation highlights the significance of a well-designed app in enhancing customer satisfaction and loyalty. In this conversation, Krish Palaniappan reviews the Panera app, discussing various aspects such as coffee subscriptions, user experience challenges, customization features, rewards and offers, and overall app navigation. He highlights the importance of user interface design and the need for improvements in speed and personalization. The conversation culminates in a reflection on the app's usability and the feedback process.

Snowpal Products:
Backends as Services on AWS Marketplace
Mobile Apps on App Store and Play Store
Web App
Education Platform for Learners and Course Creators

Value Driven Data Science
Episode 54: The Hidden Productivity Killer Most Data Scientists Miss

Value Driven Data Science

Play Episode Listen Later Mar 5, 2025 23:29


Genevieve Hayes Consulting Episode 54: The Hidden Productivity Killer Most Data Scientists Miss Why do some data scientists produce results at a rate 10X that of their peers?Many data scientists believe that better technologies and faster tools are the key to accelerating their impact. But the highest-performing data scientists often succeed through a different approach entirely.In this episode, Ben Johnson joins Dr Genevieve Hayes to discuss how productivity acts as a hidden multiplier for data science careers, and shares proven strategies to dramatically accelerate your results.This episode reveals:Why lacking clear intention kills productivity — and how to ensure every analysis drives real decisions. [02:11]A powerful “storyboarding” framework for turning vague requests into actionable projects. [09:51]How to deliver results faster using modern data architectures and raw data analysis. [13:19]The game-changing mindset shift that transforms data scientists from order-takers into trusted strategic partners. [17:05] Guest Bio Ben Johnson is the CEO and Founder of Particle 41, a development firm that helps businesses accelerate their application development, data science and DevOps projects. Links Connect with Ben on LinkedIn Connect with Genevieve on LinkedInBe among the first to hear about the release of each new podcast episode by signing up HERE Read Full Transcript [00:00:00] Dr Genevieve Hayes: Hello and welcome to Value Driven Data Science, the podcast that helps data scientists transform their technical expertise into tangible business value, career autonomy, and financial reward. I’m Dr. Genevieve Hayes, and today I’m joined by Ben Johnson, CEO and founder of Particle 41, a development firm that helps businesses accelerate their application development, data science, and DevOps projects.[00:00:30] In this episode, we’ll discuss strategies for accelerating your data science impact and results without sacrificing technical robustness. So get ready to boost your impact. Earn what you’re worth and rewrite your career algorithm. Ben, welcome to the show.[00:00:48] Ben Johnson: Yeah, thank you for having me.[00:00:50] Dr Genevieve Hayes: One of the most common misconceptions I see about data scientists is the mistaken belief that their worth within a business is directly linked to the technical complexity of the solutions they can produce.[00:01:04] And to a certain extent, this is true. I mean, if you can’t program, fit a model, or perform even the most basic statistical analysis, realistically, your days as a data scientist are probably numbered. However, while technical skills are certainly necessary to land a data science job, The data scientists I see making the biggest impact are the ones who are not necessarily producing the most complex solutions, but who can produce solutions to the most pressing business problems in the shortest possible time.[00:01:41] So in that sense, productivity can be seen as a hidden multiplier for data science careers. Ben, as the founder of a company that helps businesses accelerate their data science initiatives, it’s unsurprising that one of your areas of interest is personal productivity. Based on your experience, What are some of the biggest productivity killers holding data scientists back?[00:02:11] Ben Johnson: I don’t know for others. I know for myself that what kills my productivity is not having an intention or a goal or a direct target that I’m trying to go for. 
So when we solve the science problems, we’re really trying to figure out, like, what is that hunt statement or that question that key answer you know, the question that will bring the answer.[00:02:33] And also, what is the right level of information that would handle that at the asker’s level? So the ask is coming from a context or a person. And so we can know a lot. If that person is a fellow data scientist, then obviously we want to give them data. We want to answer them with data. But if that’s a results oriented business leader, then we need to make sure that we’re giving them information.[00:02:57] And we. Are the managers of the data, but to answer your question, I think that the biggest killer to productivity is not being clear on what question are we trying to answer?[00:03:08] Dr Genevieve Hayes: That, resonates with my own experience. One of the things I encountered early in my data science career was well, to take a step back. I originally trained as an actuary and worked as an actuary, and I was used to the situation where your boss would effectively tell you what to do. So, go calculate, calculate.[00:03:28] premiums for a particular product. So when I moved into data science, I think I expected the same from my managers. And so I would ask my boss, okay, what do you want me to do? And his answer would be something like, Oh here’s some data, go do something with it. And you can probably imagine the sorts of solutions that we got myself and my team would come up with something that was a model that looks like a fun fit[00:03:59] and those solutions tended to go down like a lead balloon. And it was only after several failures along those lines that it occurred to me, , maybe we should look at these problems from a different, point of view and figure out what is it that the senior management actually want to do with this data before starting to build a particular model from it.[00:04:24] Ben Johnson: Yeah. What decision are you trying to make? Just kind of starting with like the end in mind or the result in mind, I find in any kind of digital execution there are people who speak results language and there are people who speak solutions language. And when we intermix those two conversations,[00:04:41] it’s frustrating, it’s frustrating for the solution people to be like, okay, great. When are you going to give it to me? And it’s frustrating for the business folks, like hey, when am I going to get that answer when we want to talk about the solution? So I found like bifurcating like, okay, let’s have a results or planning discussion separate from a solution and asking for that right to proceed.[00:05:02] In the way that we communicate is super helpful., what your share reminds me of is some of the playbooks that we have around data QA, because in those playbooks, we’re doing analysis just for analysis sake. I feel like we’re looking for the outliers.[00:05:18] Okay. So if we look at this metric, these are the outliers. And really what we’re doing is we’re going back to the, originators of the data and say, like, sanity, check this for us. We want to run through a whole set of sanity checks to make sure that the pipeline that we’re about to analyze makes sense.[00:05:34] Are there any other exterior references that we can compare this to? And I do know that the first time we were participating in this concept of data QA, not having that playbook Was a problem, right? Like, well, okay. Yeah, the data is there. It’s good. 
It’s coming in, but you know, to really grind on that and make sure that it was reflective of the real world was an important step.[00:05:57] Dr Genevieve Hayes: So QA, I take your meaning quality assurance here? Is that right?[00:06:02] Ben Johnson: Yes. That’s the acronym quality assurance, but testing and doing QA around your data pipelines.[00:06:09] Dr Genevieve Hayes: Okay, so I get it. So actually making sure the pipelines work. And if you don’t understand what is it that you’re looking for with regard to performance, then you can end up going off in the wrong direction. Is that correct?[00:06:23] Ben Johnson: So if you were analyzing sales data, you would want to make sure that your totals reflected the financial reports. You just want to make sure that what you’ve. Accumulated in your analysis environment is reflective of the real world. There’s nothing missing. It generally makes sense. We just haven’t introduced any problem in just the organizing and collection of the data.[00:06:45] Dr Genevieve Hayes: Yeah, yeah. From my background in the insurance industry, those were all the sorts of checks that we used to have to do with the data as well.[00:06:52] Ben Johnson: Well, and oftentimes the folks that are asking these hard questions, they’re not asking the questions because they have any idea how clean the data they’ve collected. They just think there might be a chance. It’s like the dumb and dumber, you know, okay, so we think we have a chance, you know anyways awful movie reference, but they think that there might be a possibility that the answer to all of their questions or this hard decision that they need to make regularly is somewhere in that pile of stuff.[00:07:21] What we call a QA analysis Also is checking the data’s integrity if it’s even capable to solve the problem. So I think that’s a great first step and that sometimes that’s just kind of analysis for analysis sake or feels that way.[00:07:37] Dr Genevieve Hayes: One of the things you’ve touched on several times is the idea of the results oriented people and the solutions oriented people and I take it with the solutions oriented people, you’re talking about people like the data scientists. When the data scientists are talking to those results oriented people, Is there a framework that they can follow for identifying what sorts of results those results oriented people are looking for?[00:08:08] Ben Johnson: It’s very similar in the way that you approach like a UI UX design. We’ve taken kind of a storyboard approach, storyboard approach to what they want to see. Like, okay, What is the question? What are you expecting the answer to be? Like, what do you think would happen?[00:08:25] And then what kind of decisions are you going to do as a result of that? And you had some of those things as well. But kind of storyboarded out what’s the journey that they’re going to take, even if it’s just a logical journey through this data to go affect some change.[00:08:41] Dr Genevieve Hayes: So do you actually map this out on a whiteboard or with post it notes or something? So literally building a storyboard?[00:08:48] Ben Johnson: Most of the time , it’s bullets. It’s more of like written requirements. But when we think of it, we think of it , in a storyboard and often it’ll turn into like a PowerPoint deck or something because we’re also helping them with their understanding of the funding of the data science project, like connecting ROI and what they’re trying to do.[00:09:10] So yeah. 
Yeah, our firm isn’t just staff augmentation. We want to take a larger holistic ownership approach of the mission that we’re being attached to. So this is critical to like, okay, well, we’re going to be in a data science project together. Let’s make sure that we know what we’re trying to accomplish and what it’s for.[00:09:29] Because, you know, if you’re working on a complex project and six months in everybody forgets Why they’ve done this, like why they’re spending this money oftentimes you need to remind them and, show them where you are in the roadmap to solving those problems.[00:09:44] Dr Genevieve Hayes: With the storyboard approach, can you give me an example of that? Cause I’m still having a bit of trouble visualizing it.[00:09:51] Ben Johnson: Yeah, it’s really just a set of questions. What are you trying to accomplish? What do you expect to have happen? Where are you getting this data? It’s , just a discovery survey that we are thinking about when we’re establishing the ground rules of the particular initiative.[00:10:08] Dr Genevieve Hayes: And how do you go from that storyboard to the solution?[00:10:12] Ben Johnson: That’s a great question. So the solution will end up resolving in whatever kind of framework we’re using data bricks or whatever it’ll talk about the collection, the organization and the analysis. So we’ll break down how are we going to get this data is the data already in a place where we can start messing with it.[00:10:32] What we’re seeing is that a lot of. And I kind of going deep on the collection piece because that’s I feel like that’s like 60 percent of the work. We prefer a kind of a lake house type of environment where we’ll just leave a good portion of the data in its raw original format, analyze it.[00:10:52] Bring it into the analysis. And then, of course, we’re usually comparing that to some relational data. But all that collection, making sure we have access to all of that. And it’s in a in a methodology and pipelines that we can start to analyze it is kind of the critical first step. So we want to get our hands around that.[00:11:10] And then the organization. So is there, you know, anything we need to organize or is a little bit messy? And then what are those analysis? Like, what are those reports that are going to be needed or the visibility, the visualizations that would then be needed on top of that? And then what kind of decisions are trying to be made?[00:11:28] So that’s where the ML and the predictive analytics could come in to try to help assist with the decisions. And we find that most data projects. Follow those, centralized steps that we need to have answers for those.[00:11:43] Dr Genevieve Hayes: So a question that might need to be answered is, how much inventory should we have in a particular shop at a particular time? So that you can satisfy Christmas demand. And then you’d go and get the data about[00:11:59] Ben Johnson: Yeah. The purchase orders or yeah. Where’s the data for your purchase orders? Do you need to collect that from all your stores or do you already have that sitting in some place? Oh, yeah. It’s in all these, you know, disparate CSVs all over the place. We just did a. project for a leading hearing aid manufacturer.[00:12:18] And most of the data that they wanted to use was on a PC in the clinics. 
So we had to devise a collection mechanism in the software that the clinics were using to go collect all that and regularly import that into a place where We could analyze it, see if it was standardized enough to go into a warehouse or a lake.[00:12:39] And there were a lot of standardization problems, oddly, some of the clinics had kind of taken matters into their own hands and started to add custom fields and whatnot. So to rationalize all of that. So collection, I feel like is a 60 percent of the problem.[00:12:54] Dr Genevieve Hayes: So, we’ve got a framework for increasing productivity by identifying the right problem to solve, but the other half of this equation is how do you actually deliver results in a rapid fashion. because, as you know, A result today is worth far more than a result next year. What’s your advice around getting to those final results faster?[00:13:19] Ben Johnson: So That’s why I like the lake house architecture. We’re also finding new mechanisms and methodology. Some, I can’t talk about where they’re rather than taking this time to take some of the raw data and kind of continuously summarize it. So maybe you’re summarizing it and data warehousing it, but we like the raw data to stay there and just ask it the questions, but it takes more time and more processing power.[00:13:47] So what I’m seeing is we’re often taking that and organizing it into like a vector database or something that’s kind of right for the analysis. We’re also using vector databases in conjunction with AI solutions. So we’re asking the, we’re putting, we’re designing the vector database around the taxonomy, assuming that the user queries are going to match up with that taxonomy, and then using the LLM to help us make queries out of the vector database, and then passing that back to the LLM to test.[00:14:15] Talk about it to make rational sense about the story that’s being told from the data. So one way that we’re accelerating the answer is just to ask questions of the raw data and pay for the processing cost. That’s fast, and that also allows us to say, okay, do we have it?[00:14:32] Like, are we getting closer to having something that looks like the answer to your question? So we can be iterative that way, but at some point we’re starting to get some wins. In that process. And now we need to make those things more performant. And I think there’s a lot of innovation still happening in the middle of the problem.[00:14:51] Dr Genevieve Hayes: Okay, so you’re starting by questioning the raw data. Once you realize that you’re asking the right question and getting something that the results oriented people are looking for, would you then productionize this and start creating pipelines and asking questions of process data? Yeah.[00:15:11] Ben Johnson: Yeah. And we’d start figuring out how to summarize it so that the end user wasn’t waiting forever for an answer.[00:15:17] Dr Genevieve Hayes: Okay, so by starting with the raw data, you’re getting them answers sooner, but then you can make it more robust.[00:15:26] Ben Johnson: That’s right. Yes. More robust. More performant and then, of course, you could then have a wider group of users on the other side consuming that it wouldn’t just be a spreadsheet. It would be a working tool.[00:15:37] Dr Genevieve Hayes: Yeah, it’s one of the things that I was thinking about. I used to have a boss who would always say fast, cheap and good, pick two. 
Meaning that, you can have a solution now and it can be cheap, but it’s going to come at the cost of And it sounds like you focus on Fast and cheap first, with some sacrifice of quality because you are dealing with raw data.[00:16:00] But then, once you’ve got something locked in, you improve the quality of it, so then technical robustness doesn’t take a hit.[00:16:09] Ben Johnson: Yeah, for sure. I would actually say in the early stage, you’re probably sacrificing the cheap for good and fast because you’re trying to get data right off the logs, right off your raw data, whatever it is. And to get an answer really quickly on that without having to set up a whole lot of pipeline is fast.[00:16:28] And it’s it can be very good. It can be very powerful. We’ve seen many times where it like answers the question. You know, the question of, is that data worth? Mining further and summarizing and keeping around for a long time. So in that way, I think we addressed the ROI of it on the failures, right.[00:16:46] Being able to fail faster. Oh yeah. That data is not going to answer the question that we have. So we don’t waste all the time of what it would have been to process that.[00:16:55] Dr Genevieve Hayes: And what’s been the impact of taking this approach for the businesses and for the data scientists within your organisation who are taking this approach?[00:17:05] Ben Johnson: I think it’s the feeling of like. of partnership with us around their data where we’re taking ownership of the question and they’re giving us access to whatever they have. And there’s a feeling of partnership and the kind of like immediate value. So we’re just as curious about their business as they are.[00:17:27] And then we’re working shoulder to shoulder to help them determine the best way to answer those questions.[00:17:32] Dr Genevieve Hayes: And what’s been the change in those businesses between, before you came on board and after you came on board?[00:17:39] Ben Johnson: Well, I appreciate that question. So with many of the clients, they see that, oh, this is the value of the data. It has unlocked this realization that I, in the case of the hearing aid manufacturer that we work with, they really started finding that they could convert more clients and have a better brand relationship by having a better understanding of their data.[00:18:03] And they were really happy that they kept it. You know, 10 years worth of hearing test data around to be able to understand, their audience better and then turn that into. So they’ve seen a tremendous growth in brand awareness and that’s resulted in making a significant dent in maintaining and continuing to grow their market share.[00:18:26] Dr Genevieve Hayes: So they actually realize the true value of their data.[00:18:30] Ben Johnson: That’s right. And then they saw when they would take action on their data they were able to increase market share because they were able to affect people that truly needed to know about their brand. And like we’re seeing after a couple of years, their brand is like, you don’t think hearing aids unless you think of this brand.[00:18:48] So it’s really cool that they’ve been able to turn that data by really, Talking to the right people and sending their brand message to the right people.[00:18:56] Dr Genevieve Hayes: Yeah, because what this made me think of was one of the things I kept encountering in the early days of data science was a lot of Senior decision makers would bring in data scientists and see data science as a magic bullet. 
And then because the data scientists didn’t know what questions to answer, they would not be able to create the value that had been promised in the organization.[00:19:25] And the consequence after a year or two of this would be the senior decision makers would come to the conclusion that data science is just a scam. But it seems like by doing it right, you’ve managed to demonstrate to organizations such as this hearing aid manufacturer, that data science isn’t a scam and it can actually create value.[00:19:48] Ben Johnson: Absolutely. I see data sciences anytime that that loop works, right? Where you have questions. So even I have a small client, small business, he owns a glass manufacturing shop. And. The software vendor he uses doesn’t give him a inexpensive way to mark refer like who his salespeople are,[00:20:09] so he needs a kind of a salesperson dashboard. What’s really cool is that his software gives them, they get full access to a read only database. So putting a dashboard on top of. His data to answer this salesperson activities and commissions and just something like that. That’s data science.[00:20:28] And now he can monitor his business. He’s able to scale using his data. He’s able to make decisions on how many salespeople should I hire, which ones are performing, which ones are not performing. How should I pay them? That’s a lot of value to us as data scientists. It just seems like we just put a dashboard together.[00:20:46] But for that business, that’s a significant capability that they wouldn’t have otherwise had.[00:20:52] Dr Genevieve Hayes: So with all that in mind, what is the single most important change our listeners could make tomorrow? to accelerate their data science impact and results.[00:21:02] Ben Johnson: I would just say, be asking that question, Like what question am I trying to answer? What do you expect the outcome to be? Or what do you think the outcome is going to be? So that I’m not biased by that, but I’m sanity checking around that. And then what decisions are you going to make as a result?[00:21:19] I think always having that like in the front of your mind would help you be more consultative and help you work according to an intention. And I think that’s super helpful. Like don’t let the client Or the customer in your case, whether that be an internal person give you that assignment, like, just tell me what’s there.[00:21:38] Right. I just want insights. I think the have to push our leaders to give us a little more than that.[00:21:46] Dr Genevieve Hayes: the way I look at it is, don’t treat your job as though you’re someone in a restaurant who’s just taking an order from someone.[00:21:53] Ben Johnson: Sure.[00:21:54] Dr Genevieve Hayes: Look at it as though you’re a doctor who’s diagnosing a problem.[00:21:58] Ben Johnson: Yeah. And the data scientists that I worked with that have that like in their DNA, like they just can’t move forward unless they understand why they’re doing what they’re doing have been really impactful. In the organization, they just ask great questions and they quickly become an essential part of the team.[00:22:14] Dr Genevieve Hayes: So for listeners who want to get in contact with you, Ben, or to learn more about Particle 41, what can they do?[00:22:21] Ben Johnson: Yeah, I’m on LinkedIn. In fact I love talking to people about data science and DevOps and software development. And so I have a book appointment link on my LinkedIn profile itself. 
So I’m really easy to get into a call with, and we can discuss whatever is on your mind. I also offer fractional CTO services.[00:22:42] And I would love to help you with a digital problem.[00:22:45] Dr Genevieve Hayes: And there you have it. Another value packed episode to help turn your data science skills into serious clout, cash, and career freedom. If you enjoyed this episode, why not make it a double? Next week, catch Ben’s value boost, a quick five minute episode where he shares one powerful tip for getting real results real fast.[00:23:10] Make sure you’re subscribed so you don’t miss it. Thanks for joining me today, Ben.[00:23:16] Ben Johnson: Thank you. It was great being here. I enjoyed it[00:23:19] Dr Genevieve Hayes: And for those in the audience, thank you for listening. I’m Dr. Genevieve Hayes, and this has been value driven data science.
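The "data QA" step Ben describes in the transcript above (sanity-checking that the data you are about to analyze still reflects the real world, for example that sales totals reconcile with the finance reports) is straightforward to automate. The sketch below is a hypothetical illustration with made-up file names, column names, and a made-up reference figure, not code from the episode.

```python
# Hypothetical data QA sanity checks before any analysis: totals reconcile with an
# external reference, keys are present and unique, and dates fall in the expected window.
# File names, column names, and the reference figure are illustrative assumptions.
import pandas as pd

sales = pd.read_csv("sales_export.csv", parse_dates=["order_date"])

REFERENCE_TOTAL = 1_254_300.00   # e.g. the figure finance reported for the same period
TOLERANCE = 0.005                # allow 0.5% drift for timing or rounding differences

checks = {
    "total_reconciles": abs(sales["amount"].sum() - REFERENCE_TOTAL) <= REFERENCE_TOTAL * TOLERANCE,
    "no_missing_ids": sales["order_id"].notna().all(),
    "no_duplicate_ids": not sales["order_id"].duplicated().any(),
    "dates_in_window": sales["order_date"].between("2024-01-01", "2024-12-31").all(),
}

for name, passed in checks.items():
    print(f"{name}: {'OK' if passed else 'FAILED'}")

# Only proceed to summarization or modelling if every check passes.
if not all(checks.values()):
    raise SystemExit("Data QA failed: fix the pipeline before analyzing.")
```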

The Digital Restaurant
Just Eat, Just Sold - The Global Marketplace Consolidation Continues

The Digital Restaurant

Play Episode Listen Later Mar 3, 2025 25:38 Transcription Available


Welcome to this week's episode of The Digital Restaurant Podcast! Carl is joined by special guest Olga Lopategui, a leading expert in restaurant loyalty and digital engagement, to break down the latest innovations shaping the restaurant industry.

⏱ [01:07] – Just Salad Gets $200M Investment
What makes Just Salad's tech stack unique?
How will this funding impact digital transformation in restaurants?
Will they use the investment for expansion, technology, or both?

⏱ [04:42] – Just Eat Takeaway's Acquisition by Prosus
What does this mean for global food delivery consolidation?
How does it compare to previous acquisitions like Grubhub?
Could DoorDash face challenges in global expansion?

⏱ [09:09] – The Digital Guest Experience & UX Innovation
How should restaurant UI/UX evolve for better ordering?
Why should apps customize the interface for individual users?
What lessons can restaurants learn from brands like Taco Bell's Veggie Mode?

⏱ [14:19] – What Are Digital Twins & Why Do They Matter?
How do digital twins work in the restaurant industry?
Can virtual simulations replace traditional prototype testing?
How might predictive maintenance reduce restaurant downtime?
Additional paper here

⏱ [19:53] – Dutch Bros, Panda Express & Domino's on Mobile Ordering
How did Dutch Bros perfect their mobile app launch?
What improvements are Panda and Domino's making?
Why is mobile ordering crucial for customer loyalty and convenience?

Support the show

Uncomplicated Marketing
The Science & Strategy Behind Smart Marketing

Uncomplicated Marketing

Play Episode Listen Later Feb 26, 2025 57:19


Rebecca Shaddix, a seasoned marketer and go-to-market strategist, joins the show to share her expertise in driving revenue growth across industries like education, healthcare, and SaaS. As a Forbes contributor and thought leader, Rebecca brings a unique blend of data-driven marketing, empathetic leadership, and a keen understanding of the evolving marketing landscape.

Key Topics Discussed:

1. The Role of Empathetic Leadership in Marketing
How balancing high expectations with trust fosters a strong marketing team.
The power of letting small tests and failures lead to big wins.
Why leaders must step back and allow data, not just personal preferences, to drive marketing decisions.

2. Data-Driven Decision Making & Testing Strategies
Why marketers must test and iterate rather than rely on assumptions.
The importance of product-market fit and refining strategy over time.
A case study on how Rebecca's team unseated an industry competitor by leveraging differentiation and UI/UX improvements.

3. The Impact of AI & Technology on Marketing
How AI is changing the way businesses approach marketing.
The importance of training AI tools for better results instead of relying on generic prompts.
Why founders and marketers must integrate AI thoughtfully to maintain authenticity.

4. Scaling Marketing Efforts Without Losing Focus
The risk of growing too fast and diluting brand messaging.
How to avoid the mistake of jumping into too many marketing channels at once.
Why companies should focus on a few, well-executed strategies before expanding.

5. Acquisition vs. Retention: The Key to Sustainable Growth
Why retaining customers is four times more impactful than acquiring new ones.
The dangers of short-term promotions and discount cycles.
Strategies for building long-term customer loyalty and brand trust.

6. What Founders Need to Know Before Hiring a Marketer
Why most startups don't need a CMO as their first marketing hire.
How to determine whether to hire in-house or work with an agency.
The importance of setting clear goals and defining success before bringing in external help.

Key Takeaways for Marketers & Founders:
For marketers: Differentiation in marketing takes effort—using AI, tools, and strategies effectively requires training and intentionality.
For founders: Don't chase trends blindly; focus on the right hires, the right tools, and a clear strategic direction to grow sustainably.

Connect with Rebecca Shaddix: LinkedIn
Follow us on Instagram: Uncomplicate it! (@uncomplicate__it)
Follow us on YouTube: UncomplicatedMarketing

Wassup Conversations
UI Design in Games: With "Diablo IV", "Cyberpunk 2077" UI Designer

Wassup Conversations

Play Episode Listen Later Feb 26, 2025 50:06


UI design is one of the most overlooked yet crucial aspects of game development. In this episode of Bonfire Conversations, I sit down with Fernando Forero, the legendary UI designer behind The Witcher 3, Cyberpunk 2077, and Diablo 4. We break down what makes great UI/UX, how it shapes our gaming experience, and the biggest challenges in designing immersive interfaces.

Millionaire Car Salesman Podcast
EP 10:04 How to Find the Hidden Money in Your CRM & Maximize Opportunities to Increase Sales

Millionaire Car Salesman Podcast

Play Episode Listen Later Feb 4, 2025 54:26


In this electrifying episode of the Millionaire Car Salesman Podcast, hosts LA Williams and Sean V. Bradley dive deep into the innovations shaping the automotive CRM landscape with special guests Shane Born and Melissa Sinclair from NCC and ProMax.  "You're investing in one of the most important tools to help run all aspects of your business. You want to make sure that it's set up properly from the very beginning." - Melissa Sinclair As the automotive industry faces unprecedented changes, Shane and Melissa discuss how their companies are spearheading the credit-first approach in CRM solutions, creating streamlined and transparent processes for both dealers and consumers. The episode begins with the hosts addressing missed speaking opportunities at an industry event due to unforeseen circumstances and quickly shifts focus to the groundbreaking strategies being implemented by ProMax, setting a new bar for CRM systems integrated with National Credit Center resources. "Success involves partnering with the right people... ensure your team understands the what, the why, and the how, and then be flexible to pivot if a generational storm hits." - Shane Born The conversation is packed with insights into how dealerships can leverage modern CRM systems to improve customer experience, enhance fraud prevention, and streamline sales processes using accurate real-time credit data. Shane Born highlights the unique position of their CRM solution, emphasizing its ability to provide consistent, transparent, and frictionless service to consumers while maintaining a keen adaptability to market changes. Melissa Sinclair adds to this by discussing the newly launched open API, offering seamless integration capabilities with vendor partners to unify dealership operations. This episode is a must-listen for any automotive professional keen on harnessing cutting-edge CRM technologies to propel their business forward.   Key Takeaways: ✅ Credit-First CRM Approach: NCC and ProMax are revolutionizing the CRM space with a credit-first methodology, enhancing the buying process through accurate financing options and fraud prevention features. ✅ User Interface and Experience: The newly redesigned UI/UX of ProMax's CRM offers a modern, intuitive interface that simplifies processes for salespeople and management, improving efficiency and usability. ✅ Service and Sales Integration: By integrating service customer data with CRM strategies, dealerships can unlock unprecedented sales opportunities, targeting both existing and untapped customer bases. ✅ Open API for Seamless Integration: The introduction of the open API, Dex, allows for effortless connection with various dealership software, ensuring all operations are synchronized and efficient. ✅ Robust OEM Partnerships: ProMax's CRM is approved by most OEMs, including 100% co-op with General Motors, showcasing the platform's comprehensive capabilities and industry trust.     About Shane Born Shane Born is a prominent figure in the automotive industry, holding a significant position at NCC (National Credit Center) and ProMax. With over two decades of experience, Shane has been instrumental in driving technological advancements and fostering customer relations in the automotive sector. His focus lies in integrating credit solutions with CRM systems to enhance dealership efficiencies!   About Melissa Sinclair Melissa Sinclair at NCC and ProMax, bringing extensive experience and deep knowledge about CRM solutions. 
Melissa has contributed substantially to the development of innovative customer relationship management tools and has played a key role in ensuring that these tools meet the needs of modern dealerships. She is particularly focused on enhancing user experience and fostering partnerships within the automotive industry.

Unleashing the Power of CRM in Automotive Sales

Key Takeaways

The Millionaire Car Salesman Podcast dives into how CRM technology is revolutionizing the automotive industry, focusing on NCC's ProMax and its unique offerings.

The discussion emphasizes the shift towards credit-first processes, enhancing both dealer and consumer experiences by leveraging advanced data management.

An insider perspective on NADA reveals valuable insights into networking strategies and industry trends despite unexpected challenges.

CRM Innovations in the Automotive Industry

The Millionaire Car Salesman Podcast, hosted by LA Williams and Sean V. Bradley, recently featured Shane Born and Melissa Sinclair from NCC and ProMax, who discussed groundbreaking innovations in CRM technology specifically tailored for the automotive industry. As the episode unfolds, the guests explore how these technologies redefine dealer-consumer interactions and internal processes within dealerships.

According to Shane Born, NCC is leveraging its dual expertise as both a data and software-as-a-service company: "We are both a DaaS company and a SaaS company," he notes. Their end-to-end platform ensures a streamlined, credit-first approach that not only enhances dealer efficiency but also significantly improves the customer's buying journey. This dual focus allows NCC to provide a seamless workflow tied to a single database, an advantage that sets it apart from competitors relying on multiple integrations. From quote to contract, the process is designed to eliminate friction and bolster customer satisfaction by providing accurate financing solutions quickly.

Melissa Sinclair elaborates on the customized experience ProMax delivers: "Any role in the dealership can benefit as the platform was designed specifically with them in mind." The conversations with dealers led to UI and UX improvements, focusing on easy navigation and relevant data presentation, ensuring that even complex tools are intuitive for users across all roles within the dealership.

Strategies from the NADA Conference

One of the intriguing segments of the episode involved understanding the dynamics of the National Automobile Dealers Association (NADA) conference. Despite logistical hurdles due to inclement weather, vendors and attendees alike made the most of the opportunity to connect and explore industry advancements. Shane Born emphasized that despite a "50% pre-registration count," those who made it to the event were laser-focused on solving pertinent challenges, leading to higher closing rates on new dealer group deals.

NADA's decision to open its welcome party to all attendees, not just paid registrants, was a strategic move to enhance networking despite lower turnout numbers. "It allows us to really have meaningful conversations about what's happening and where the industry is heading," Born shared, highlighting how such interactions lead to long-lasting partnerships and industry insights. This part of the podcast underscores the importance of adaptability and creativity in fostering networking opportunities, even in unforeseen circumstances.
NADA highlighted industry trends, such as AI integration and credit-first approaches, showcased by innovative solutions like NCC's ProMax.

Maximizing CRM for Dealer Success

In detailing the capabilities of NCC's Complete CRM, the podcast dives into how dealerships can maximize the tool for their success. At its core, ProMax's CRM offers a robust framework for both reactive and proactive dealership strategies—a point emphasized repeatedly throughout the discussion. "Any user can go in and build a custom list," explained Melissa Sinclair, illustrating the CRM's adaptability in creating targeted customer lists based on various criteria, such as unsold showroom traffic or service appointments. The system's Opportunities Dashboard is a game changer, automatically collating potential purchase opportunities, ensuring that even if a salesperson overlooks a prospect, the CRM does not.

Sean V. Bradley highlighted common dealer pitfalls, such as under-utilizing CRM capabilities, which often leads to unnecessary marketing expenditures. "Why are they spending $65,000 in advertising per month when they could invest $5,000 to $10,000 in CRM?" he questioned, advocating for a full-time CRM manager to ensure optimal use of the system. Reflecting on the trends discussed on the podcast, it's evident that leveraging comprehensive CRM tools like NCC's ProMax could save money and increase sales, making it essential for dealerships to embrace such technologies fully.

Reflections on the Automotive Horizon

As the episode closes, the hosts and guests reflect on how the innovations discussed are reshaping the automotive sales landscape. The evolution of CRM technology, spearheaded by companies like NCC and solutions such as ProMax, shows an industry leaning heavily into data-driven, customer-centric approaches. The recurring theme remains clear: the automotive industry is poised for growth through technology adaptation and strategic partnerships. As Shane Born perfectly encapsulates, success in this field "comes down to consistent execution of your strategic vision," underscoring the importance of aligning technology with well-defined dealership goals.

With ProMax's open API and partnership with AI leaders like Impel, NCC ensures they are not just evolving alongside industry changes, but leading the charge toward a more integrated and customer-focused sales experience. In the ever-evolving automotive market, insights from such discussions are invaluable, paving the way for dealerships to not only keep pace but thrive in an exciting era of digital transformation.

Resources:

Podium: Discover how Podium's innovative AI technology can unlock unparalleled efficiency and drive your dealership's sales to new heights. Visit www.podium.com/mcs to learn more!

NCC: Credit-Driven Retailing - NCC delivers industry-best credit-driven retailing for auto dealerships, combining a powerful credit and compliance engine and fully integrated CRM/Desking platform for maximum profitability.

Complete CRM: Complete CRM is a streamlined, all-in-one system that simplifies your dealership software and processes so you can manage every aspect of your operation with ease, from tracking and following up on leads, desking deals, managing inventory, marketing to your customers, and more.

Dealer Synergy & Bradley On Demand: The automotive industry's #1 training, tracking, testing, and certification platform and consulting & accountability firm.
The Millionaire Car Salesman Facebook Group: Join the #1 Mastermind Group in the Automotive Industry! With over 28,000 members, gain access to successful automotive mentors & managers, the best industry practices, & collaborate with automotive professionals from around the WORLD! Join The Millionaire Car Salesman Facebook Group today!

Win the Game of Googleopoly: Unlocking the secret strategy of search engines.

The Millionaire Car Salesman Podcast is Proudly Sponsored By:

Podium: Elevating Dealership Excellence with Intelligent Customer Engagement Solutions. Unlock unparalleled efficiency and drive sales with Podium's innovative AI technology, featured proudly on the Millionaire Car Salesman Podcast. Visit www.podium.com/mcs to learn more!

NCC: Powered by proprietary solutions such as Intelligent Credit Engine™ and LenderSelect™, NCC transforms the car-buying experience for dealers and their customers. From compliance and lender selection to CRM and desking, to marketing and data mining—NCC integrates them all in a single, seamless platform to deliver better customer experiences, maximum efficiency and maximum profit.

Complete CRM: As an innovative leader in the industry for the last 30 years, Complete CRM is designed to give your dealership the competitive edge in a demanding marketplace. Powered by Complete Credit™ and award-winning desking, Complete CRM™ is the industry's only credit and compliance-enabled CRM that lets dealers achieve maximum profitability on every deal. Built on modern technology, Complete CRM seamlessly integrates credit, compliance, inventory, data mining, lead generation, enterprise functionality, and customized reporting in one tool with a single login.

Dealer Synergy: The #1 Automotive Sales Training, Consulting, and Accountability Firm in the industry! With over two decades of experience in building Internet Departments and BDCs, we have developed the most effective automotive Internet Sales, BDC, and CRM solutions. Our expertise in creating phone scripts, rebuttals, CRM action plans, strategies, and templates ensures that your dealership's tools and personnel reach their full potential.

Bradley On Demand: The automotive sales industry's top Interactive Training, Tracking, Testing, and Certification Platform. Featuring LIVE Classes and over 9,000 training modules, our platform equips your dealership with everything needed to sell more cars, more often, and more profitably!

SorareData Podcast
What Should Sorare Give Us?

SorareData Podcast

Play Episode Listen Later Jan 31, 2025 82:15


There have been a number of UI/UX improvements on Sorare recently, some of which seem eerily similar to what's been available on SorareData for years. All of this raises the question: what should Sorare be providing to its users?

PodRocket - A web development podcast from LogRocket
Universal React with Mo Khazali

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Jan 23, 2025 36:09


Mo Khazali, head of mobile and tech lead at Theodo UK, talks about the novel concept of Universal React. He discusses cross-platform development, overcoming performance challenges, and its impact on empowering small development teams to compete with big tech.

Links

https://x.com/mo__javad
https://github.com/mojavad
https://www.linkedin.com/in/mohammadkhazali

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr).

Special Guest: Mo Khazali.
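As a rough illustration of the "Universal React" idea discussed in this episode (this sketch is not from the show), a single component written against React Native primitives can render on iOS, Android, and the web when a project is configured with react-native plus react-native-web. The component name and props below are hypothetical:

```tsx
// Hypothetical shared component for a universal React setup.
// Assumes react-native and react-native-web are installed and configured,
// so these primitives render natively on mobile and to the DOM on the web.
import React from "react";
import { Pressable, StyleSheet, Text, View } from "react-native";

type GreetingProps = {
  name: string;
  onPress?: () => void; // optional tap/click handler
};

export function Greeting({ name, onPress }: GreetingProps) {
  return (
    <View style={styles.container}>
      <Text style={styles.title}>Hello, {name}!</Text>
      <Pressable onPress={onPress}>
        <Text style={styles.link}>Tap or click me</Text>
      </Pressable>
    </View>
  );
}

const styles = StyleSheet.create({
  container: { padding: 16, alignItems: "center" },
  title: { fontSize: 20, fontWeight: "600" },
  link: { marginTop: 8, color: "#1e90ff" },
});
```

The point of the pattern is that the component layer stays platform-agnostic; only the entry points and build configuration differ per platform.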

Korean. American. Podcast
Episode 81: A Picture's Worth A Thousand Korean Words (Design)

Korean. American. Podcast

Play Episode Listen Later Jan 23, 2025 109:59


This week Jun and Daniel dive into design differences between Korea and America, from social media and advertising to architecture and mobile apps. The hosts explore how and why Korean designs tend to be text-heavy and informative, while American designs focus more on visuals and artistic impact. Topics include YouTube thumbnails, movie posters, advertisements, mobile app UI/UX, building design, and book covers. They discuss how cultural values like collectivism, risk aversion, and utilitarianism influence Korean design choices, while American designs often prioritize individual interpretation and bold statements.

If you're interested in learning how clickbait, movie posters, mobile apps, and architecture differ between Korea and America, tune in to hear Daniel and Jun discuss all this and more! Also in this episode, Daniel and Jun teach each other English vocabulary, discuss the various brands of different countries, and make fun of Korean apartment naming conventions.

Eye-Opening Moments Podcast
Eye-Opening Moments are stories of adversity, encounters, and perspectives. They are...
Listen on: Apple Podcasts Spotify

Support the show

As a reminder, we record one episode a week in-person from Seoul, South Korea. We hope you enjoy listening to our conversation, and we're so excited to have you following us on this journey!

Support us on Patreon: https://patreon.com/user?u=99211862

Follow us on socials:
https://www.instagram.com/koreanamericanpodcast/
https://twitter.com/korampodcast
https://www.tiktok.com/@koreanamericanpodcast

Questions/Comments/Feedback? Email us at: koreanamericanpodcast@gmail.com

Experiencing Data with Brian O'Neill
161 - Designing and Selling Enterprise AI Products [Worth Paying For]

Experiencing Data with Brian O'Neill

Play Episode Listen Later Jan 21, 2025 34:00


With GenAI and LLMs comes great potential to delight and damage customer relationships—both during the sale, and in the UI/UX. However, are B2B AI product teams actually producing real outcomes, on the business side and the UX side, such that customers find these products easy to buy, trustworthy and indispensable?

What is changing with customer problems as a result of LLM and GenAI technologies becoming more readily available to implement into B2B software? Anything?

Is your current product or feature development being driven by the fact you might be able to now solve it with AI? The "AI-first" team sounds like it's cutting edge, but is that really determining what a customer will actually buy from you?

Today I want to talk to you about the interplay of GenAI, customer trust (both user and buyer trust), and the role of UX in products using probabilistic technology.

These thoughts are based on my own perceptions as a "user" of AI "solutions" (quotes intentional!), conversations with prospects and clients at my company (Designing for Analytics), as well as the bright minds I mentor over at the MIT Sandbox innovation fund. I also wrote an article about this subject if you'd rather read an abridged version of my thoughts.

Highlights / Skip to:

AI and LLM-Powered Products Do Not Turn Customer Problems into "Now" and "Expensive" Problems (4:03)
Trust and Transparency in the Sale and the Product UX: Handling LLM Hallucinations (Confabulations) and Designing for Model Interpretability (9:44)
Selling AI Products to Customers Who Aren't Users (13:28)
How LLM Hallucinations and Model Interpretability Impact User Trust of Your Product (16:10)
Probabilistic UIs and LLMs Don't Negate the Need to Design for Outcomes (22:48)
How AI Changes (or Doesn't) Our Benchmark Use Cases and UX Outcomes (28:41)
Closing Thoughts (32:36)

Quotes from Today's Episode

"Putting AI or GenAI into a product does not change the urgency or the depth of a particular customer problem; it just changes the solution space. Technology shifts in the last ten years have enabled founders to come up with all sorts of novel ways to leverage traditional machine learning, symbolic AI, and LLMs to create new products and disrupt established products; however, it would be foolish to ignore these developments as a product leader. All this technology does is change the possible solutions you can create. It does not change your customer situation, problem, or pain, either in the depth, or severity, or frequency. In fact, it might actually cause some new problems. I feel like most teams spend a lot more time living in the solution space than they do in the problem space. Fall in love with the problem and love that problem regardless of how the solution space may continue to change." (4:51)

"Narrowly targeted, specialized AI products are going to beat solutions trying to solve problems for multiple buyers and customers. If you're building a narrow, specific product for a narrow, specific audience, one of the things you have on your side is a solution focused on a specific domain used by people who have specific domain experience. You may not need a trillion-parameter LLM to provide significant value to your customer. AI products that have a more specific focus and address a very narrow ICP I believe are more likely to succeed than those trying to serve too many use cases—especially when GenAI is being leveraged to deliver the value. I think this can be true even for platform products as well. Narrowing the audience you want to serve also narrows the scope of the product, which in turn should increase the value that you bring to that audience—in part because you probably will have fewer trust, usability, and utility problems resulting from trying to leverage a model for a wide range of use cases." (17:18)

"Probabilistic UIs and LLMs are going to create big problems for product teams, particularly if they lack a set of guiding benchmark use cases. I talk a lot about benchmark use cases as a core design principle in data-rich enterprise products. Why? Because a lot of B2B and enterprise products fall into the game of 'adding more stuff over time.' 'Add it so you can sell it.' As products and software companies begin to mature, you start having product owners and PMs attached to specific technologies or parts of a product. Figuring out how to improve the customer's experience over time against the most critical problems and needs they have is a harder game to play than simply adding more stuff—especially if you have no benchmark use cases to hold you accountable. It's hard to make the product indispensable if it's trying to do 100 things for 100 people." (22:48)

"Product is a hard game, and design and UX is by far not the only aspect of product that we need to get right. A lot of designers don't understand this, and they think if they just nail design and UX, then everything else solves itself. The reason the design and experience part is hard is that it's tied to behavior change—especially if you are 'disrupting' an industry, incumbent tool, application, or product. You are in the behavior-change game, and it's really hard to get it right. But when you get it right, it can be really amazing and transformative." (28:01)

"If your AI product is trying to do a wide variety of things for a wide variety of personas, it's going to be harder to determine appropriate benchmarks and UX outcomes to measure and design against. Given LLM hallucinations, the increased problem of trust, model drift problems, etc., your AI product has to actually innovate in a way that is both meaningful and observable to the customer. It doesn't matter what your AI is trying to 'fix.' If they can't see what the benefit is to them personally, it doesn't really matter if technically you've done something in a new and novel way. They're just not going to care because that question of what's in it for me is always sitting behind, in their brain, whether it's stated out loud or not." (29:32)

Links

Designing for Analytics mailing list

Soft Skills Engineering
Episode 444: Surrounded by apathetic coworkers and put it on my resume?

Soft Skills Engineering

Play Episode Listen Later Jan 20, 2025 31:10


In this episode, Dave and Jamison answer these questions:

After a decade as a Senior front-end engineer in companies stuck in legacy ways of working—paying lip service to true agility while clinging to control-heavy, waterfall practices—I'm frustrated and exhausted by meetings and largely apathetic, outsourced teams who don't match my enthusiasm for product-thinking or improving things. It seems allowed and normalised everywhere I go. How can I escape this cycle of big tech, unfulfilled as an engineer, and find a team with a strong product engineering culture where I can do high-impact work with similarly empowered teams? Thank you, and sorry if this is a bit verbose! Thanks guys. Martin

How do you judge your competency in a technical skill and when should you include it on your resume? Should you include skills that you haven't used in a while, skills you've only used in personal projects, or skills that you feel you only have a basic understanding of? I'm a frontend developer and I've seen some job descriptions include requirements (not nice-to-haves) like backend experience, Java, CI/CD, and UI/UX design using tools like Figma and Photoshop. I could make designs or write the backend code for a basic CRUD app, but it would take me some time, especially if I'm building things from scratch. I've seen some resumes where the writer lists a bunch of programming languages and technical skills, and I often wonder if they truly are competent in all of those skills.

Lets Have This Conversation
Humanize AI Texts and Bypass Content Detectors with: Undetectable AI

Lets Have This Conversation

Play Episode Listen Later Jan 20, 2025 44:43


When examining actual usage trends, it is evident that 46% of teachers and 48% of students report using ChatGPT or other forms of AI at least once a week. Remarkably, student usage has surged by 27 percentage points compared to the previous year. Feedback from students has been largely positive, with 70% of K-12 students viewing AI chatbots favorably. This approval rate increases to 75% among undergraduates, while 68% of parents also express favorable views towards AI chatbots, as reported by CNBC.

Cristian Perry is the CEO of Undetectable AI, a software startup he founded in 2023, which has now gained over 14 million users worldwide. Under his leadership, the company has quickly ascended to rank among the top 10 generative AI writing tools globally, managing a team of 50 employees and maintaining profitability. Prior to launching Undetectable AI, Perry co-founded ChatterQuant in 2021, where he successfully raised institutional capital and grew revenue to nearly $1 million before the company was acquired by Money.net in 2023. At ChatterQuant, he led teams in sales, development, and UI/UX design while securing key partnerships and receiving industry recognition, including the TradeTech Europe Innovation Award and the Hackfort Pitch Competition.

Perry's professional experience includes a role as an Analyst at Janes Capital Partners in 2015, where he developed strong analytical skills and business fundamentals. His entrepreneurial journey began at the age of 13, inspired by his family of self-made business owners. At Undetectable AI, Perry oversees the strategic planning and development of three sub-brands, focusing on effective project execution and customer support. His growth strategy for both B2C and B2B markets integrates inbound sales tactics with targeted advertising campaigns and direct outreach. Cristian graduated from Boise State University in 2023 with a degree in Business Management.

LinkedIn: @ChristianPerry
Get in touch: Contact@Undetectable.AI

Syntax - Tasty Web Development Treats
851: The Future of VS Code and Copilot

Syntax - Tasty Web Development Treats

Play Episode Listen Later Nov 22, 2024 42:12


Wes and Scott talk with Cassidy Williams and Harald Kirschner about exciting new features in VS Code and GitHub Copilot, including custom instructions, UI/UX improvements, and the future of AI and Copilot within different editors.

Show Notes

00:00 Welcome to Syntax!
00:32 Cassidy's keynote at GitHub Universe
03:23 New Copilot features
04:55 Use cases for prompt engineering
09:20 UI and UX enhancements
19:18 Copilot Extensions
20:38 Brought to you by Sentry.io
21:26 Multi-line suggestions?
27:00 How do you develop new ideas in this space? (GitHub Next)
35:42 Copilot in Xcode (GitHub Copilot code completion in Xcode is now available in public preview)
39:16 VS Code experimental features (@code)

Hit us up on Socials!

Syntax: X Instagram Tiktok LinkedIn Threads
Wes: X Instagram Tiktok LinkedIn Threads
Scott: X Instagram Tiktok LinkedIn Threads
Randy: X Instagram YouTube Threads