Christopher Savoie, the founder and CEO of Zapata Computing, has had a fascinating career journey. After beginning as a young programmer working with early computers, he switched gears to immunology and biophysics in Japan and is now founding AI companies. Along the way, he was also involved in creating the foundational technology for Apple's Siri, working on early language models embedded in agents to solve complex natural language problems. In this interview with our host, Daniel Bogdanoff, Savoie highlights the evolution of AI into specialized systems: like an orchestra, small, task-specific models working in ensembles are more effective than large, monolithic ones. He also shares how AI is transforming the automotive, motorsports, and grid-management industries. Savoie recounts his experiences at Nissan with predictive battery analytics and at Andretti Autosport, where AI-driven simulations optimize race strategies. Savoie warns about the potential misuse of AI and big data, advocating for ethical considerations, especially around privacy and government control. Despite these challenges, he remains optimistic about AI's potential, expressing a desire for tools to handle complex personal organization tasks, such as multi-modal time and travel management.
From his accidental start in interactive TV to founding ListReports and now launching Ethica, Ajay Shah shares the strategic pivots, industry insights, and technical breakthroughs that powered his AI-video platform. Explore how animation, data-driven infographics, and voice-generated commentary combine to create listing videos that win more listings, streamline workflows, and redefine client expectations.

What you'll learn from this episode:
- The transformative power of AI in professional property marketing for agents
- Why title professionals should become "AI Sherpas" for their clients
- How generative AI tools like Ethica are transforming listing videos
- Ways to future-proof your real estate business amid rapid tech disruption
- Practical advice for adopting AI without getting overwhelmed

Resources mentioned in this episode:
- Highway

About Ajay Shah
Ajay combines his extensive experience in developing technology solutions for consumers and businesses with a proven track record of leading international teams. His mission is to transform the way the residential real estate industry operates. Through his leadership, his team is building an industry-impacting business designed to generate billions of dollars in value for shareholders while creating significant opportunities for ListReports' employees, partners, and customers. With a diverse background spanning successful ventures in technology, SaaS, and entertainment startups, Ajay leverages nearly two decades of experience across multiple industries to solve complex problems and unlock new business opportunities. His ultimate goal is to create businesses that drive positive, meaningful change in the world.

Connect with Ajay
Website: Ethica AI

Connect With Us
Love what you're hearing? Don't miss an episode! Follow us on our social media channels and stay connected.
Explore more on our website: www.alltechnational.com/podcast
Stay updated with our newsletter: www.mochoumil.com
Follow Mo on LinkedIn: Mo Choumil

Stop waiting on underwriter emails or callbacks—TitleGPT.ai gives you instant, reliable answers to your title questions. Whether it's underwriting, compliance, or tricky closings, the information you need is just a click away. No more delays—work smarter, close faster. Try it now at www.TitleGPT.ai.

Closing more deals starts with more appointments. At Alltech National Title, our inside sales team works behind the scenes to fill your pipeline, so you can focus on building relationships and closing business. No more cold calling—just real opportunities. Get started at AlltechNationalTitle.com.

Extra hands without extra overhead—that's Safi Virtual. Our trained virtual assistants specialize in the title industry, handling admin work, client communication, and data entry so you can stay focused on closing deals. Scale smarter and work faster at SafiVirtual.com.
New Relic's Head of AI and ML Innovation, Camden Swita, discusses the company's four-cornered AI strategy and envisions a future of "agentic orchestration" with specialized agents.

Topics Include:
- Introduction of Camden Swita, Head of AI at New Relic
- New Relic invented the observability space for monitoring applications
- Started with Java workload monitoring and APM
- Evolved into full-stack observability with infrastructure and browser monitoring
- Uses an advanced query language (NRQL) with a time-series database
- AI strategy focuses on AIOps for automation
- First cornerstone: intelligent detection capabilities with machine learning
- Second cornerstone: incident response with generative AI assistance
- Third cornerstone: problem management with root cause analysis
- Fourth cornerstone: knowledge management to improve future detection
- Initially overwhelmed by an "ocean of possibilities" with LLMs
- Needed narrow scope and guardrails for measurable progress
- Natural language to NRQL translation proved immensely complex
- Selecting from thousands of possible events caused accuracy issues
- Shifted from a "one tool" approach to many specialized tools
- Created a routing layer to select the right tool for each job
- Evaluating NRQL is challenging even when it is syntactically correct
- Implemented multi-stage validation with a user confirmation step
- AWS partnership involves fine-tuning models for NRQL translation
- Using Bedrock to select appropriate models for different tasks
- Initially advised prototyping on the biggest, best available models
- Now recommends considering specialized, targeted models from the start
- Agent development platforms have improved significantly since the beginning
- Future focus: "agentic orchestration" with specialized agents
- Envisions agents communicating through APIs without human prompts
- Integration with AWS tools like Amazon Q
- Industry possibly plateauing in large language model improvements
- Increasing focus on inference-time compute in newer models
- Context and quality prompts remain crucial despite model advances
- Potential pros and cons to the inference-time compute approach

Participants:
- Camden Swita – Head of AI & ML Innovation, Product Management, New Relic

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
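The shift Camden describes, from one monolithic natural-language-to-NRQL tool to a routing layer over many narrow tools plus multi-stage validation, might look roughly like the following minimal Python sketch. The keyword routing, tool names, and canned NRQL strings here are invented for illustration, not New Relic's actual implementation:

```python
# Hypothetical routing layer: instead of one monolithic NL-to-NRQL tool,
# route each request to a narrow, specialized tool, then validate the
# result before it ever reaches the user.

def route(question: str) -> str:
    """Pick a specialized tool based on simple intent keywords (illustrative only)."""
    q = question.lower()
    if "error" in q or "exception" in q:
        return "error_queries"
    if "latency" in q or "duration" in q:
        return "latency_queries"
    return "generic_queries"

# Each "tool" would be a narrowly scoped model or prompt in a real system;
# here they return canned NRQL strings so the sketch stays self-contained.
TOOLS = {
    "error_queries": lambda q: "SELECT count(*) FROM TransactionError SINCE 1 hour ago",
    "latency_queries": lambda q: "SELECT average(duration) FROM Transaction SINCE 1 hour ago",
    "generic_queries": lambda q: "SELECT count(*) FROM Transaction SINCE 1 hour ago",
}

def validate(nrql: str) -> bool:
    """Stage-one structural check; a real pipeline would also dry-run the
    query and ask the user to confirm before executing it."""
    return nrql.upper().startswith("SELECT") and " FROM " in nrql

def translate(question: str) -> str:
    """Route, generate, validate: the multi-stage flow described above."""
    nrql = TOOLS[route(question)](question)
    if not validate(nrql):
        raise ValueError("generated NRQL failed validation")
    return nrql
```

The point of the structure is that each narrow tool can be evaluated and improved in isolation, which is much easier than measuring one do-everything translator.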
In this thought-provoking episode of Play the King, Win the Day, host Brad Banyas sits down with Noah Kenney, AI research and development expert, ethical AI strategist, entrepreneur, and founder of Disruptive AI Lab. Together, they explore the transformative impact of artificial intelligence (AI) across major industries, including healthcare, finance, and transportation. Noah Kenney shares expert insights on the future of autonomous vehicles, the role of AI in medical diagnostics, and the growing challenge of misinformation and algorithmic bias. The discussion dives into the ethical complexities of AI-generated content, intellectual property in the age of generative tools, and the global push for standardized, human-centered AI governance. This episode also introduces the Global Artificial Intelligence Framework (GAIF), a pioneering set of guidelines designed to ensure AI is developed and deployed responsibly. Noah Kenney offers fresh insights into AI that will broaden your perspective, whether you're in technology, business, policy, or simply curious. Play the King, Win the Day! Wisdom to power your success.
Jon Yoo, CEO of Suger, shares how his company automates the complex and challenging workflows of selling software through cloud marketplaces like AWS.

Topics Include:
- Jon Yoo is co-founder and CEO of Suger
- Suger automates B2B marketplace workflows
- Handles listing, contracts, offers, and billing for marketplaces like AWS
- Co-founder previously led Confluent's marketplace enablement product
- Confluent had 40-50% of revenue through cloud marketplaces
- Required 10-20 engineers working solely on marketplace integration
- Engineers prefer core product work over marketplace integration
- Product and engineering leaders struggle with marketplace deployment requirements
- Marketplace customers adopt without marketing, creating unexpected management needs
- Version control is challenging for marketplace-deployed products
- License management through marketplaces creates engineering challenges
- Suger helps sell, resell, and co-sell through AWS Marketplace
- Marketplace integration isn't one-time; it requires ongoing maintenance
- Business users constantly request marketplace automation features
- Suger works with Snowflake, Intel, and AI startups
- Data security concerns drive self-hosted AI deployments
- AI products increasingly deploy via AMI/container solutions
- AI products use usage-based pricing, not seat-based
- Usage-based pricing creates complex billing challenges
- AI products are tested at unprecedented rates
- Two deployment options: vendor cloud or customer cloud
- SaaS requires reporting usage to marketplace APIs
- Customer-hosted deployment simplifies some billing aspects
- Marketplaces need integration with ERP systems
- Version control is particularly challenging for AI products
- Companies need automated updates for marketplace-deployed products
- License management includes scaling up/down and expiration handling
- Suger aims to integrate with GitHub for automatic updates

Participants:
- Jon Yoo – CEO and Co-founder, Suger

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
RJJ Software's Software Development Service

This episode of The Modern .NET Show is supported, in part, by RJJ Software's Software Development Services. Whether your company is looking to elevate its UK operations or reshape its US strategy, we can provide tailored solutions that exceed expectations.

Show Notes

"So on my side it was actually, the interesting experience was that I kind of used it one way, because it was mainly about reading the Python code, the JavaScript code, and, let's say like, the Go implementations, trying to understand what are the concepts, what are the ways about how it has been implemented by the different teams. And then, you know, switching mentally into the other direction of writing the code in C#."— Jochen Kirstaetter

Welcome friends to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. We are the go-to podcast for .NET developers worldwide, and I am your host: Jamie "GaProgMan" Taylor.

In this episode, Jochen Kirstaetter joined us to talk about his .NET SDK for interacting with Google's Gemini suite of LLMs. Jochen tells us that he started his journey by looking at the existing .NET SDK, which didn't seem right to him, and wrote his own using the HttpClient and HttpClientFactory classes and REST.

"I provide a test project with a lot of tests. And when you look at the simplest one, is that you get your instance of the GenerativeAI type, which you pass in either your API key, if you want to use it against Google AI, or you pass in your project ID and location if you want to use it against Vertex AI. Then you specify which model that you like to use, and you specify the prompt, and the method that you call is then GenerateContent and you get the response back. So effectively, with four lines of code you have a full integration of Gemini into your .NET application."— Jochen Kirstaetter

Along the way, we discuss the fact that Jochen had to look into the Python, JavaScript, and even Go SDKs to get a better understanding of how his .NET SDK should work. We discuss the "Pythonistic .NET" and ".NETy Python" code that developers can accidentally end up writing if they're not careful when moving from .NET to Python and back. And we also talk about Jochen's use of tests as documentation for his SDK.

Anyway, without further ado, let's sit back, open up a terminal, type in `dotnet new podcast` and we'll dive into the core of Modern .NET.

Supporting the Show

If you find this episode useful in any way, please consider supporting the show by either leaving a review (check our review page for ways to do that), sharing the episode with a friend or colleague, buying the host a coffee, or considering becoming a Patron of the show.

Full Show Notes

The full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at: https://dotnetcore.show/season-7/google-gemini-in-net-the-ultimate-guide-with-jochen-kirstaetter/

Jochen's Links:
- JoKi's MVP Profile
- JoKi's Google Developer Expert Profile
- JoKi's website

Other Links:
- Generative AI for .NET Developers with Amit Bahree
- curl
- Noda Time with Jon Skeet
- Google Cloud samples repo on GitHub
- Google's Gemini SDK for Python
- Google's Gemini SDK for JavaScript
- Google's Gemini SDK for Go
- Vertex AI
- JoKi's base NuGet package: Mscc.GenerativeAI
- JoKi's NuGet package: Mscc.GenerativeAI.Google
- System.Text.Json
- gcloud CLI
- .NET Preprocessor directives
- .NET Target Framework Monikers
- QUIC protocol
- IAsyncEnumerable
- Microsoft.Extensions.AI

Supporting the show:
- Leave a rating or review
- Buy the show a coffee
- Become a patron

Getting in Touch:
- Via the contact page
- Joining the Discord

Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts; this will help the show's audience grow. Or you can just share the show with a friend. And don't forget to reach out via our Contact page. We're very interested in your opinion of the show, so please get in touch. You can support the show by making a monthly donation on the show's Patreon page at: https://www.patreon.com/TheDotNetCorePodcast.

Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show
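For comparison with the "four lines of code" flow Jochen describes (API key, model, prompt, GenerateContent), here is a rough Python sketch of the same plain-REST idea his SDK is built on, targeting Google's public Generative Language REST endpoint. The model name and API key are placeholders, and only the request is constructed here; no network call is made:

```python
import json

# Sketch of the REST flow behind the SDK: API key + model + prompt in,
# a generateContent request out. The endpoint shape follows Google's
# public Generative Language REST API; "gemini-pro" is a placeholder.
BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_generate_content_request(api_key: str, model: str, prompt: str):
    """Return the URL and JSON body for a generateContent call."""
    url = f"{BASE}/models/{model}:generateContent?key={api_key}"
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, json.dumps(body)

# An HTTP client (HttpClient in .NET, requests in Python) would POST
# this body to the URL and read the generated text from the response.
url, body = build_generate_content_request("MY_API_KEY", "gemini-pro", "Say hello")
```

Seen this way, the SDK's job is mostly ergonomics: hiding the URL construction, serialization, and response parsing behind one typed call.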
In this episode of the Wise Decision Maker Show, Dr. Gleb Tsipursky speaks to Andrew Min, SVP, Strategy & Digital Initiatives at RXR, about how Gen AI helps their staff delight clients. You can learn about RXR at https://rxr.com/
How can AI make streets safer and smarter? Through predictive analytics and human-centered design that combine technology with collaboration.
SentinelOne's Ric Smith shares how Purple AI, built on Amazon Bedrock, helps security teams handle increasing threat volumes while facing budget constraints and talent shortages.

Topics Include:
- Introduction of Ric Smith, President of Product, Technology, and Operations
- SentinelOne overview: a cybersecurity company focused on endpoint and data security
- Customer range: small businesses to Fortune 10 companies
- Products protect endpoints and cloud environments and provide enterprise observability
- Ric oversees 65% of company operations
- Purple AI launched on AWS Bedrock
- Purple AI helps security teams become more efficient and productive
- Security teams face budget constraints and talent shortages
- Purple AI helps teams manage increasing alert volumes
- Top security challenge: increased malware variants through AI
- AI enables more convincing spear-phishing attempts
- Identity breaches through social engineering are increasing
- Voice deepfakes used to bypass security protocols
- Future threats: autonomous AI agents conducting orchestrated attacks
- SentinelOne helps with productivity and advanced detection capabilities
- SentinelOne primarily deployed on AWS infrastructure
- Using SageMaker and Bedrock for AI capabilities
- Best practice: find partners for AI training and deployment
- Customer insight: Purple AI made teams more confident and creative
- AI frees security teams from constant anxiety
- SentinelOne's hyper-automation handles cascading remediation tasks
- Multiple operational modes: fully automated or human-in-the-loop
- Agent-to-agent interactions expected within 24 months
- Common misconception: generative AI is infallible
- AI helps with the "blank slate problem" by providing starting frameworks
- AI content still requires human personalization and review
- AWS partnership provides cost efficiency and governance benefits

Participants:
- Ric Smith – President, Product, Technology and Operations, SentinelOne

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
Get ready to explore how generative AI is transforming development in Oracle APEX. In this episode, hosts Lois Houston and Nikita Abraham are joined by Oracle APEX experts Apoorva Srinivas and Toufiq Mohammed to break down the innovative features of APEX 24.1. Learn how developers can use APEX Assistant to build apps, generate SQL, and create data models using natural language prompts. Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome back to another episode of the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I'm joined by Lois Houston, Director of Innovation Programs. Lois: Hi everyone! In our last episode, we spoke about Oracle APEX and AI. We covered the data- and AI-centric challenges businesses are up against and explored how AI fits in with Oracle APEX. Niki, what's in store for today? Nikita: Well, Lois, today we're diving into how generative AI powers Oracle APEX. With APEX 24.1, developers can use the Create Application Wizard to tell APEX what kind of application they want to build based on available tables. Plus, APEX Assistant helps create, refine, and debug SQL code in natural language. 01:16 Lois: Right. 
Today's episode will focus on how generative AI enhances development in APEX. We'll explore its architecture, the different AI providers, and key use cases. Joining us are two senior product managers from Oracle—Apoorva Srinivas and Toufiq Mohammed. Thank you both for joining us today. We'll start with you, Apoorva. Can you tell us a bit about the generative AI service in Oracle APEX? Apoorva: It is nothing but an abstraction to the popular commercial Generative AI products, like OCI Generative AI, OpenAI, and Cohere. APEX makes use of the existing REST infrastructure to authenticate using the web credentials with Generative AI Services. Once you configure the Generative AI Service, it can be used by the App Builder, AI Assistant, and AI Dynamic Actions, like Show AI Assistant and Generate Text with AI, and also the APEX_AI PL/SQL API. You can enable or disable the Generative AI Service on the APEX instance level and on the workspace level. 02:31 Nikita: Ok. Got it. So, Apoorva, which AI providers can be configured in the APEX Gen AI service? Apoorva: First is the popular OpenAI. If you have registered and subscribed for an OpenAI API key, you can just enter the API key in your APEX workspace to configure the Generative AI service. APEX makes use of the chat completions endpoint in OpenAI. Second is the OCI Generative AI Service. Once you have configured an OCI API key on Oracle Cloud, you can make use of the chat models. The chat models are available from the Cohere family and the Meta Llama family. The third is Cohere. The configuration of Cohere is similar to OpenAI. You need to have your Cohere API key. And it provides a similar chat functionality using the chat endpoint. 03:29 Lois: What is the purpose of the APEX_AI PL/SQL public API that we now have? How is it used within the APEX ecosystem? Apoorva: It models the chat operation of the popular Generative AI REST Services. This is the same package used internally by the chat widget of the APEX Assistant. 
There are more procedures around consent management, which you can configure using this package. 03:58 Lois: Apoorva, at a high level, how does generative AI fit into the APEX environment? Apoorva: APEX makes use of the existing REST infrastructure—that is the web credentials and remote server—to configure the Generative AI Service. The inferencing is done by the backend Generative AI Service. For the Generative AI use case in APEX, such as NL2SQL and creation of an app, APEX performs the prompt enrichment. 04:29 Nikita: And what exactly is prompt enrichment? Apoorva: Let's say you provide a prompt saying "show me the average salary of employees in each department." APEX will take this prompt and enrich it by adding in more details. It elaborates on the prompt by mentioning the requirements, such as Oracle SQL syntax statement, and providing some metadata from the data dictionary of APEX. Once the prompt enrichment is complete, it is then passed on to the LLM inferencing service. Therefore, the SQL query provided by the AI Assistant is more accurate and in context. 05:15 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now to May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 05:41 Nikita: Welcome back! Let's talk use cases. Apoorva, can you share some ways developers can use generative AI with APEX? Apoorva: SQL is an integral part of building APEX apps. You use SQL everywhere. You can make use of the NL2SQL feature in the code editor by using the APEX Assistant to generate SQL queries while building the apps. The second is the prompt-based app creation. With APEX Assistant, you can now generate fully functional APEX apps by providing prompts in natural language. 
Third is the AI Assistant, which is a chat widget provided by APEX in all the code editors and for creation of apps. You can chat with the AI Assistant by providing your prompts and get responses from the Generative AI Services. 06:37 Lois: Without getting too technical, can you tell us how to create a data model using AI? Apoorva: A SQL Workshop utility called Create Data Model Using AI uses AI to help you create your own data model. The APEX Assistant generates a script to create tables, triggers, and constraints in either Oracle SQL or Quick SQL format. You can also insert sample data into these tables. But before you use this feature, you must create a generative AI service and enable the Used by App Builder setting. If you are using the Oracle SQL format, when you click on Create SQL Script, APEX generates the script and brings you to the script editor page. Whereas if you are using the Quick SQL format, when you click on Review Quick SQL, APEX generates the Quick SQL code and brings you to the Quick SQL page. 07:39 Lois: And to see a detailed demo of creating a custom data model with the APEX Assistant, visit mylearn.oracle.com and search for the "Oracle APEX: Empowering Low Code Apps with AI" course. Apoorva, what about creating an APEX app from a prompt? What's that process like? Apoorva: APEX 24.1 introduces a new feature where you can generate an application blueprint based on a prompt using natural language. The APEX Assistant leverages the APEX Dictionary Cache to identify relevant tables while suggesting the pages to be created for your application. You can iterate over the application design by providing further prompts using natural language and then generating an application based on your needs. 
Once you are satisfied, you can click on Create Application, which takes you to the Create Application Wizard in APEX, where you can further customize your application, such as the application icon and other features, and finally, go ahead to create your application. 08:53 Nikita: Again, you can watch a demo of this on MyLearn. So, check that out if you want to dive deeper. Lois: That's right, Niki. Thank you for these great insights, Apoorva! Now, let's turn to Toufiq. Toufiq, can you tell us more about the APEX Assistant feature in Oracle APEX? What is it and how does it work? Toufiq: APEX Assistant is available in Code Editors in the APEX App Builder. It leverages generative AI services as the backend to answer your questions asked in natural language. APEX Assistant makes use of the APEX dictionary cache to identify relevant tables while generating SQL queries. Using the Query Builder mode, you can generate SQL queries from natural language for Form, Report, and other region types that support SQL queries. Using the general assistance mode, you can generate PL/SQL, JavaScript, HTML, or CSS code, and seek further assistance from generative AI. For example, you can ask the APEX Assistant to optimize the code, format the code for better readability, add comments, etc. APEX Assistant also comes with two quick actions, Improve and Explain, which can help users improve and understand the selected code. 
10:52 Lois: Are there attributes you can configure to leverage even more customization? Toufiq: The first attribute is the initial prompt. The initial prompt represents a message as if it were coming from the user. This can either be a specific item value or a value derived from a JavaScript expression. The next attribute is use response. This attribute determines how the AI Assistant should return responses. The term response refers to the message content of an individual chat message. You have the option to capture this response directly into a page item, or to process it based on more complex logic using JavaScript code. The final attribute is quick actions. A quick action is a predefined phrase that, once clicked, will be sent as a user message. Quick actions defined here show up as chips in the AI chat interface, which a user can click to send the message to Generative AI service without having to manually type in the message. 12:05 Lois: Thank you, Toufiq and Apoorva, for joining us today. Like we were saying, there's a lot more you can find in the “Oracle APEX: Empowering Low Code Apps with AI” course on MyLearn. So, make sure you go check that out. Nikita: Join us next week for a discussion on how to integrate APEX with OCI AI Services. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 12:28 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
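The prompt-enrichment step Apoorva walks through earlier in this episode, where the user's question is wrapped with the required SQL dialect and metadata from the APEX data dictionary before reaching the LLM, can be sketched roughly in Python. The table and column names below are invented for illustration, and APEX's real enrichment is certainly more elaborate:

```python
# Hedged sketch of NL2SQL prompt enrichment: take the user's natural-language
# question and wrap it with the target SQL dialect plus schema metadata, so
# the LLM returns a query that is accurate and in context.

def enrich_prompt(user_prompt: str, dictionary: dict) -> str:
    """Build an enriched prompt from a question and a {table: [columns]} dict."""
    schema_lines = [
        f"Table {table}: columns {', '.join(cols)}"
        for table, cols in dictionary.items()
    ]
    return "\n".join(
        [
            "You generate Oracle SQL. Respond with a single valid statement.",
            "Schema:",
            *schema_lines,
            f"Question: {user_prompt}",
        ]
    )

# Example with made-up EMP/DEPT-style metadata standing in for the dictionary cache.
enriched = enrich_prompt(
    "show me the average salary of employees in each department",
    {"EMP": ["EMPNO", "ENAME", "SAL", "DEPTNO"], "DEPT": ["DEPTNO", "DNAME"]},
)
```

The enriched string, not the raw question, is what gets sent to the inferencing service, which is why the generated SQL can reference real tables and columns.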
Sam Gantner, Chief Product Officer of Nexthink, reveals how DEX is moving IT from reactive firefighting to proactive problem prevention and transforming enterprise productivity.

Topics Include:
- DEX stands for Digital Employee Experience
- DEX eliminates IT issues preventing employee productivity
- Shifts IT from reactive to proactive problem-solving
- Employees often serve as IT problem alerting systems
- The best IT is transparent to employees
- DEX solves device sluggishness and slow application issues
- Network problems consistently appear across organizations
- IT teams often lack visibility into employee experiences
- Many organizations waste money on unused software licenses
- The DEX Score measures the comprehensive employee IT experience
- Surveys capture subjective aspects of technology experience
- Reducing actual problems differs from reducing tickets
- Nexthink uses lightweight agents on employee devices
- Browser monitoring is essential as browsers become application platforms
- Employee engagement metrics capture real-time feedback
- Nexthink rebuilt as a cloud-native platform using AWS services
- Company deploys across 10+ global AWS regions
- 30% of engineering resources dedicated to AI development
- One customer eliminated 50% of IT tickets
- Another recovered 37,000 productivity hours worth $3M annually
- A third saved $1.3M by identifying unused licenses
- AI implementation requires dedicated employee training
- Good AI now is better than perfect AI never
- Technology adoption is the next DEX frontier
- Digital dexterity becoming critical for maximizing IT investments

Participants:
- Samuele Gantner – Chief Product Officer, Nexthink

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
Linda Ivy-Rosser, Vice President at Forrester, outlines the evolution of business applications and forward-thinking predictions about their future.

Topics Include:
- Linda Ivy-Rosser has extensive business applications experience since the 1990s
- Business applications historically seen as rigid and lethargic
- 1990s: on-premise software with limited scale and flexibility
- 2000s: SaaS emergence with Salesforce, AWS, and Azure
- 2010s: mobile-first applications focused on accessibility
- Present: AI-driven applications characterize the "AI economy"
- Purpose of applications evolved from basic to complex capabilities
- User expectations grew from friendly interfaces to intelligent systems
- Four agreements: AI-infused, composable, cloud-native, ecosystem-driven
- AI-infused: 69% consider it essential/important in vendor selection
- Composability expected to grow in importance with API architectures
- Cloud-native: 79% view it as the foundation for digital transformation
- Ecosystem-driven: 68% recognize the importance of strategic alliances
- Challenges: integration, interoperability, data accessibility, user adoption
- 43% prioritizing cross-functional workflow and data accessibility capabilities
- Tech convergence recycles as a horizontal strategy for software companies
- Data contextualization crucial for employee adoption of intelligent applications
- Explainable AI necessary to build trust in recommendations
- Case study: 83% of operators rejected AI recommendations without explanations
- Tulip example demonstrated three of the four agreements successfully
- Software giants using strategic alliances as a competitive advantage
- AWS offers comprehensive AI infrastructure, platforms, models, and services
- Salesforce created an ecosystem both within and outside its platform
- SaaS marketplaces bridge AI model providers and businesses
- Innovation requires partnerships between software vendors and ISVs
- Enterprises forming cohorts with startups to solve business challenges
- Software supply chain transparency increasingly important
- Government sector slower to adopt cloud and AI technologies
- Change resistance remains a significant challenge for adoption
- 69% prioritize improving innovation capability over the next year

Participants:
- Linda Ivy-Rosser – Vice President, Enterprise Software, IT Services and Digital Transformation Executive Portfolio, Forrester

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
New Relic's Chief Customer Officer Arnaldo (Arnie) Lopez details how their observability platform helps 70,000+ customers monitor cloud performance through AWS infrastructure while introducing AI capabilities that simplify operations.

Topics Include:
- Arnie Lopez is SVP and Chief Customer Officer at New Relic
- Oversees pre-sales, post-sales, technical support, and enablement teams
- New Relic University offers customer certifications
- Founded in 2008, pioneered application performance monitoring (APM)
- Now offers "Observability 3.0" for full-stack visibility
- Prevents interruptions during cloud migration and operations
- Serves 70,000+ customers across various industries
- 16,000 enterprise-level paying customers
- Platform consolidates multiple monitoring tools into one solution
- Helps detect issues before customers experience performance problems
- Market challenge: customers using disparate observability solutions
- Reduces TCO by eliminating multiple monitoring tools
- Targets VPs, CTOs, CIOs, and sometimes CEOs
- Decade-long partnership with AWS
- Platform built on the largest unified telemetry data cloud
- Uses AWS Graviton instances and Amazon EKS
- AWS partnership enables innovation and customer trust
- Three AI approaches: user assistance, LLM monitoring, faster insights
- New Relic AI helps write the New Relic Query Language (NRQL)
- Monitors LLMs in customer environments
- Uses AI to accelerate incident resolution
- Lesson learned: should have started AI implementation sooner
- Many customers still cautiously adopting AI technologies
- Goal: continue growth with the AWS partnership
- Offers a compute-based pricing model
- Customers only pay for what they use
- Announced one-step AWS monitoring for enterprise scale
- Amazon Q Business and New Relic AI integration
- Agent-to-agent AI eliminates data silos
- Embeds performance insights into business application workflows

Participants:
- Arnie Lopez – SVP, Chief Customer Officer, New Relic

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
Lois Houston and Nikita Abraham kick off a new season of the podcast, exploring how Oracle APEX integrates with AI to build smarter low-code applications. They are joined by Chaitanya Koratamaddi, Director of Product Management at Oracle, who explains the basics of Oracle APEX, its global adoption, and the challenges it addresses for businesses managing and integrating data. They also explore real-world use cases of AI within the Oracle APEX ecosystem.

Oracle APEX: Empowering Low Code Apps with AI: https://mylearn.oracle.com/ou/course/oracle-apex-empowering-low-code-apps-with-ai/146047/
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X: https://x.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.
-----------------------------------------------------------------
Episode Transcript:

00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Thank you for joining us as we begin a new season of the podcast, this time focused on Oracle APEX and how it integrates with AI to help you create powerful applications. This season is for everyone—from beginners and SQL developers to DBAs, data scientists, and low-code enthusiasts. So, if you're interested in using Oracle APEX to build low-code applications that have custom generative AI features, you'll want to stay tuned. 01:07 Lois: That's right, Niki. Today, we're going to discuss Oracle APEX at a high level, starting with what it is. 
Then, we'll cover a few business challenges related to data and AI innovation that organizations face, and learn how the powerful combination of APEX and AI can help overcome these challenges. 01:27 Nikita: To take us through it all, we've got Chaitanya Koratamaddi with us. Chaitanya is Director of Product Management for Oracle APEX. Hi Chaitanya! For anyone new to Oracle APEX, can you explain what it is and why it's so widely used? Chaitanya: Oracle APEX is the world's most popular enterprise low code application platform. APEX enables you to build secure and scalable enterprise-scale applications with world class features that can be deployed anywhere, cloud or on-premises. And with APEX, you can build applications 20 times faster with 100 times less code. APEX delivers the most productive way to develop and deploy mobile and web applications everywhere. 02:18 Lois: That's impressive. So, what's the adoption rate like for Oracle APEX? Chaitanya: As of today, there are 19 million plus APEX applications created globally. 5,000 plus APEX applications are created on a daily basis and there are 800,000 plus APEX developers worldwide. 60,000 plus customers in 150 countries across various industry verticals. And 75% of Fortune 500 companies use Oracle APEX. 02:56 Nikita: Wow, the numbers really speak for themselves, right? But Chaitanya, why are organizations adopting Oracle APEX at this scale? Or to put it differently, what's the core business challenge that Oracle APEX is addressing? Chaitanya: From databases to all data, you know that the world is more connected and automated than ever. To drive new business value, organizations need to explore and exploit new sources of data that are generated from this connected world. That can be sounds, feeds, sensors, videos, images, and more. Businesses need to be able to work with all types of data and also make sure that it is available to be used together. Typically, businesses need to work on all data at a massive scale. 
For example, supply chains are no longer dependent just on inventory, demand, and order management signals. A manufacturer should be able to understand data describing global weather patterns and how it impacts their supply chains. Businesses need to pull in data from as many social sources as possible to understand how customer sentiment impacts product sales and corporate brands. Our customers need a data platform that ensures all this data works together seamlessly and easily. 04:38 Lois: So, you're saying Oracle APEX is the platform that helps businesses manage and integrate data seamlessly. But data is just one part of the equation, right? Then there's AI. How are the two related? Chaitanya: Before we start talking about Oracle AI, let's first talk about what customers are looking for and where they are struggling within their AI innovation. It all starts with data. For decades, working with data has largely involved dealing with structured data, whether it is your customer records in your CRM application and orders from your ERP database. Data was organized into database and tables, and when you needed to find some insights in your data, all you need to do is just use stored procedures and SQL queries to deliver the answers. But today, the expectations are higher. You want to use AI to construct sophisticated predictions, find anomalies, make decisions, and even take actions autonomously. And the data is far more complicated. It is in an endless variety of formats scattered all over your business. You need tools to find this data, consume it, and easily make sense of it all. And now capabilities like natural language processing, computer vision, and anomaly detection are becoming very essential just like how SQL queries used to be. You need to use AI to analyze phone call transcripts, support tickets, or email complaints so you can understand what customers need and how they feel about your products, customer service, and brand. 
You may want to use a data source as noisy and unstructured as social media data to detect trends and identify issues in real time. Today, AI capabilities are very essential to accelerate innovation, assess what's happening in your business, and most importantly, exceed the expectations of your customers. So, connecting your application, data, and infrastructure allows everyone in your business to benefit from data. 07:32 Raise your game with the Oracle Cloud Applications skills challenge. Get free training on Oracle Fusion Cloud Applications, Oracle Modern Best Practice, and Oracle Cloud Success Navigator. Pass the free Oracle Fusion Cloud Foundations Associate exam to earn a Foundations Associate certification. Plus, there's a chance to win awards and prizes throughout the challenge! What are you waiting for? Join the challenge today by visiting oracle.com/education. 08:06 Nikita: Welcome back! So, let's focus on AI across the Oracle Cloud ecosystem. How does Oracle bring AI into the mix to connect applications, data, and infrastructure for businesses? Chaitanya: By embedding AI throughout the entire technology stack from the infrastructure that businesses run on through the applications for every line of business, from finance to supply chain and HR, Oracle is helping organizations pragmatically use AI to improve performance while saving time, energy, and resources. Our core cloud infrastructure includes a unique AI infrastructure layer based on our supercluster technology, leveraging the latest and greatest hardware and uniquely able to get the maximum out of the AI infrastructure technology for scenarios such as large language processing. Then there is generative AI and ML for data platforms. On top of the AI infrastructure, our database layer embeds AI in our products such as autonomous database. 
With autonomous database, you can leverage large language models to use natural language queries rather than writing SQL when interacting with the autonomous database. This enables you to achieve faster adoption in your application development. Businesses and their customers can use the Select AI natural language interface combined with Oracle Database AI Vector Search to obtain quicker, more intuitive insights into their own data. Then we have AI services. AI services are a collection of offerings, including generative AI with pre-built machine learning models that make it easier for developers to apply AI to applications and business operations. The models can be custom-trained for more accurate business results. 10:17 Nikita: And what specific AI services do we have at Oracle, Chaitanya? Chaitanya: We have Oracle Digital Assistant, Speech, Language, Vision, and Document Understanding. Then we have Oracle AI for Applications. Oracle delivers AI built for business, helping you make better decisions faster and empowering your workforce to work more effectively. By embedding classic and generative AI into its applications, Fusion Apps customers can instantly access AI outcomes wherever they are needed without leaving the software environment they use every day to power their business. 11:02 Lois: Let's talk specifically about APEX. How does APEX use the Gen AI and machine learning models in the stack to empower developers? How does it help them boost productivity? Chaitanya: Starting with APEX 24.1, you can choose your preferred large language models and leverage native generative AI capabilities of APEX for AI assistants, prompt-based application creation, and more. Using native OCI capabilities, you can leverage native platform capabilities from OCI, like AI infrastructure and object storage, etc. Oracle APEX running on autonomous infrastructure in Oracle Cloud leverages its unique native generative AI capabilities tuned specifically on your data. 
These language models are schema aware, data aware, and take into account the shape of information, enabling your applications to take advantage of large language models pre-trained on your unique data. You can give your users greater insights by leveraging native capabilities, including vector-based similarity search, content summary, and predictions. You can also incorporate powerful AI features to deliver personalized experiences and recommendations, process natural language prompts, and more by integrating directly with a suite of OCI AI services. 12:38 Nikita: Can you give us some examples of this? Chaitanya: You can leverage OCI Vision to interpret visual and text inputs, including image recognition and classification. Or you can use OCI Speech to transcribe and understand spoken language, making both image and audio content accessible and actionable. You can work with disparate data sources like JSON, spatial, graphs, vectors, and build AI capabilities around your own business data. So, low-code application development with APEX along with AI is a very powerful combination. 13:22 Nikita: What are some use cases of AI-powered Oracle APEX applications? Chaitanya: You can build APEX applications to include conversational chatbots. Your APEX applications can include image and object detection capability. Your APEX applications can include speech transcription capability. And in your applications, you can include code generation that is natural language to SQL conversion capability. Your applications can be powered by semantic search capability. Your APEX applications can include text generation capability. 14:00 Lois: So, there's really a lot we can do! Thank you, Chaitanya, for joining us today. With that, we're wrapping up this episode. We covered Oracle APEX, the key challenges businesses face when it comes to AI innovation, and how APEX and AI work together to give businesses an AI edge. 
Nikita: Yeah, and if you want to know more about Oracle APEX, visit mylearn.oracle.com and search for the Oracle APEX: Empowering Low Code Apps with AI course. Join us next week for a discussion on AI-assisted development in Oracle APEX. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 14:39 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
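The natural-language-to-SQL step Chaitanya describes (a schema-aware model turning a question into a query) can be illustrated with a toy sketch. This is not Oracle's Select AI implementation: the sample schema, the prompt template, and the rule-based stand-in for the large language model are all invented for illustration.

```python
# Toy illustration of a schema-aware natural-language-to-SQL step.
# In Autonomous Database this is handled by Select AI and an LLM;
# here a tiny rule table stands in for the model call.

SCHEMA = {
    "orders": ["order_id", "customer_id", "order_date", "amount"],
    "customers": ["customer_id", "name", "region"],
}

def build_prompt(question: str) -> str:
    """Assemble the kind of schema-aware prompt a model would receive."""
    ddl = "\n".join(f"TABLE {t} ({', '.join(cols)})" for t, cols in SCHEMA.items())
    return f"{ddl}\n\nQuestion: {question}\nSQL:"

def toy_nl_to_sql(question: str) -> str:
    """Stand-in for the model: map a known question shape to SQL."""
    q = question.lower()
    if "how many orders" in q:
        return "SELECT COUNT(*) FROM orders"
    if "total" in q and "amount" in q:
        return "SELECT SUM(amount) FROM orders"
    raise ValueError("question not understood by the toy matcher")

print(build_prompt("How many orders were placed last month?"))
print(toy_nl_to_sql("How many orders were placed last month?"))
```

The point of the schema-aware prompt is the "schema aware, data aware" behavior mentioned in the episode: the model sees your tables and columns, so the SQL it proposes refers to objects that actually exist.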
PDI Technologies' Steve Antonakakis shares how his company is implementing generative AI across their fuel and retail technology ecosystem through a practical, customer-focused approach using Amazon Bedrock.

Topics Include:
- PDI's COO/CTO discussing generative AI implementation
- Practical step-by-step approach to AI integration
- Testing in real-world settings with customer feedback
- AWS Bedrock and Nova models exceeded expectations
- Early adoption phase with huge potential
- Fuel/retail industry processes many in-person transactions
- PDI began in 1983 as ERP provider
- Grew through 33+ acquisitions
- Provides end-to-end fuel industry solutions
- Owns GasBuddy and Shell Fuel Rewards
- Processes millions of transactions daily
- Generative AI fits into their intelligence plane architecture
- AWS Bedrock integrates well with existing infrastructure
- Focus on trusted, controlled, accountable AI
- Productizing AI features harder than traditional features
- Created entrepreneurial structure alongside regular product teams
- Team designed to fail fast but stay customer-focused
- AI features can access databases without disrupting applications
- Customers want summarization across different business areas
- AI provides insights and actionable recommendations
- Conversational AI replaces traditional reporting limitations
- Working closely with customers to solve problems together
- Beyond prototyping phase, now in implementation
- AWS Nova provides excellent cost-to-value ratio
- Focus on measuring customer value over immediate profitability
- RFP use case saved half a million dollars
- Early prompts were massive, now more structured
- Setting realistic customer expectations is important
- Data security approach same as other applications
- Treating AI outputs with same data classification standards

Participants:
- Steve Antonakakis – COO & CTO, Retail & Energy, PDI Technologies

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
AWS partners Braze, Qualtrics, and Tealium share strategies for marketplace success, vertical industry expansion, and generative AI integration that have driven significant business growth.

Topics Include:
- Jason Warren introduces AWS Business Application Partnerships panel.
- Three key topics: Marketplace Strategy, Vertical Expansion, Gen-AI Integration.
- Alex Rees of Braze, Matthew Gray of Tealium, and Jason Mann of Qualtrics join discussion.
- Braze experienced triple-digit percentage growth through AWS Marketplace.
- Braze dedicating resources specifically to Marketplace procurement.
- Tealium accelerated deal velocity by listing on Marketplace.
- Tealium saw broader use case expansion with AWS co-selling.
- Qualtrics views Marketplace listing as earning a "diploma."
- Understanding AWS incentives and metrics is crucial.
- Knowing AWS "love language" helps partnership success.
- Braze saw transaction volume increase between Q1 and Q4.
- Aligning with industry verticals unlocked faster growth.
- Tealium sees bigger deals and faster close times.
- Tealium moved from transactional to strategic marketplace approach.
- Private offers work well for complex enterprise agreements.
- Qualtrics measures AWS partnership through "influence, intel, introductions."
- AWS relationships help navigate IT and procurement challenges.
- Propensity-to-buy data guides AWS engagement strategy.
- Marketplace strategy evolving with new capabilities and international expansion.
- Brazilian marketplace distribution reduces currency and tax challenges.
- Partnership evolution: sell first, then market, then co-innovate.
- Braze penetrated airline market through AWS Travel & Hospitality.
- RFP introductions show tangible partnership benefits.
- Tealium partnering with Virgin Australia and United Airlines.
- MUFG bank case study shows joint AWS-Tealium success.
- Qualtrics won awards despite not completing formal competencies.
- Focus on fewer verticals yields better results.
- Gen AI brings both opportunities and regulatory concerns.
- First-party data rights critical for AI implementation.
- AWS Bedrock integration provides security and prescriptive solutions.

Participants:
- Alex Rees – Director Tech Partnerships, Braze
- Jason Mann – Global AWS Alliance Lead, Qualtrics
- Matthew Gray – SVP, Partnerships & Alliances, Tealium
- Jason Warren – Head of Business Applications ISV Partnerships (Americas), AWS

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
Eric Simons discusses the meteoric rise of Bolt.new, an AI-powered web app builder that went from zero to $40 million ARR in just five months. He shares insights on how they built an AI agent capable of creating full-stack web applications from simple prompts, the challenges of rapid growth, and the future of AI in software development. From nearly shutting down the company to becoming one of the fastest-growing AI products in history, Eric offers valuable lessons for anyone building in the AI space.

Chapters:
00:00 - Introduction and Bolt.new overview
06:05 - The journey from near-shutdown to rapid growth
13:28 - Challenges of explosive growth and scaling
18:50 - Technical deep dive: Building Bolt.new
26:37 - Debugging and improving AI-generated code
32:09 - Future directions and enterprise adoption
34:11 - Advice for building AI applications
37:03 - The concept of "vibe revenue" in AI startups
39:39 - Is AI over or under-hyped?

Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com
Yashodha Bhavnani, Head of AI at Box, reveals Box's vision for intelligent content management that transforms unstructured data into actionable insights.

Topics Include:
- Yashodha Bhavnani leads AI products at Box.
- Box's mission: power how the world works together.
- Box serves customers globally across various industries.
- Works with majority of Fortune 500 companies.
- AI agents will join workforce for repetitive tasks.
- Workflows like hiring will become easily automated with AI.
- Content will work for users, not vice versa.
- Customers demand better experiences with generative AI.
- Box calls this shift "intelligent content management."
- 90% of enterprise content is unstructured data.
- AI thrives on unstructured data.
- Current content systems are unproductive and unsecured.
- AI can generate insights from scattered company knowledge.
- AI extracts metadata automatically from documents like contracts.
- Automated workflows triggered by AI-extracted data.
- Box provides enterprise-grade AI connected to your content.
- AI follows same permissions as the content itself.
- Customer data never used to train AI models.
- AI helps classify sensitive data to prevent leaks.
- Box offers choice of AI models to customers.
- AI is seamlessly connected with customer content.
- Administrators control AI deployment across their organization.
- Partnership with AWS Bedrock brings frontier models to Box.
- Box supports customers using their own custom models.
- Box preparing for AI agents to join workforce.
- Introduced "AI Units" for flexible pricing.
- Basic AI included free with Business Plus tiers.
- Both horizontal and vertical multi-agent architectures planned.
- Working toward agent-to-agent communication protocols.

Participants:
- Yashodha Bhavnani – VP of Product Management, AI products, Box

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
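The episode's point that "AI follows same permissions as the content itself" can be made concrete with a minimal sketch: filter by access control list before retrieval, so the model never sees a document the requesting user could not open. The document store, ACLs, and word-overlap scoring below are invented for illustration, not Box's implementation.

```python
# Minimal sketch of permission-aware retrieval: the AI only ever sees
# documents the requesting user could already open. Documents, ACLs,
# and the naive scoring rule are all invented for illustration.

DOCS = [
    {"id": 1, "text": "Q3 revenue plan", "allowed": {"alice", "bob"}},
    {"id": 2, "text": "HR salary bands", "allowed": {"alice"}},
    {"id": 3, "text": "Public press kit", "allowed": {"alice", "bob", "carol"}},
]

def retrieve(user: str, query: str, k: int = 2):
    """Score by naive word overlap, but filter by ACL *first*."""
    visible = [d for d in DOCS if user in d["allowed"]]
    words = set(query.lower().split())
    scored = sorted(
        visible,
        key=lambda d: len(words & set(d["text"].lower().split())),
        reverse=True,
    )
    return [d["id"] for d in scored[:k]]

# bob never sees document 2, no matter how relevant his query is.
print(retrieve("bob", "salary revenue plan"))
```

The design choice worth noting is that the permission check happens before ranking, not after generation: filtering the candidate set is what prevents leakage, since anything that reaches the model's context can surface in its answer.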
Ghibli images aren't really about copyright or ethics; they're about unexamined questions of power. Who gets to make images? What gives them meaning? And what is their value when machines can produce them at scale? Full Show Notes: https://thejaymo.net/2025/03/30/2506-information-age-iconoclasm/ Experience.Computer: https://experience.computer/ Worldrunning.guide: https://worldrunning.guide/ Subscriber Zine! https://startselectreset.com/ Permanently moved is a personal podcast 301 seconds in length, written and recorded by @thejaymo Subscribe to the Podcast: https://permanentlymoved.online/
Tech leaders from RingCentral, Zoom and AWS discuss how generative AI is transforming business communications while balancing challenges and regulatory concerns in this rapidly evolving landscape.

Topics Include:
- Introduction of panel on generative AI's impact on businesses.
- How to transition AI from prototypes to production.
- Understanding value creation for customers through AI.
- Introduction of Khurram Tajji from RingCentral.
- Introduction of Brendan Ittleson from Zoom.
- How generative AI fits into Zoom's product offerings.
- Zoom's AI companion available to all paid customers.
- Zoom's federated approach to AI model selection.
- RingCentral's new AI Receptionist (AIR) launch.
- How AIR routes calls using generative AI capabilities.
- AI improving customer experience through sentiment analysis.
- The disproportionate value of real-time AI assistance.
- Economics of delivering real-time AI capabilities.
- Real-time AI compliance monitoring in banking.
- Value of preventing regulatory fines through AI.
- Voice cloning detection through AI security.
- Democratizing AI access across Zoom's platform.
- Monetizing specialized AI solutions for business value.
- Challenges in taking AI prototypes to production.
- Importance of selecting the right AI models.
- Privacy considerations when training AI models.
- Maintaining quality without using customer data for training.
- Co-innovation with customers during product development.
- Scaling challenges for AI businesses.
- Case study of AI in legal case assessment.
- Ensuring unit economics work before scaling AI applications.
- Zoom's approach to scaling AI across products.
- Importance of centralizing but federating AI capabilities.
- Breaking down data silos for effective AI context.
- Navigating evolving regulations around AI.
- EU AI Act restrictions on emotion inference.
- Balancing regulations with customer experience needs.
- Future of AI agents interacting with other agents.
- How AI enhances human connection by handling routine tasks.
- Impact of AI on company valuations and M&A activity.

Participants:
- Khurram Tajji – Group CMO & Partnerships, RingCentral
- Brendan Ittleson – Chief Ecosystem Officer, Zoom
- Sirish Chandrasekaran – VP of Analytics, AWS

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
Facing declining enrollment, WKU used AI-powered analytics to optimize its outreach, helping increase both enrollment and revenue.
CEO Joe Kim shares how Sumo Logic has implemented generative AI to democratize data analytics, leveraging AWS Bedrock's multi-agent capabilities to dramatically improve accuracy.

Topics Include:
- Introduction of Joe Kim, CEO of Sumo Logic.
- Question: Overview of Sumo Logic's products and customers?
- Sumo Logic specializes in observability and security markets.
- Company leverages industry-leading log management and analytics capabilities.
- Question: How has generative AI entered this space?
- Kim's background is in product, strategy and engineering.
- Non-experts struggle to extract value from complex telemetry data.
- Generative AI provides easier interface for interacting with data.
- Question: How do you measure success of AI initiatives?
- Focus on customer problems, not retrofitting AI everywhere.
- Launched "Mo, the co-pilot" at AWS re:Invent.
- Mo enables natural language queries of complex data.
- Mo suggests visualizations and follow-up questions during incidents.
- Question: What challenges did you face implementing AI?
- Team knew competitors would eventually implement similar capabilities.
- Single model approach topped out at 80% accuracy.
- Multi-agent approach with AWS Bedrock achieved mid-90% accuracy.
- Bedrock offered security benefits and multiple model capabilities.
- Question: How was working with the AWS team?
- Partnered with Bedrock team and tribe.ai for implementation.
- Partners helped avoid pitfalls from thousands of prior projects.
- Question: What advice for other software leaders?
- Don't implement AI just to satisfy board pressure.
- Identify problems without mentioning generative AI first.
- Innovation should come from listening to customers.
- Question: Future plans with AWS partnership?
- Moving toward automated remediation beyond just analysis.
- Question: Has Sumo Logic monetized generative AI?
- Changed pricing from data ingestion to data usage.
- New model encourages more data sharing without cost barriers.

Participants:
- Joe Kim – Chief Executive Officer, Sumo Logic

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
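The jump from 80% to mid-90% accuracy came from replacing one generalist model with coordinated specialists. A rough sketch of that routing idea, with invented agent names and a keyword router standing in for the orchestration Bedrock would provide:

```python
# Rough sketch of the multi-agent idea: instead of one generalist model,
# a router hands each query to a narrow specialist. Agent names and the
# keyword router are invented; in production an LLM-based orchestrator
# (e.g., via Amazon Bedrock) would do the dispatching.

def metrics_agent(q):  return f"[metrics] charting: {q}"
def logs_agent(q):     return f"[logs] searching logs for: {q}"
def security_agent(q): return f"[security] checking threats in: {q}"

ROUTES = {
    "cpu": metrics_agent, "latency": metrics_agent,
    "error": logs_agent, "exception": logs_agent,
    "login": security_agent, "breach": security_agent,
}

def route(query: str) -> str:
    """Send the query to the first specialist whose domain it matches."""
    for keyword, agent in ROUTES.items():
        if keyword in query.lower():
            return agent(query)
    return logs_agent(query)  # fallback specialist

print(route("Why did CPU spike at noon?"))
print(route("Show failed login attempts"))
```

The accuracy gain in this pattern comes from each specialist operating over a narrower problem space, so its prompts, tools, and validation can be tuned far more tightly than a single do-everything agent's.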
CTO Arun Kumar discusses how Socure leverages AWS and generative AI to collect billions of data points each day in order to combat sophisticated online fraud at scale.

Topics Include:
- Introduction of Arun Kumar, CTO of Socure
- What does Socure specialize in?
- KYC and anti-money laundering checks
- Mission: eliminate 100% fraud on the internet
- Fraud has increased since COVID
- Socure blocks fraud at entry point
- Works with top banks and government agencies
- CTO responsibilities include product and engineering
- Focus on increasing efficiency through technology
- Two goals: internal efficiency and combating fraud
- Countering tools like FraudGPT on dark web
- Measuring success through reduced human capital needs
- Fraud investigations reduced from hours to minutes
- Improved success rates in uncovering fraud rings
- Detecting multi-hop connections in fraud networks
- Question: Who's winning - fraudsters or AI?
- It's a constant "cat and mouse game"
- Creating a fraud "red team" similar to cybersecurity
- Partnership details with AWS
- Amazon Bedrock provides multiple LLM options
- Building world's largest identity graph with Neptune
- Real-time suspicious activity detection
- Blocking account takeovers through phone number changes
- Success story: detecting deepfake across 3,000 IDs
- Collecting hundreds of data points per identity
- Challenges: adding selfie checks and liveness detection
- Future strategy: 10x-100x performance improvements
- Creating second and third-order intelligence signals
- Internal efficiency applications of generative AI
- AI-powered sales tools and legal document review

Participants:
- Arun Kumar – Chief Technical Officer, Socure

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
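The "multi-hop connections in fraud networks" idea can be sketched in miniature: identities become graph nodes, shared attributes become edges, and a breadth-first search finds how many hops separate two identities. The graph data below is invented; Socure builds its identity graph on Amazon Neptune at vastly larger scale.

```python
# Toy version of multi-hop fraud-network detection: breadth-first search
# over a shared-attribute identity graph. Graph data is invented.

from collections import deque

# An edge means two identities share an attribute (phone, address, device).
GRAPH = {
    "id_a": ["id_b"],
    "id_b": ["id_a", "id_c"],
    "id_c": ["id_b", "id_d"],
    "id_d": ["id_c"],
    "id_e": [],
}

def hops(start: str, target: str) -> int:
    """Return the minimum number of hops between two identities, or -1."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nbr in GRAPH.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return -1

print(hops("id_a", "id_d"))  # connected through two intermediaries
print(hops("id_a", "id_e"))  # no connection at all
```

A low hop count between a new applicant and a known-bad identity is the kind of second-order signal the episode mentions: neither identity looks suspicious alone, but their proximity in the graph does.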
In this episode of High Agency, Patrick Leung from Faro Health explains how they're using AI to revolutionize clinical trial design by both generating regulatory documents and extracting insights from thousands of existing trials. Patrick emphasises the essential collaboration between clinical experts and AI engineers when building reliable systems in healthcare's high-stakes environment.

Chapters:
00:00 - Introduction
04:26 - Clinical trials before: Microsoft Word Documents
08:17 - Document generation using AI
12:26 - What makes clinical trials so expensive
16:26 - Parsing and processing clinical trial data
18:04 - Challenges with traditional evaluation metrics
21:28 - Importance of domain experts in the evaluation process
24:35 - Collaboration between domain experts and engineering
31:26 - Building a graph-based knowledge system
34:27 - Roles and skillsets required
38:06 - Lessons learned building LLM products
40:56 - Discussion on AI capabilities and limitations
46:07 - Is AI overhyped or underhyped

Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com
In this episode, Carlos Gonzalez de Villaumbrosia interviews Chiara McPhee, Chief Product Officer at Postscript, a leader in SMS marketing for e-commerce. Postscript is revolutionizing how e-commerce brands engage with customers. With over $100 million in Annual Recurring Revenue (ARR) and more than 20,000 Shopify merchants using their platform, Postscript has become a powerhouse in conversational commerce. Their SMS marketing tools have helped generate over $2 billion in e-commerce revenue for their customers annually, with open rates exceeding 90%, far outpacing traditional email marketing.

In this episode, Chiara shares her insights on:
* Leveraging generative AI to create personalized, one-to-one conversations that drive revenue.
* How AI agents are outperforming human customer support in certain areas.
* Key revenue leading indicators in SMS marketing and setting up effective attribution models.
* Overcoming challenges in scaling infrastructure to achieve $100M in Annual Recurring Revenue.
* The benefits of focusing exclusively on Shopify.
* The "Horizon Strategy" approach to building teams tailored for Horizon 1 (cash cow), Horizon 2 (growth), and Horizon 3 (moonshots), balancing short-term wins with long-term ambitious goals in product development.

Together, they explore how Postscript is leveraging cutting-edge technology to deliver personalized customer experiences, driving revenue and redefining e-commerce marketing. 
What you'll learn:
* Chiara's journey to becoming CPO at Postscript and her insights on the power of SMS marketing.
* How generative AI enables personalized, one-to-one conversations that drive revenue.
* Key strategies for SMS marketing, including compliance, personalization, and integration with other channels.
* How to structure product teams using the "Horizon Strategy" to balance short-term wins with long-term innovation.

Key Takeaways:
* Personalized Conversations: Chiara emphasizes the importance of leveraging generative AI to create personalized, one-to-one conversations that drive revenue.
* Focus on Shopify: Chiara highlights the company's strategic decision to focus exclusively on Shopify, and the impact it had on business outcomes.
* Horizon Strategy: Chiara shares the benefits of the "Horizon Strategy" approach to building product teams tailored for different stages of growth and innovation.
Oron Noah of Wiz outlines how organizations evolve their security practices to address new vulnerabilities in AI systems through improved visibility, risk assessment, and pipeline protection.

Topics Include:
- Introduction of Oron Noah, VP at Wiz.
- Wiz: largest private service security company.
- $1.9 billion raised from leading VCs.
- 45% of Fortune 100 use Wiz.
- Wiz scans 60+ Amazon native services.
- Cloud introduced visibility challenges.
- Cloud created risk prioritization issues.
- Security ownership shifted from CISOs to everyone.
- Wiz offers a unified security platform.
- Three pillars: Wiz Cloud, Code, and Defend.
- Wiz democratizes cloud security for all teams.
- Security Graph uses Amazon Neptune.
- Wiz has 150+ available integrations.
- Risk analysis connects to cloud environments.
- Wiz identifies critical attack paths.
- AI assists in security graph searches.
- AI helps with remediation scripts.
- AI introduces new security challenges.
- 70% of customers already use AI services.
- AI security requires visibility, risk assessment, pipeline protection.
- AI introduces risks like prompt injection.
- Data poisoning can manipulate AI results.
- Model vulnerabilities create attack vectors.
- AI Security Posture Management (ASPM) introduced.
- Four key questions for AI security.
- AI pipelines resemble traditional cloud infrastructure.
- Wiz researchers found real AI security vulnerabilities.
- Wiz AI ASPM provides agentless visibility.
- Supports major AI services (AWS, OpenAI, etc.).
- Built-in rules detect AI service misconfigurations.

Participants:
- Oron Noah – VP Product Extensibility & Partnerships, Wiz

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
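"Critical attack paths" over a security graph can be illustrated with a toy walk from an internet-exposed node to a node holding sensitive data, returning the full chain. The nodes, edges, and labels below are invented for illustration; a real Security Graph on Amazon Neptune models far richer relationships.

```python
# Toy attack-path finder over a security graph: depth-first walk from
# every internet-exposed node, collecting chains that reach sensitive
# data. All nodes, edges, and labels are invented for illustration.

EDGES = {
    "load_balancer": ["web_vm"],
    "web_vm": ["app_role"],
    "app_role": ["s3_bucket"],
    "s3_bucket": [],
    "admin_vm": ["s3_bucket"],
}
EXPOSED = {"load_balancer"}    # reachable from the internet
SENSITIVE = {"s3_bucket"}      # holds sensitive data

def attack_paths():
    """Collect every exposed-to-sensitive chain in the graph."""
    found = []
    def dfs(node, path):
        if node in SENSITIVE:
            found.append(path)
            return
        for nbr in EDGES.get(node, []):
            if nbr not in path:  # avoid cycles
                dfs(nbr, path + [nbr])
    for start in EXPOSED:
        dfs(start, [start])
    return found

print(attack_paths())
```

The value of returning whole chains rather than individual findings is prioritization: a misconfigured role matters far more when it sits on a path from the internet to sensitive data than when it is isolated, which is the risk-prioritization point the episode raises.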
Ruslan Kusov of SoftServe presents how their Application Modernization Framework accelerates ISV modernization, assesses legacy code, and delivers modernized applications through platform engineering principles.

Topics Include:
Introduction of Ruslan Kusov, Cloud CoE Director at SoftServe
SoftServe builds code for top ISVs
Success case: accelerated security ISV modernization by six months
Healthcare tech company assessment: 1.6 million code lines in weeks
Business need: product development acceleration for competitive advantage
Business need: intelligent operations automation
Business need: ecosystem integration and "sizeification" to cloud
Business need: secure and compliant solutions
Business need: customer-centric platforms with personalized experiences
Business need: AWS marketplace integration
Distinguishing intentional from unintentional complexity
Platform engineering concept introduction
Self-service internal platforms for standardization
Applying platform engineering across teams (GenAI, CSO, etc.)
No one-size-fits-all approach to modernization
SAMP/SEMP framework introduction
Core components: EKS, ECS, or Lambda
Modular structure with interchangeable components
Case study: ISV switching from hardware to software products
Four-week MVP instead of planned ten weeks
Six-month full modernization versus planned twelve months
Assessment phase importance for business case development
Calculating cost of doing nothing during modernization decisions
Healthcare customer case: 1.6 million code lines assessed
Benefits: platform deployment in under 20 minutes
Benefits: 5x reduced assessment time
Benefits: 30% lower infrastructure costs
Benefits: 20% increased development productivity with GenAI
Integration with Amazon Q for developer productivity
Closing Q&A on security modernization and ongoing management

Participants:
Ruslan Kusov – Cloud CoE Director, SoftServe

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
Richard Sonnenblick and Lee Rehwinkel of Planview discuss their transition to Amazon Bedrock for a multi-agent AI system while sharing valuable implementation and user experience lessons.

Topics Include:
Introduction to Planview's 18-month journey creating an AI co-pilot.
Planview builds solutions for strategic portfolio and agile planning.
5,000+ companies with millions of users leverage Planview solutions.
Co-pilot vision: AI assistant sidebar across multiple applications.
RAG used to ingest customer success center documents.
Tracking product data, screens, charts, and tables.
Incorporating industry best practices and methodologies.
Can ingest customer-specific documents to understand company terminology.
Key benefit: Making every user a power user.
Key benefit: Saving time on tedious and redundant tasks.
Key benefit: De-risking initiatives through early risk identification.
Cost challenges: GPT-4 initially cost $60 per million tokens.
Cost now only $1.20 per million tokens.
Market evolution: AI features becoming table stakes.
Performance rubrics created for different personas and applications.
Multi-agent architecture provides technical and organizational scalability.
Initial implementation used Azure and GPT-4 models.
Migration to AWS Bedrock brought model choice benefits.
Bedrock allowed optimization across cost, benchmarking, and speed dimensions.
Added AWS guardrails and knowledge base capabilities.
Lesson #1: Users hate typing; provide clickable options.
Lesson #2: Users don't like waiting; optimize for speed.
Lesson #3: Users take time to trust AI; provide auditable answers.
Question about role-based access control and permissions.
Co-pilot uses user authentication to access application data.
Question about subscription pricing for AI features.
Need to educate customers about AI's value proposition.
Question about reasoning modes and timing expectations.
Showing users the work process makes waiting more tolerable.

Participants:
Richard Sonnenblick – Chief Data Scientist, Planview
Lee Rehwinkel – Principal Data Scientist, Planview

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
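The cost drop cited in the Planview episode is striking enough to sanity-check with quick arithmetic. A minimal sketch (the per-million-token prices are the figures quoted in the notes; the monthly workload size is a hypothetical illustration, not a Planview number):

```python
# Per-million-token prices quoted in the episode notes
initial_gpt4_cost = 60.00  # $ per 1M tokens at launch
current_cost = 1.20        # $ per 1M tokens now

# The quoted prices imply roughly a 50x reduction
reduction_factor = initial_gpt4_cost / current_cost

# Hypothetical workload: 500M tokens per month through a co-pilot
monthly_millions_of_tokens = 500
monthly_savings = (initial_gpt4_cost - current_cost) * monthly_millions_of_tokens

print(f"{reduction_factor:.0f}x cheaper; ${monthly_savings:,.0f}/month saved on this workload")
```

At that scale, the same co-pilot workload goes from a five-figure monthly model bill to a few hundred dollars, which is why the speakers describe AI features shifting from cost challenge to table stakes.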
Executives from DataRobot, LaunchDarkly and ServiceNow share strategies, actions and recommendations to achieve profitable growth in today's competitive SaaS landscape.

Topics Include:
Introduction of panelists from DataRobot, LaunchDarkly & ServiceNow.
ServiceNow's journey from service management to workflow orchestration platform.
DataRobot's evolution as comprehensive AI platform before AI boom.
LaunchDarkly's focus on helping teams decouple release from deploy.
Rule of 40: balancing revenue growth and profit margin.
ServiceNow exceeding standards with Rule of 50-60 approach.
Vertical markets expansion as key strategy for sustainable growth.
AWS Marketplace enabling largest-ever deal for ServiceNow.
R&D investment effectiveness through experimentation and feature management.
Developer efficiency as driver of profitable SaaS growth.
Competition through data-driven decisions rather than guesswork.
Speed and iteration frequency determining competitive advantage in SaaS.
Balancing innovation with early customer adoption for AI products.
Product managers should adopt revenue goals and variable compensation.
Product-led growth versus sales-led motion: strategies and frictions.
Sales-led growth optimized for enterprise; PLG for practitioners.
Marketplace-led growth as complementary go-to-market strategy.
Customer acquisition cost (CAC) as primary driver of margin erosion.
Pricing and packaging philosophy: platform versus consumption models.
Value realization must precede pricing and packaging discussions.
Good-better-best pricing model used by LaunchDarkly.
Security as foundation of trust in software delivery.
LaunchDarkly's Guardian Edition for high-risk software release scenarios.
Security for regulated industries through public cloud partnerships.
GenAI security: benchmarks, tests, and governance to prevent issues.
M&A strategy: ServiceNow's 33 acquisitions for features, not revenue.
Replatforming acquisitions into core architecture for consistent experience.
Balancing technology integration with people aspects during acquisitions.
Trends in buying groups: AI budgets and tool consolidation.
Implementing revenue goals in product teams for new initiatives.

Participants:
Prajakta Damle – Head of Product / SVP of Product, DataRobot
Claire Vo – Chief Product & Technology Officer, LaunchDarkly
Anshuman Didwania – VP/GM, Hyperscalers Business Group, ServiceNow
Akshay Patel – Global SaaS Strategist, Amazon Web Services

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
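The "Rule of 40" discussed in this panel is a simple heuristic: a SaaS company's revenue growth rate plus its profit margin should total at least 40 percentage points. A minimal illustration (the 40% threshold is the standard convention; the sample figures are hypothetical):

```python
def rule_of_40(revenue_growth_pct: float, profit_margin_pct: float) -> tuple[float, bool]:
    """Return the Rule of 40 score and whether it clears the 40% bar.

    The score is simply growth rate plus profit margin, both expressed
    in percentage points; a total of 40 or more is conventionally read
    as a healthy balance of growth and profitability.
    """
    score = revenue_growth_pct + profit_margin_pct
    return score, score >= 40.0

# Hypothetical profiles: a fast grower with thin margins clears the bar,
# while modest growth plus modest margins falls short.
print(rule_of_40(35.0, 10.0))
print(rule_of_40(20.0, 5.0))
```

On the same measure, the "Rule of 50-60" attributed to ServiceNow above would simply mean combined growth-plus-margin scores in the 50 to 60 range.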
Our Head of U.S. IT Hardware Erik Woodring gives his key takeaways from Morgan Stanley's Technology, Media and Telecom (TMT) conference, including why there appears to be a long runway ahead for AI infrastructure spending, despite macro uncertainty.

----- Transcript -----

Welcome to Thoughts on the Market. I'm Erik Woodring, Morgan Stanley's Head of U.S. IT Hardware Research. Here are some reflections I recorded last week at Morgan Stanley's Technology, Media, and Telecom Conference in San Francisco. It's Monday, March 10th at 9am in New York.

This was another year of record attendance at our TMT Conference. And what is clear from speaking to investors is that the demand for new, under-discovered or under-appreciated ideas is higher than ever. In a stock-pickers' market – like the one we have now – investors are really digging into themes and single name ideas.

Big picture – uncertainty was a key theme this week. Whether it's tariffs and the changing geopolitical landscape, market volatility, or government spending, the level of relative uncertainty is elevated. That said, we are not hearing about a material change in demand for PCs, smartphones, and other technology hardware.

On the enterprise side of my coverage, we are emerging from one of the most prolonged downcycles in the last 10-plus years, and what we heard from several enterprise hardware vendors and others is an expectation that most enterprise hardware markets – PCs, Servers, and Storage – return to growth this year given pent-up refresh demand. This, despite the challenges of navigating the tariff situation, which is resulting in most companies raising prices to mitigate higher input costs.

On the consumer side of the world, the demand environment for more discretionary products like speakers, cameras, PCs and other endpoint devices looks a bit more challenged. The recent downtick in consumer sentiment is contributing to this environment given the close correlation between sentiment and discretionary spending on consumer technology goods.

Against this backdrop, the most dynamic topic of the conference remains Generative AI. What I've been hearing is a confidence that new GenAI solutions can increasingly meet the needs of market participants. They also continue to evolve rapidly and build momentum towards successful GenAI monetization. To this point, underlying infrastructure spending – on servers, storage and other data center componentry – to enable these emerging AI solutions remains robust.

To put some numbers behind this, the 10 largest cloud customers are spending upwards of $350 billion this year in capex, which is up over 30 percent year-over-year. Keep in mind that this is coming off the strongest year of growth on record in 2024. Early indications for 2026 CapEx spending still point to growth, albeit a deceleration from 2025.

And what's even more compelling is that it's still early days. My fireside chats this week highlighted that AI infrastructure spending from their largest and most sophisticated customers is only in the second inning, while AI investments from enterprises, down to small and mid-sized businesses, is only in the first inning, or maybe even earlier. So there appears to be a long runway ahead for AI infrastructure spending, despite the volatility we have seen in AI infrastructure stocks, which we see as an opportunity for investors.

I'd just highlight that amidst the elevated market uncertainty, there is a prioritization on cost efficiencies and adopting GenAI to drive these efficiencies. Company executives from some of the major players this week all discussed near-term cost efficiency initiatives, and we expect these efforts to both help protect the bottom line and drive productivity growth amidst a quickly changing market backdrop.

Thanks for listening. If you enjoy the show, please leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.
In this episode, Raza is joined by Shahriar Tajbakhsh, the co-founder of Metaview. They discuss how Metaview's AI scribe automates interview note-taking, how AI agents can surface top candidates from thousands of resumes, and why hiring managers should think of AI as a co-worker, not just a tool. Raza's recommended reading: Creating a LLM-as-a-Judge That Drives Business Results.

Chapters:
00:00 - Introduction
03:32 - How AI Co-Workers Are Transforming Recruiting
06:21 - Inside Metaview: AI Scribe and Workflow Automation
09:11 - Unlocking Hiring Insights with AI-Driven Conversations
11:30 - Balancing AI Innovation and User Adoption
14:05 - Metaview's Tech Stack and the Role of LLMs
18:29 - How Metaview Generates Superhuman Interview Notes
23:18 - The Challenges of Building Reliable AI Hiring Agents
32:40 - The Future of AI in Hiring: Automating Job Descriptions
40:26 - AI Co-Workers That Work While You Sleep
47:08 - Why Vertical AI Will Win Over General AI Agents
50:24 - The Underrated Power of Graph-Based AI

Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com
Recorded at our 2025 Technology, Media and Telecom (TMT) Conference, TMT Credit Research Analyst Lindsay Tyler joins Head of Investment Grade Debt Coverage Michelle Wang to discuss how the industry is strategically raising capital to fund growth.

----- Transcript -----

Lindsay Tyler: Welcome to Thoughts on the Market. I'm Lindsay Tyler, Morgan Stanley's Lead Investment Grade TMT Credit Research Analyst, and I'm here with Michelle Wang, Head of Investment Grade Debt Coverage in Global Capital Markets.

On this special episode, we're recording at the Morgan Stanley Technology, Media, and Telecom (TMT) Conference, and we will discuss the latest on the technology space from the fixed income perspective. It's Thursday, March 6th at 12 pm in San Francisco.

What a week it's been. Last I heard, we had over 350 companies here in attendance.

To set the stage for our discussion, technology has grown from about 2 percent of the broader investment grade market – about two decades ago – to almost 10 percent now; though that is still a relatively small percentage relative to the weightings in the equity market.

So, can you address two questions? First, why was tech historically such a small part of investment grade? And then second, what has driven the growth since?

Michelle Wang: Technology is still a relatively young industry, right? I'm in my 40s and well over 90 percent of the companies that I cover were founded well within my lifetime. And if you add to that the fact that investment grade debt is, by definition, a later stage capital raising tool. When the business of these companies reaches sufficient scale and cash generation to be rated investment grade by the rating agencies, you wind up with just a small subset of the overall investment grade universe.

The second question on what has been driving the growth? Twofold. Number one, the organic maturation of the tech industry results in an increasing number of scaled investment grade companies. And then secondly, the increasing use of debt as a cheap source of capital to fund their growth. This could be to fund R&D or CapEx or, in some cases, M&A.

Lindsay Tyler: Right, and I would just add in this context that my view for this year on technology credit is a more neutral one, and that's against a backdrop of being more cautious on the communications and media space. And part of that is just driven by the spread compression and the lack of dispersion that we see in the market. And you mentioned M&A and capital allocation; I do think that financial policy and changes there, whether it's investment, M&A, shareholder returns – that will be the main driver of credit spreads.

But let's turn back to the conference and on the – you know, I mentioned investment. Let's talk about investment. AI has dominated the conversation here at the conference the past two years, and this year is no different. Morgan Stanley's research department has four key investment themes. One of those is AI and tech diffusion. But from the fixed income angle, there is that focus on ongoing and upcoming hyperscaler AI CapEx needs.

Michelle Wang: Yep.

Lindsay Tyler: There are significant cash flows generated by many of these companies, but we just discussed that the investment grade tech space has grown relative to the index in recent history. Can you discuss the scale of the technology CapEx that we're talking about and the related implications from your perspective?

Michelle Wang: Let's actually get into some of the numbers. So in the past three years, total hyperscaler CapEx has increased from $125 billion three years ago to $220 billion today; and is expected to exceed $300 billion in 2027. The hyperscalers have all publicly stated that generative AI is key to their future growth aspirations. So, why are they spending all this money? They're investing heavily in the digital infrastructure to propel this growth.

These companies, however, as you've pointed out, are some of the most scaled, best capitalized companies in the entire world. They have a combined market cap of $9 trillion. Among them, their balance sheet cash ranges from $70 to $100 billion per company. And their annual free cash flow, so the money that they generate organically, ranges from $30 to $75 billion. So they can certainly fund some of this CapEx organically. However, the unprecedented amount of spend for GenAI raises the probability that these hyperscalers could choose to raise capital externally.

Lindsay Tyler: Got it.

Michelle Wang: Now, how this capital is raised is where it gets really interesting. The most straightforward way to raise capital for a lot of these companies is just to do an investment grade bond deal.

Lindsay Tyler: Yep.

Michelle Wang: However, there are other more customized funding solutions available for them to achieve objectives like more favorable accounting or rating agency treatment, ways for them to offload some of their CapEx to a private credit firm. Even if that means that these occur at a higher cost of capital.

Lindsay Tyler: You touched on private credit. I'd love to dig in there. These bespoke capital solutions.

Michelle Wang: Right.

Lindsay Tyler: I have seen it in the semiconductor space and telecom infrastructure, but can you please just shed some more light, right? How has this trend come to fruition? How are companies assessing the opportunity? And what are other key implications that you would flag?

Michelle Wang: Yeah, for the benefit of the audience, Lindsay, I think just to touch a little bit…

Lindsay Tyler: Some definitions,

Michelle Wang: Yes, some definitions around ...

Lindsay Tyler: Get some context.

Michelle Wang: What we're talking about.

Lindsay Tyler: Yes. So the – I think what you're referring to is investment grade companies doing asset level financing. Usually in conjunction with a private credit firm, and like all financing trends that came before it, all good financing trends, this one also resulted from the serendipitous intersection of supply and demand of capital.

On the supply of capital, the private credit pocket of capital driven by large pockets of insurance capital is now north of $2 trillion and it has increased 10x in scale in the past decade. So, the need to deploy these funds is driving these private credit firms to seek out ways to invest in investment grade companies in a yield enhanced manner.

Lindsay Tyler: Right. And typically, we're saying 150 to 200 basis points greater than what maybe an IG bond would yield.

Michelle Wang: That's exactly right. That's when it starts to get interesting for them, right? And then the demand of capital, the demand for this type of capital, that's always existed in other industries that are more asset-heavy like telcos. However, the new development of late is the demand for capital from tech due to two megatrends that we're seeing in tech. The first is semiconductors. Building these chip factories is an extremely capital-intensive exercise, so creates a demand for capital. And then the second megatrend is what we've seen with the hyperscalers and Generative AI needs. Building data centers and digital infrastructure for Generative AI is also extremely expensive, and that creates another pocket of demand for capital that private credit conveniently kinda serves a role in.

Lindsay Tyler: Right.

Michelle Wang: So look, I think we've talked about the ways that companies are using these tools. I'm interested to get your view, Lindsay, on the investor perspective.

Lindsay Tyler: Sure.

Michelle Wang: How do investors think about some of these more bespoke solutions?

Lindsay Tyler: I would say that with deals that have this touch of extra complexity, it does feel that investor communication and understanding is all important. And I have found that some of these points that you're raising – whether it's the spread pickup and the insurance capital at the asset managers, and also layering in ratings implications and the deal terms – I think all of that is important for investors to get more comfortable and have a better understanding of these types of deals.

The last topic I do want us to address is the macro environment. This has been another key theme with the conference and with this recent earnings season, so whether it's rate moves this year, the talk of M&A, tariffs – what's your sense on how companies are viewing and assessing macro in their decision making?

Michelle Wang: There are three components to how they're thinking about it. The first is the rate move. So, the fact that we're 50 to 60 basis points lower in Treasury yields in the past month, that's welcome news for any company looking to issue debt. The second thing I'll say here is about credit spreads. They remain extremely tight, speaking to the incredible kind of resilience of the investment grade investor base. The last thing I'll talk about is, I think, the uncertainty. Because that's what we're hearing a ton about in all the conversations that we've had with companies that have presented here today at the conference.

Lindsay Tyler: Yeah. From my perspective, also the regulatory environment around that M&A, whether or not companies will make the move to maybe be more acquisitive with the current new administration.

Michelle Wang: Right, so until the dust settles on some of these issues, it's really difficult as a corporate decision maker to do things like big transformative M&A, to make a company public when you don't know what could happen both from the market environment and, as you point out, regulatory standpoint.

The thing that's interesting is that raising debt capital as an investment grade company has some counter cyclical dynamics to it. Because risk-off sentiment usually translates into lower treasury yields and more favorable cost of debt. And then the second point is when companies are risk averse it drives sometimes cash hoarding behavior, right? So, companies will raise what they call, you know, rainy day liquidity and park it on balance sheet – just to feel a little bit better about where their balance sheets are. To make sure they're in good shape…

Lindsay Tyler: Yeah, deal with the maturities that they have right here in the near term.

Michelle Wang: That's exactly right. So, I think as a consequence of that, you know, we do see some tailwinds for debt issuance volumes in an uncertain environment.

Lindsay Tyler: Got it. Well, appreciate all your insights. This has been great. Thank you for taking the time, Michelle, to talk during such a busy week.

Michelle Wang: It's great speaking with you, Lindsay.

Lindsay Tyler: And thanks to everyone listening in to this special episode recorded at the Morgan Stanley TMT Conference in San Francisco. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.
Ryan Steeb shares DTEX Systems' strategic approach to implementing generative AI with AWS Bedrock, reducing risk while focusing on meaningful customer outcomes.

Topics Include:
Introduction of Ryan Steeb, Head of Product at DTEX Systems
Explanation of insider risk challenges
Three categories of insider risk (malicious, negligent, compromised)
How DTEX Systems is using generative AI
Collection of proprietary data to map human behavior on networks
Three key areas leveraging Gen AI: customer value, services acceleration, operations
How partnership with AWS has impacted DTEX's AI capabilities
Value of AWS expertise for discovering AI possibilities
AWS Bedrock providing flexibility in AI implementation
Collaboration on unique applications beyond conventional chat assistants
AWS OpenSearch as a foundational component
Creating invisible AI workflows that simplify user experiences
The path to monetization for generative AI
Three approaches: direct pricing, service efficiency, operational improvements
Second and third-order effects (retention, NPS, reduced churn)
How DTEX prioritizes Gen AI projects
Starting with customer problems vs. finding problems for AI solutions
Business impact prioritization framework
Technical capability considerations
Benefits of moving AI solutions to AWS Bedrock
Fostering a culture of experimentation and innovation
Adopting Amazon's "working backwards" philosophy
Balancing customer-driven evolution with original innovation
Time machine advice: start experimenting with Gen AI earlier
Importance of leveraging peer groups and experts
Future outlook: concerns about innovation outpacing risk mitigation
Security implications of Gen AI adoption
Participation in the OpenSearch Linux Foundation initiative
Final thoughts on the DTEX-AWS partnership

Participants:
Ryan Steeb – Head of Product, DTEX Systems

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
Our Semiconductors and Software analysts Joe Moore and Keith Weiss dive into the biggest market debate around AI and why it's likely to shape conversations at Morgan Stanley's Technology, Media and Telecom (TMT) Conference in San Francisco. ----- Transcript -----Joe Moore: Welcome to Thoughts on the Market. I'm Joe Moore, Morgan Stanley's Head of U.S. Semiconductors.Keith Weiss: And I'm Keith Weiss, Head of U.S. Software.Joe Moore: Today on the show, one of the biggest market debates in the tech sector has been around AI and the Return On Investment, or ROI. In fact, we think this will be the number one topic of conversation at Morgan Stanley's annual Technology, Media and Telecom (TMT) conference in San Francisco.And that's precisely where we're bringing you this episode from.It's Monday, March 3rd, 7am in San Francisco.So, let's get right into it. ChatGPT was released November 2022. Since then, the biggest tech players have gained more than $9 trillion in combined market capitalization. They're up more than double the amount of the S&P 500 index. And there's a lot of investor expectation for a new technology cycle centered around AI. And that's what's driving a lot of this momentum.You know, that said, there's also a significant investor concern around this topic of ROI, especially given the unprecedented level of investment that we've seen and sparse data points still on the returns.So where are we now? Is 2025 going to be a year when the ROI and GenAI finally turns positive?Keith Weiss: If we take a step back and think about the staging of how innovation cycles tend to play out, I think it's a helpful context.And it starts with research. I would say the period up until When ChatGPT was released – up until that November 2022 – was a period of where the fundamental research was being done on the transformer models; utilizing, machine learning. And what fundamental research is, is trying to figure out if these fundamental capabilities are realistic. 
If we can do this in software, if you will.And with the release of ChatGPT, it was a very strong, uh, stamp of approval of ‘Yes, like these transformer models can work.'Then you start stage two. And I think that's basically November 22 through where are today of, where you have two tracks going on. One is development. So these large language models, they can do natural language processing well.They can contextually understand unstructured and semi structured data. They can generate content. They could create text; they could create images and videos.So, there's these fundamental capabilities. But you have to develop a product to get work done. How are we going to utilize those capabilities? So, we've been working on development of product over the past two years. And at the same time, we've been scaling out the infrastructure for that product development.And now, heading into 2025, I think we're ready to go into the next stage of the innovation cycle, which will be market uptake.And that's when revenue starts to flow to the software companies that are trying to automate business processes. We definitely think that monetization starts to ramp in 2025, which should prove out a better ROI or start to prove out the ROI of all this investment that we've been making.Joe Moore: Morgan Stanley Research projects that GenAI can potentially drive a $1.1 trillion dollar revenue opportunity in 2028, up from $45 billion in 2024. Can you break this down for our listeners?Keith Weiss: We recently put out a report where we tried to size kind of what the revenue generation capability is from GenerativeAI, because that's an important part of this ROI equation. You have the return on the top of where you could actually monetize this. On the bottom, obviously, investment. And we took a look at all the investment needed to serve this type of functionality.The [$]1.1 trillion, if you will, it breaks down into two big components. 
Um, One side of the equation is in my backyard, and that's the enterprise software side of the equation. It's about a third of that number. And what we see occurring is the automation of more and more of the work being done by information workers; for people in overall.And what we see is about 25 percent, of overall labor being impacted today. And we see that growing to over 45 percent over the next three years.So, what that's going to look like from a software perspective is a[n] opportunity ramping up to about, just about $400 billion of software opportunity by 2028. At that point, GenerativeAI will represent about 22 percent of overall software spending. At that point, the overall software market we expect to be about a $1.8 trillion market.The other side of the equation, the bigger side of the equation, is actually the consumer platforms. And that kind of makes sense if you think about the broader economy, it's basically one-third B2B, two-thirds B2C. The automation is relatively equivalent on both sides of the equation.Joe Moore: So, let's drill further into your outlook for software. What are the biggest catalysts you expect to see this year, and then over the coming three years?Keith Weiss: The key catalyst for this year is proving out the efficacy of these solutions, right?Proving out that they're going to drive productivity gains and yield real hard dollar ROI for the end customer. And I think where we'll see that is from labor savings.Once that occurs, and I think it's going to be over the next 12 to 18 months, then we go into the period of mainstream adoption. You need to start utilizing these technologies to drive the efficiencies within your businesses to be able to keep up with your competitors. So, that's the main thing that we're looking for in the near term.Over the next three years, what you're looking for is the breakthrough technologies. 
Where can we find opportunities not just to create efficiencies within existing processes, but to completely rewrite the business process.That's where you see new big companies emerge within the software opportunity – is the people that really fundamentally change the equation around some of these processes.So, Joe, turning it over to you, hardware remains a bottleneck for AI innovation. Why is that the case? And what are the biggest hurdles in the semiconductor space right now?Joe Moore: Well, this has proven to be an extremely computationally intensive application, and I think it started with training – where you started seeing tens of thousands of GPUs or XPUS clustered together to train these big models, these Large Language Models. And you started hearing comments two years ago around the development of ChatGPT that, you know, the scaling laws are tricky.You might need five times as much hardware to make a model that's 10 percent smarter. But the challenge of making a model that's 10 percent smarter, the table stakes of that are very significant. And so, you see, you know, those investments continuing to scale up. And that's been a big debate for the market.But we've heard from most of the big spenders in the market that we are continuing to scale up training. And then after that happened, we started seeing inference suddenly as a big user of advanced processors, GPUs, in a way that they hadn't before. And that was sort of simple conversational types of AI.Now as you start migrating into more of a reasoning AI, a multi pass approach, you're looking at a really dramatic scaling in the amount of hardware, that's required from both GPUs and XPUs.And at the same time the hardware companies are focused a lot on how do we deliver that – so that it doesn't become prohibitively expensive; which it is very expensive. But there's a lot of improvement. 
And that's where you're sort of seeing this tug of war in the stocks: when you see something that's deflationary, it becomes a big negative. But the reality is the hardware is designed to be deflationary because the workloads themselves are inflationary. And so I think there's a lot of growth still ahead of us, a lot of investment, and a lot of rich debate in the market about this.

Keith Weiss: Let's pull on that thread a little bit. You talked initially about the scaling of the GPU clusters to support training. Over the past year, we've gotten a bit more pushback on the efficacy of those scaling laws; they've come more under question. And at the same time, we've seen the availability of some lower-cost but still very high-performance models. Is this going to reshape the investments from the large semiconductor players in terms of how they're looking to address the market?

Joe Moore: I think we have to assess that over time. Right now, there are very clear comments from everybody who's in charge of scaling large models that they intend to continue to scale. I think there is a benefit to doing so from the standpoint of creating a richer model, but is the ROI there? That's where your numbers do a very good job of justifying our model for our core companies, where we can say, okay, this is not a bubble. This is investment that's driven by the areas of economic benefit that our software and internet teams are seeing. And I think there is a bit of an arms race at the high end of the market, where people just want to have the biggest cluster. We think that's about 30 percent of the revenue right now in hardware: supporting those really big models. But we're also seeing, to your point, a very rich hardware configuration on the inference side and in post-training model customization.
Nvidia said on their earnings call recently that they see several orders of magnitude more compute required for those applications than for pre-training. So, I think over time that's where the growth is going to come from. But right now we're seeing growth really from all aspects of the market.

Keith Weiss: Got it. So, a lot of really big opportunities out there utilizing these GPUs and ASICs, but also a lot of unknowns and potential risks. What are the key catalysts that you're looking for in the semiconductor space over the course of this year, and maybe over the next three years?

Joe Moore: Well, 2025 is a year that is really mostly about supply. We're ramping up new hardware, and several companies are also doing custom silicon. We have to ramp all that hardware up, and it's very complicated. It uses every kind of trick and technique that semiconductors have, advanced packaging and things like that. So it's a very challenging supply chain, and it has been for two years. Fortunately, it's happened at a time when there's plenty of semiconductor capacity out there. But we're ramping very quickly, and I think the things that matter this year are going to be more about how quickly we can get that supply, what the gross margins on hardware are, things like that. Beyond 2025, we really have to get a sense of these ROI questions. Because again, this is not a bubble, but hardware is cyclical, and it doesn't slow gracefully. So there will be periods where investment may fall off, and it'll be a difficult time to own the stocks. We do think that over time, the value transitions from hardware to software. But we model for 2026 to be a year where it starts to slow down a little bit, where we start to see some consolidation in these investments. Now, 12 months ago, I thought that about 2025.
So, the timeframe keeps getting pushed out. It remains very robust, but I think at some point it will plateau a little bit and we'll start to see some fragmentation; we'll start to see markets like reasoning models and inference models becoming more and more critical. And when I hear you and Brian Nowak talking about how early a stage we're at in actually implementing this stuff, I think inference has a long way to go in terms of growth. So, we're optimistic around the whole AI space for semiconductors. Obviously, the market is as well, so there are expectations challenges there. But there's still a lot of growth ahead of us. So Keith, looking towards the future, as AI expands the functionality of software, how will that transform the business models of your companies?

Keith Weiss: We're also fundamentally optimistic about software and what generative AI means for the overall software industry. If we look at software companies today, particularly application companies, a lot of what you're trying to do is make information workers more productive. So it made a lot of sense to price based upon the number of people using your software; you've got a lot of seat-based models. Now we're talking about completely automating some of those processes, taking people out of the loop altogether. You have to price differently: based upon the number of transactions you're running, or some consumptive element of the amount of work that you're getting done. The other thing we're going to see is the market opportunity expanding well beyond information workers. So, the way that we count the value, the way that we accrue the value, might change a little bit. But the underlying value proposition remains the same. It's about automating, creating productivity in those business processes, and then the software companies pricing for their fair share of that productivity.

Joe Moore: Great.
Well, let me just say this has been a really useful process for me. The collaboration between our teams is really helpful, because as a semiconductor analyst, you can see the data points, you can see the hardware being built, and I know the enthusiasm that people have on a tactical level. But understanding where the returns are going to come from, and what milestones we need to watch to see any potential course correction, is very valuable. So on that note, it's time for us to get to the exciting panels at the Morgan Stanley TMT conference. We'll have more from the conference on the show later this week. Keith, thanks for taking the time to talk.

Keith Weiss: Great speaking with you, Joe.

Joe Moore: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen, and share the podcast with a friend or colleague today.
From cost management to practical implementation, Sage's Amaya Souarez shares invaluable insights on building AI-powered business tools that deliver measurable value to customers.

Topics Include:
Amaya Souarez introduced as EVP Cloud Services at Sage
Overview of Sage: offers accounting, finance, HR and payroll tech for small businesses
Company emphasizes human values alongside technology development
Amaya oversees core cloud services and operations across 200+ products
Sage Co-Pilot announced as new AI assistant, helping automate invoicing and cash flow management
Common misconceptions with generative AI
AI isn't always the solution to every problem
Compares AI hype to previous blockchain enthusiasm
Emphasizes starting with clear use cases before implementation
Difference between task-based and reporting-based use cases
Partnering with AWS to build accounting-specific language models
Accounting terminology varies by country
Using AWS Bedrock and Lex for domain-specific language model development
Multiple AI models may be needed for a single solution
Customer feedback drives project funding decisions
AI development integrated into regular product roadmaps
Focus on reducing cost per user for AI features
Success story: reducing a 20-hour task to 5 minutes
Tracks AI usage costs per customer interaction
Early Gen AI hype caused confusion in the market
Plans to make domain-specific models available via API
Will offer language models on AWS Marketplace
Emphasizes practical AI application over blind implementation

Participants:
Amaya Souarez - EVP Cloud Services and Operations, Sage

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
How is AI transforming our world and specifically the insurance industry? In this episode of Let's Get Surety®, hosts Kat Shamapande and Mark McCallum sit down with Peter Miller, President and CEO of The Institutes, to explore AI's expanding role in risk management, fraud detection, underwriting, and more. Discover how AI enhances efficiency, predicts and prevents losses, and reshapes workforce dynamics. Plus, learn about regulations shaping its future. Don't miss this insightful conversation! You might also want to check out these other tech-focused episodes and Virtual Seminars. With special guest: Peter Miller, President/CEO, The Institutes Hosted by: Kat Shamapande, Director, Professional Development, NASBP and Mark McCallum, CEO, NASBP Sponsored by Old Republic!
In this episode, Raza Habib chats with Zach Lloyd, CEO and founder of Warp, about how AI is transforming the developer experience. They explore how Warp is reimagining the command line, the power of AI-driven automation, and what the future holds for coding workflows.

Chapters:
00:00 - Introduction
04:06 - Why the terminal needed reinvention
07:11 - AI's role in Warp's evolution
08:55 - Key AI features in Warp
12:49 - Balancing safety, reliability, and usability
19:43 - Challenges in AI-powered development
22:33 - Changing developer behavior with AI
27:24 - Prompt engineering and context optimization
31:05 - Lessons for building AI products
37:50 - The future of AI in software development
46:42 - Underappreciated AI innovations

Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com
A discussion with Doug Hague, Executive Director, Corporate Engagement at UNC Charlotte. In recent years, he helped UNC Charlotte establish its School of Data Science. Prior to that, he spent many years with Bank of America ending with the role of Chief Analytics Officer of Bank of America Merchant Services. Doug discusses how the variety and growth in the financial industry kept him happy over the years. He also discusses his long-term plan and path to academia and how his management style has had to adjust. We have a back and forth on how the roles of CAO, CDO, and CDAO have evolved. He finishes with some good advice for students and early career professionals, as well as some insights into how to stay relevant in the age of generative AI. #analytics #datascience #ai #artificialintelligence #generativeAI #banking #finance
Box's Chief Product Officer Diego Dugatkin discusses how the enterprise content management platform is leveraging AI through partnerships with AWS Bedrock and continuing to innovate for their customers.

Topics Include:
Introduction of Diego Dugatkin as Box's Chief Product Officer
Box provides cloud content management for enterprise customers
Focus on Intelligent Content Management
Box serves 115,000 customers including 70% of the Fortune 500
Company manages approximately one exabyte of enterprise data
Box expanding product portfolio to offer more customer value
Partnership with AWS Bedrock for AI implementation announced
Collaboration with Anthropic for LLM technology integration
Box offers neutral approach letting customers choose preferred LLMs
Common misconceptions about generative AI capabilities and limitations
Generative AI helps accelerate contract analysis and classification processes
Box Hubs enables content curation and multi-document queries
Success measured through hub creation and query accuracy metrics
Long-term AWS partnership continues expanding with new technologies
Amazon is a major Box customer while Box uses AWS
API integration important for third-party developer implementations
AI development exceeding speed expectations in efficiency improvements
Challenges remain in defining AI agent roles and capabilities
Content strategy crucial for deploying intelligent content management
Companies must prepare for AI agents in the workplace
Flexibility in tech stack recommended over single-vendor approach
Next 12-24 months will see accelerated industry changes
Box maintains innovative culture through intrapreneurship approach
Company regularly hosts internal and external hackathons
Focus on maintaining integrated platform while acquiring companies
Partnership between Box and AWS continues growing stronger

Participants:
Diego Dugatkin – Chief Product Officer, Box

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
The Innovation and Expectations episode has knowledgeable guests addressing lithium supply legal issues, how AI is impacting our profession, expectations for attorney hiring, and new lawyer notes about how innovation may impact legal careers in the future.

(1) A New Era of Legal Issues Associated with Lithium Extraction
Stephanie Noble of Vinson & Elkins LLP discusses the increased demand for lithium, what lithium extraction is, the many types of legal issues that are arising due to this increase in demand, and why it's an exciting time to be an energy lawyer. *New Lawyer Note: John Hinojosa (Winston & Strawn)*

(2) Generative AI: Increasing Efficiency
Nelson Ebaugh of Nelson S. Ebaugh P.C. addresses common concerns about confidentiality and privileged communication, provides some guidance for addressing these concerns, advocates for the use of generative AI to increase the efficiency of and enhance your legal practice, and briefly addresses legal developments relating to AI. *New Lawyer Note: Caitlin Rogers (Winston & Strawn)*

(3) We're Hiring (Mostly): Trends in the Houston Legal Market Now and in the Near Future
Tim Reagan of ELR Legal Search discusses current and future trends in the attorney hiring market in Houston, addresses the potential impact of AI on the market, and provides sound advice for young lawyers and students who are trying to determine how they fit into the legal market and where they should direct their talents. *New Lawyer Note: Drew Pierce (Winston & Strawn)*

The first and second segments of this episode are approved for CLE for HBA members. See The Houston Lawyer page on the HBA's website for details. For full speaker bios, visit The Houston Lawyer (hba.org). To read The Houston Lawyer magazine, visit The Houston Lawyer_home. 
For more information about the Houston Bar Association, visit Houston Bar Association (hba.org).*The views expressed in this episode do not necessarily reflect the views of The Houston Lawyer Editorial Board or the Houston Bar Association.
Through case studies of Graviton implementation and GPU integration, Justin Fitzhugh, Snowflake's VP of Engineering, demonstrates how cloud-native architecture combined with strategic partnerships can drive technical innovation and build business value.

Topics Include:
Cloud engineering and AWS partnership
Traditional databases had fixed hardware ratios for compute/storage
Snowflake built cloud-native with separated storage and compute
Company has never owned physical infrastructure
Applications must be cloud-optimized to leverage elastic scaling
Snowflake uses credit system for customer billing
Credits loosely based on compute resources provided
Company maintains cloud-agnostic approach across providers
Initially aimed for identical pricing across cloud providers
Now allows price variation while maintaining consistent experience
Consumption-based revenue model ties to actual usage
Performance improvements can actually decrease revenue
Company tracked ARM's move to data centers
Initially skeptical of Graviton performance claims
Porting to ARM required complete pipeline reconstruction
Discovered floating point rounding differences between architectures
Amazon partnership crucial for library optimization
Graviton migration took two years instead of one
Achieved 25% performance gain with 20% cost reduction
Team requested thousands of GPUs within two months
GPU infrastructure was new territory for Snowflake
Needed flexible pricing for uncertain future needs
Signed three to five-year contracts with flexibility
Team pivoted from building to fine-tuning models
Partnership allowed adaptation to business changes
Emphasizes importance of leveraging provider expertise
Recommends early engagement with cloud providers
Build relationships before infrastructure needs arise
Maintain personal connections with provider executives

Participants:
Justin Fitzhugh – VP of Engineering, Snowflake

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
A discussion with Sasha Verbitsky, SVP - Data, Analytics, Digital at @Simon Property Group. Some people have gotten to work at a number of well-known companies throughout their career, and today's guest is one of them! Sasha recently joined Simon after past roles at Nike, Panera Bread, Marriott, Abercrombie & Fitch, Lands' End, and Victoria's Secret Direct. Sasha discusses how his first job luckily – and unplanned! – put him in an analytics role and why he stuck with it from that point forward. He provides a nice overview of the evolution of direct marketing from classic mail pieces to full scale digital models. He also talks through the consistencies and differences he's seen as he has worked across various companies and industries. The chat concludes with his views on why it is so important for people in data science to learn to define and solve problems early in their schooling and careers. #analytics #datascience #ai #artificialintelligence #generativeAI #retail #hospitality #directmarketing
In this AWS panel discussion, Naveen Rao, VP of AI at Databricks, and Vijay Karunamurthy, Field CTO of Scale AI, share practical insights on implementing generative AI in enterprises, leveraging private data effectively, and building reliable production systems.

Topics Include:
Sherry Marcus introduces panel discussion on generative AI adoption
Scale AI helps make AI models more reliable
Databricks focuses on customizing AI with company data
Companies often stressed about where to start with AI
Board-level pressure driving many enterprise AI initiatives
Start by defining specific goals and success metrics
Build evaluations first before implementing AI solutions
Avoid rushing into demos without proper planning
Enterprise data vastly exceeds public training data volume
Customer support histories valuable for AI training
Models learning to anticipate customer follow-up questions
Production concerns: cost, latency, and accuracy trade-offs
Good telemetry crucial for diagnosing AI application issues
Speed matters more for prose, accuracy for legal documents
Cost becomes important once systems begin scaling up
Organizations struggle with poor quality existing data
Privacy crucial when leveraging internal business data
Role-based access control essential for regulated industries
AI can help locate relevant data across legacy systems
Models need organizational awareness to find data effectively
Private data behind firewalls most valuable for AI
Customization gives competitive advantage over generic models
Current AI models primarily do flexible data recall
Next few years: focus on deriving business value
Future developments in causal inference expected beyond 5 years
Complex multi-agent systems becoming more important
Scale AI developing "humanity's last exam" evaluation metric
Discussion of responsibility and liability in AI decisions
Companies must stand behind their AI system outputs
Existing compliance frameworks can be adapted for AI

Participants:
Naveen Rao – VP of AI, Databricks
Vijay Karunamurthy – Field CTO, Scale AI
Sherry Marcus Ph.D. - Director, Applied Science, AWS

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
What does it take to chase greatness in a world of rapid change? Rajeev Kapur, CEO of 1105 Media and bestselling author, shares his journey of saying "yes" to hard things—from navigating international markets in the early days of Gateway and Dell to leading businesses during a global pandemic. His stories reveal the resilience and courage required to build opportunities where none seem to exist. Rajeev opens up about how bold decisions and a mindset of growth led him to write his first book, Chase Greatness: Enlightened Leadership for the Next Generation of Disruption, during the challenges of COVID-19. He outlines his "GREAT" framework—Gratitude, Resilience, Empathy, Accountability, and Transparency—essential traits for leaders facing demographic shifts, technological disruptions, and cultural divides. This episode also explores Rajeev's second book, AI Made Simple: A Beginner's Guide to Generative Intelligence, written for those intimidated by technology. He explains how generative AI can empower individuals and why understanding its potential and risks is crucial in today's world. From harnessing AI for business to embracing deep fakes as a societal challenge, Rajeev makes the case for thoughtful leadership in an era of disruption. Whether you're a CEO, entrepreneur, or aspiring thought leader, Rajeev's insights will inspire you to step up, stay curious, and embrace the hard path to success. Three Key Takeaways: • Saying Yes to Hard Challenges: Taking on difficult tasks and stepping out of your comfort zone can fast-track career growth and open unexpected opportunities. • The "GREAT" Leadership Framework: Gratitude, Resilience, Empathy, Accountability, and Transparency are essential traits for navigating disruption and leading effectively in today's fast-changing world. 
• AI is for Everyone: Understanding and embracing AI's potential—even for non-technical individuals—can drive innovation and create new opportunities, but leaders must also be mindful of its risks, like deep fakes and ethical concerns. Rajeev has a GREAT framework and you can too! Thought Leadership consists of four components that must work together. You can learn more about them here.
In this episode of the Pipeliners Podcast, Clint Bodungen of ThreatGEN joins to discuss the intersection of gamification and generative AI in training. The conversation explores how these innovative tools can simulate cybersecurity training scenarios and enhance incident response exercises. Clint and Russel also discuss their collaboration on a PHMSA-sponsored R&D project, where they aim to adapt these technologies for the pipeline industry, offering an exciting glimpse into the potential of AI-driven, multiplayer training environments for pipeline operators. Visit PipelinePodcastNetwork.com for a full episode transcript, as well as detailed show notes with relevant links and insider term definitions.
Suresh Vasudevan, CEO of Sysdig, discusses the evolving challenges of cloud security incident response and the need for new approaches to mitigate organizational risk.

Topics Include:
Cybersecurity regulations mandate incident response reporting
Challenges of cloud breach detection and response
Complex cloud attack patterns: reconnaissance, lateral movement, exploit
Rapid exploitation: minutes vs. days for on-prem
Importance of runtime, identity, and control plane monitoring
Limitations of EDR and SIEM tools for cloud
Coordinated incident response across security, DevOps, executives
Criticality of pre-defined incident response plans
Increased CISO personal liability risk and mitigation
Documenting security team's diligence to demonstrate due care
Establishing strong partnerships with legal and audit teams
Covering defensive steps in internal communications
Sysdig's cloud-native security approach and Falco project
Balancing prevention, detection, and response capabilities
Integrating security tooling with customer workflows and SOCs
Providing 24/7 monitoring and rapid response services
Correlating workload, identity, and control plane activities
Detecting unusual reconnaissance and lateral movement behaviors
Daisy-chaining events to identify potential compromise chains
Tracking historical identity activity patterns for anomaly detection
Aligning security with business impact assessment and reporting
Adapting SOC team skills for cloud-native environments
Resource and disruption cost concerns for cloud agents
Importance of "do no harm" philosophy for response
Enhancing existing security data sources with cloud context
Challenges of post-incident forensics vs. real-time response
Bridging security, DevOps, and executive domains
Establishing pre-approved incident response stakeholder roles
Maintaining documentation to demonstrate proper investigation
Evolving CISO role and personal liability considerations
Proactive management of cyber risk at board level
Developing strong general counsel and audit relationships
Transparency in internal communications to avoid discovery risks
Security teams as business partners, not just technicians
Sysdig's cloud security expertise and open-source contributions

Participants:
Suresh Vasudevan – CEO, Sysdig

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
From hard-coded credentials to boardroom buy-in, join four tech security leaders from Clumio, MongoDB, Symphony and AWS, as they unpack how building the right security culture can be your organization's strongest defense against cyber threats.

Topics Include:
Security culture is crucial for managing organizational cyber risk
Good culture enables quick decision-making without constant expert consultation
Many security incidents occur from well-meaning people getting duped
Panel includes leaders from AWS, Symphony, MongoDB, and Clumio
Measuring security culture requires both quantitative and qualitative metrics
Board-level engagement indicates organizational security culture maturity
Self-reporting of security incidents shows positive cultural development
Security committees' participation helps measure cultural engagement
Hard-coded credentials remain a persistent problem across organizations
Internal audits and risk committees strengthen security governance
Public security incidents change board conversations about priorities
Leadership vulnerability and transparency help build trust
Being pragmatic beats emotional responses in security leadership
Security programs should align with business revenue goals
Customer security requirements drive program improvements
Excessive security questionnaires drain resources from actual security
Security culture started as exclusionary, evolved toward collaboration
Financial institutions often create unnecessary compliance burden
Early security involvement in product development prevents delays
Security teams must match development team speed
Trust between security and development teams enables efficiency
Small security teams can support large enterprise requirements
Vendor partnerships help scale security capabilities
Process changes work better than adding security tools
Security leaders need deep business knowledge
Technical depth and breadth remain essential skills
Evangelism capability critical for security leadership success
Influencing without authority key for security effectiveness
Crisis moments create opportunities for security improvement
Socializing between security and development teams builds trust
DEF CON attendance helps developers understand security perspective
Bug bounty programs provide continuous security feedback
Regular informal meetings between teams improve collaboration
Building personal relationships improves security outcomes
Modern security leadership requires balance of IQ and EQ

Participants:
Jacob Berry – Head of Information Security, Clumio
George Gerchow – Interim CISO, Head of Trust, MongoDB
Brad Levy – Chief Executive Officer, Symphony
Brendan Staveley – Global Sales Leader, Security Services, Amazon Web Services

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
Our first episode of 2025 will feature the Indian epic "Mahabharata," authored by Sage Vedavyasa. Our host, Preethy Padmanabhan, guides an inspiring conversation with Tatyana Mamut, Co-founder and CEO of Wayfound. They discuss the concept of Dharma (duty, moral law) that plays a central role in the Mahabharata. Tatyana was the Board Director and Chair of the Comp Committee for UserTesting's IPO on NYSE. She is a leader and builder of successful new products and businesses at Nextdoor, Amazon, Salesforce, IDEO, and Pendo. She has 15 patents and patents pending for technology inventions, including 5 for generative AI innovations.

Topics:
00:00 Introduction to Leadership and Time
00:48 Welcome to 10x Growth Strategies Podcast
01:15 Guest Introduction: Tatyana Mamut
01:54 Tatyana's Personal Journey
04:09 Exploring the Mahabharata
07:29 Leadership Lessons from the Mahabharata
10:58 Key Takeaways on Leadership
15:40 Understanding Dharma and Duty
25:07 Lessons from Arjuna
34:11 Self-Awareness and Overcoming Failures
38:40 Final Words of Wisdom

Read the book: Amazon - https://a.co/d/gyKPLHi Wikipedia - https://en.wikipedia.org/wiki/Mahabharata
AWS executive Giancarlo Casella explains how organizations can navigate global privacy regulations and achieve compliant international expansion using AWS's privacy reference architecture.

Topics Include:
Welcome to executive forum on security and Gen AI
Introduction of Giancarlo Casella from AWS Security Assurance Services
AWS helps organizations with compliance and audit readiness
Global expansion requires understanding local privacy laws
Germany and France interpret GDPR differently
Germany has Federal Data Protection Act (BDSG)
France focuses on consumer privacy through CNIL
Risk of non-compliance includes fines and reputation damage
Privacy laws existed in only 10 countries in 2000
EU Privacy Directive of 1990 was prominent
By 2010, forty countries had privacy laws
HIPAA and GLBA introduced in United States
Now over 150 countries have privacy regulations
75% of world population under privacy laws soon
Regulations are vague and open to interpretation
GDPR example: encryption requirements lack specificity
Need right stakeholders for privacy compliance
Legal team must lead privacy interpretation
Engineering implements technical privacy aspects
Risk and compliance teams coordinate evidence gathering
Data Protection Officer oversees entire program
CIO, CTO, CISO alignment creates strong foundation
Security transforms from bureaucratic to revenue enabler
AWS develops cloud-specific privacy reference architecture
Industry standards provide guidance frameworks
AWS privacy reference architecture focuses on cloud specifics
Data minimization and individual autonomy are key
Case study: Middle Eastern AI company expands to Canada
Company used CCTV at gas stations
Created privacy baseline and roadmap
Data flow documentation essential for compliance
Continuous compliance strategy helps enable success
Aligning stakeholders across different organizational lines
Future of US federal privacy regulation discussed
Discussion of responsible AI usage requirements

Participants:
Giancarlo Casella - Head of Business Development and Growth Strategies, AWS Security Assurance Services

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
Fat-Burning Man by Abel James (Video Podcast): The Future of Health & Performance
Technology is disrupting the music industry once again. In this week's special 4-part bonus series, you'll hear rapid fire interviews with incredible musicians who are using cutting edge podcasting tech to reinvent the music industry. We're kicking off this bonus series with one of the top artists in the world of value-for-value music, Ainsley Costello. Ainsley began performing in school and community musical theatre productions at the age of 7. Fueled by her passion and determination, she began taking classes at Berklee College of Music online at just 15 years old. A rare combination of raw talent and relentless ambition, Ainsley has a resume that rivals those of artists twice her age, graduating from Berklee magna cum laude with a degree in music business at the age of 19, with over 20 commercial single releases and 200 shows in 20+ states under her belt—no big deal. And I get the feeling that she's just getting warmed up. In this episode with Ainsley Costello, you'll hear: What really happens when artists sign their lives away to record companies The future of music in the age of generative AI How artists and DJs are using podcasting technology to reimagine the music industry A few inspiring words that might just convince you to chase your dreams And more… Make sure to listen to the end of this interview to hear one of Ainsley's tunes, “Cherry on Top,” the first song ever to hit one million Satoshis on Wavlake! And if you'd like to drop some “sats” in the tip jar for today's episode, which will be split 50/50 with Ainsley, simply: Download the Fountain.fm app Add a few bucks to your lightning wallet Find this episode on Fountain, click the lightning icon, and send us a Boost with an optional message If you'd like to hear Ainsley LIVE, we'll be sharing the stage this Monday, 12/16 at Antone's in Austin, TX. 
>> Get your tickets ($10): AntonesNightclub.com >> Or join the Livestream on Tunestr or Adam Curry's Boostagram Ball podcast (Show starts at 6pm Central / 7pm Eastern / 4pm Pacific on Monday, December 16, 2024) Go to AinsleyCostello.com to connect with Ainsley and hear a sampling of her music Read the show notes: https://fatburningman.com/ainsley-costello-how-bitcoin-is-disrupting-the-music-industry