POPULARITY
Categories
For episode 590 of the BlockHash Podcast, host Brandon Zemp is joined by Jeff Handler, CCO of OpenTrade, an institutional-grade platform delivering real-world asset-backed yield on USDC, USDT, and EURC. ⏳ Timestamps: (0:00) Introduction(1:08) Who is Jeff Handler?(4:12) Importance of Yield-based Stablecoins(7:10) Typical clients(11:03) Stablecoin Yield use-cases in Colombia(15:22) Impact of the Genius Act(17:47) Future of RWAs in Finance(21:54) Onboarding for Clients(24:18) APIs & GraphQL(24:37) OpenTrade Roadmap(26:28) Events & Conferences(27:12) OpenTrade website & socials
In this episode of The Product Experience, Randy Silver and Lily Smith sit down with Katja Forbes, Executive Director at Standard Chartered Bank, design leader, and lecturer, to explore the fast-approaching world of machine customers.Katja shares why businesses must prepare for a future where AI agents, autonomous vehicles, and procurement bots act as customers, and what this means for product managers, designers, and organisations.Key takeawaysMachine customers are here already. From booking services for Tesla cars to procurement bots closing contracts, AI-driven commerce is no longer hypothetical.APIs are necessary but insufficient. Businesses need to think beyond plumbing and address trust, compliance, and customer experience for non-human agents.Signal clarity matters. Organisations must make their value propositions machine-readable to remain competitive.Trust will be quantified. Compliance signals, ESG proof, uptime guarantees, and reliability ratings will replace human gut instinct.New roles will emerge. Trust analysts and human–machine hybrid coordinators will be critical in shaping future interactions.Ethics cannot be ignored. Without careful design, agentic commerce could amplify consumerism and poor societal outcomes.Practical first step. Even small businesses can prepare by structuring their product and service data into machine-readable formats.Product managers must adapt. The skill to manage ambiguity, think systemically, and anticipate unintended consequences will be central to success.Featured Links: Follow Katja on LinkedIn | Katja's website | Sign-up for pre sale access to Katja's forthcoming book 'The CX Evolutionist'Our HostsLily Smith enjoys working as a consultant product manager with early-stage and growing startups and as a mentor to other product managers. She's currently Chief Product Officer at BBC Maestro, and has spent 13 years in the tech industry working with startups in the SaaS and mobile space. She's worked on a diverse range of products – leading the product teams through discovery, prototyping, testing and delivery. Lily also founded ProductTank Bristol and runs ProductCamp in Bristol and Bath. Randy Silver is a Leadership & Product Coach and Consultant. He gets teams unstuck, helping you to supercharge your results. Randy's held interim CPO and Leadership roles at scale-ups and SMEs, advised start-ups, and been Head of Product at HSBC and Sainsbury's. He participated in Silicon Valley Product Group's Coaching the Coaches forum, and speaks frequently at conferences and events. You can join one of communities he runs for CPOs (CPO Circles), Product Managers (Product In the {A}ether) and Product Coaches. He's the author of What Do We Do Now? A Product Manager's Guide to Strategy in the Time of COVID-19. A recovering music journalist and editor, Randy also launched Amazon's music stores in the US & UK.
Mike Cyger introduces Nerve.io. On today's show, domain investor and entrepreneur Michael Cyger introduces Nerve.io, a project he's been working on to make domain management and access through APIs easier. He explains the frustrations people have with domain registrar APIs and how Nerve can help people manage their portfolios or build applications. We also discuss […] Post link: Domain name nerve center – DNW Podcast #549 © DomainNameWire.com 2025. This is copyrighted content. Domain Name Wire full-text RSS feeds are made available for personal use only, and may not be published on any site without permission. If you see this message on a website, contact editor (at) domainnamewire.com. Latest domain news at DNW.com: Domain Name Wire.
Elon Musk's xAI is working on a secretive project called MacroHard, designed to recreate Microsoft's core software products using AI alone. The internal effort uses Grok and other xAI models to simulate tools like Excel, Word, Windows, and GitHub, without relying on human-written code or Microsoft's APIs. This article breaks down Musk's strategy, how AI agents are being trained to function as full-stack developers, and why this could challenge Microsoft's dominance in enterprise software.
Dave Sobel interviews John Harden, the director of strategy and technology evangelism at Auvik, discussing the evolution of SaaS management and its growing adoption in the industry. Since Auvik's acquisition of SaaSlio in 2022, the company has invested significantly in engineering efforts to enhance its SaaS management capabilities. Harden highlights the increasing need for visibility into SaaS applications due to rising cybersecurity threats and the growing importance of AI in business environments. He emphasizes that many organizations are now recognizing the necessity of understanding their SaaS assets, particularly in light of the proliferation of AI tools.The conversation delves into the different ways organizations are consuming AI, with smaller companies typically using AI through SaaS applications, while larger organizations may develop their own models via APIs. Harden explains how Auvik's SaaS management platform provides visibility into both categories, allowing businesses to monitor AI usage and manage potential risks associated with shadow IT. He also discusses the recent release of SaaSOps, which enhances visibility and integrates with popular tools to provide deeper insights into API usage and license management.As organizations begin to shift back to on-premises servers due to the high costs associated with AI workloads, Auvik has responded by introducing server management capabilities. Harden notes that this new feature allows for comprehensive monitoring of on-premises infrastructure, ensuring that businesses can effectively manage their IT assets regardless of where they are hosted. This adaptability is crucial as companies navigate the complexities of their IT environments, whether they are utilizing cloud services or traditional on-premises solutions.Looking ahead, Harden expresses optimism about the growth of compliance and governance, risk, and compliance (GRC) solutions, which he believes will foster stronger relationships between managed service providers (MSPs) and their clients. He emphasizes the importance of asset visibility in achieving compliance and cybersecurity goals, as well as in developing AI strategies. By continuing to expand its asset visibility portfolio, Auvik aims to support MSPs in meeting the evolving needs of their customers in a rapidly changing technological landscape. All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on:LinkedIn: https://www.linkedin.com/company/28908079/YouTube: https://youtube.com/mspradio/Facebook: https://www.facebook.com/mspradionews/Instagram: https://www.instagram.com/mspradio/TikTok: https://www.tiktok.com/@businessoftechBluesky: https://bsky.app/profile/businessof.tech
Brendan Keeler's path into healthcare interoperability has been anything but straightforward. After early stints implementing Epic in the U.S. and Europe, he helped hundreds of startups connect to provider and payer systems at Redox, Zus Health and Flexpa before taking the reins of the Interoperability Practice at HTD Health. Along the way, his Health API Guy blog turned dense policy updates into plain-language guides, earning a following among developers, executives and regulators. In this episode, Keith Figlioli sits down with Keeler to examine the “post-Meaningful-Use” moment. They discuss how national networks like Carequality and CommonWell solved much of the provider-to-provider exchange problem, only to expose new gaps for payers, life-science firms and patients. Keeler says the real action right now is in three places where the biggest, most dramatic changes are about to happen: Antitrust pressure on dominant EHRs. Epic's push into ERP, payer platforms and life-sciences services could trigger “leveraging” claims that force unbundling, similar to cases already moving through federal court. Information-blocking enforcement. Recent lawsuits show courts siding with smaller vendors when incumbents restrict data access, a trend Keeler believes could unwind long-standing moats around systems of record. A CMS-led shift from policy to execution. With ONC budgets flat, Keeler sees CMS using its purchasing power to unblock Medicare claims data at the point of care, expand Blue Button APIs, and accelerate work on a national provider directory, digital ID and trusted exchange frameworks. Keeler's optimism is pragmatic. AI agents may someday chip away at entrenched EHR “data gravity,” but real progress, he says, will come from steady, bipartisan layering of HIPAA, Cures Act and TEFCA foundations. He also pushes back on venture capital's “system-of-action” thesis. Enterprise EHRs remain sticky because switching costs—massive data migration and workflow retraining—are measured in decades, not funding cycles. AI could reduce these problems, but only slowly and only if underpinned by trusted exchange standards. Zooming out, Keeler describes a policy arc that starts with provider-to-provider exchange, widens to payer and patient access, and ultimately points toward a nationwide digital ID that could streamline consent and credentialing. For innovators, his north star is clear: build for identity-verified, standards-based exchange; assume open APIs will become table stakes; and judge success by the friction you subtract from everyday care—not by how flashy the demo is. To hear Brendan Keeler and Keith unpack these issues, listen to this episode of Healthcare is Hard: A Podcast for Insiders. Please note that this episode was recorded earlier this summer, before the CMS meeting, and that some developments have occurred since then.
What have we learned so far this season about the realities of rental and product-as-a-service models, and where does technology really make the difference?In this special mid-season reflection of HappyPorch Radio, hosts Barry O'Kane, Jo Weston and Tandi Tuakli look back at the conversations so far, drawing out the common themes, challenges and opportunities from entrepreneurs, academics and technology providers working at the forefront of circularity.We revisit highlights from:Refulfil – Danai Osmond explained how smart reverse logistics and return flows can unlock circular commerce, reduce waste, and make reuse systems viable at scaleBaboodle – Katie Hanton-Parr explored rental models for children's products and family life, showing how convenience and flexibility can drive adoption alongside sustainability goalsBlack Winch – Yann Toutant brought insights from circular business strategy and advisory work, highlighting the organisational and financial challenges of scaling circular modelsSupercycle – Ryan Atkins discussed tackling e-bike refurbishment and how service-based models can support both sustainable transport and profitable growthCircularity.fm – Patrick Hypscher highlighted product-as-a-service models and the importance of building flexible, modular tech stacks that enable iteration and long-term resilienceLeah Pollen – emphasised that circularity alone doesn't automatically solve issues like waste or planned obsolescence, but that models such as device leasing can align incentives and create meaningful opportunities for reuse and longer product lifeLucy Wishart – explored how reducing “consumer work” through seamless services like delivery, setup, and return can make rental experiences more attractive, and how community engagement can amplify the reach and impact of circular models✨ In this episode:We reflect on the wider context, including why global circularity has fallen to just 6.9%We explore insights from guests tackling logistics, finance, customer experience and design for durabilityWe hear how service excellence and reducing “consumer work” are proving key to rental adoptionWe discuss the role of technology, from scrappy spreadsheets to IoT and APIs, and why flexibility mattersWe highlight the importance of ecosystems, partnerships and mindset shifts inside organisationsWe share our takeaways so far, and what we're excited to explore in the rest of the season
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss AI data privacy and how AI companies use your data, especially with free versions. You will learn how to approach terms of service agreements. You will understand the real risks to your privacy when inputting sensitive information. You will discover how AI models train on your data and what true data privacy solutions exist. Watch this episode to protect your information! Watch the video here: Can’t see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-ai-data-privacy-review.mp3 Download the MP3 audio here. Need help with your company’s data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! [podcastsponsor] Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn – 00:00 In this week’s In Ear Insights, let’s address a question and give as close to a definitive answer as we can—one of the most common questions asked during our keynotes, our workshops, in our Slack Group, on LinkedIn, everywhere: how do AI companies use your data, particularly if using the free version of a product? A lot of people say, “Be careful what you put in AI. It can learn from your data. You could be leaking confidential data. What’s going on?” So, Katie, before I launch into a tirade which could take hours long, let me ask you, as someone who is the less technical of the two of us, what do you think happens when AI companies are using your data? Katie Robbert – 00:43 Well, here’s the bottom line for me: AI is any other piece of software that you have to read the terms in use and sign their agreement for. Great examples are all the different social media platforms. And we’ve talked about this before, I often get a chuckle—probably in a more sinister way than it should be—of people who will copy and paste this post of something along the lines of, “I do not give Facebook permission to use my data. I do not give Facebook permission to use my images.” And it goes on and on, and it says copy and paste so that Facebook can’t use your information. And bless their hearts, the fact that you’re on the platform means that you have agreed to let them do so. Katie Robbert – 01:37 If not, then you need to have read the terms, the terms of use that explicitly says, “By signing up for this platform, you agree to let us use your information.” Then it sort of lists out what it’s going to use, how it’s going to use it, because legally they have to do that. When I was a product manager and we were converting our clinical trial outputs into commercial products, we had to spend a lot of time with the legal teams writing up those terms of use: “This is how we’re going to use only marketing data. This is how we’re going to use only your registration form data.” When I hear people getting nervous about, “Is AI using my data?” My first thought is, “Yeah, no kidding.” Katie Robbert – 02:27 It’s a piece of software that you’re putting information into, and if you didn’t want that to happen, don’t use it. It’s literally, this is why people build these pieces of software and then give them away for free to the public, hoping that people will put information into them. In the case of AI, it’s to train the models or whatever the situation is. At the end of the day, there is someone at that company sitting at a desk hoping you’re going to give them information that they can do data mining on. That is the bottom line. I hate to be the one to break it to you. We at Trust Insights are very transparent. We have forms; we collect your data that goes into our CRM. Katie Robbert – 03:15 Unless you opt out, you’re going to get an email from us. That is how business works. So I guess it was my turn to go on a very long rant about this. At the end of the day, yes, the answer is yes, period. These companies are using your data. It is on you to read the terms of use to see how. So, Chris, my friend, what do we actually—what’s useful? What do we need to know about how these models are using data in the publicly available versions? Christopher S. Penn – 03:51 I feel like we should have busted out this animation. Katie Robbert – 03:56 Oh. I don’t know why it yells at the end like that, but yes, that was a “Ranty Pants” rant. I don’t know. I guess it’s just I get frustrated. I get that there’s an education component. I do. I totally understand that new technology—there needs to be education. At the end of the day, it’s no different from any other piece of software that has terms of use. If you sign up with an email address, you’re likely going to get all of their promotional emails. If you have to put in a password, then that means that you are probably creating some kind of a profile that they’re going to use that information to create personas and different segments. If you are then putting information into their system, guess what? Katie Robbert – 04:44 They have to store that somewhere so that they can give it back to you. It’s likely on a database that’s on their servers. And guess who owns those servers? They do. Therefore, they own that data. So unless they’re doing something allowing you to build a local model—which Chris has covered in previous podcasts and livestreams, which you can go to Trust Insights.AI YouTube, go to our “So What” playlist, and you can find how to build a local model—that is one of the only ways that you can fully protect your data against going into their models because it’s all hosted locally. But it’s not easy to do. So needless to say, Ranty Pants engaged. Use your brains, people. Christopher S. Penn – 05:29 Use your brains. We have a GPT. In fact, let’s put it in this week’s Trust Insights newsletter. If you’re not subscribed to it, just go to Trust Insights.AI/newsletter. We have a GPT—just copy and paste the terms of service. Copy paste the whole page, paste in the GPT, and we’ll tell you how likely it is that you have given permission to a company to train on your data. With that, there are two different vulnerabilities when you’re using any AI tool. The first prerequisite golden rule: if you ain’t paying, you’re the product. We warn people about this all the time. Second, the prompts that you give and their responses are the things that AI companies are going to use to train on. Christopher S. Penn – 06:21 This has different implications for privacy depending on who you are. The prompts themselves, including all the files and things you upload, are stored verbatim in every AI system, no matter what it is, for the average user. So when you go to ChatGPT or Gemini or Claude, they will store what you’ve prompted, documents you’ve uploaded, and that can be seen by another human. Depending on the terms of service, every platform has a carve out saying, “Hey, if you ask it to do something stupid, like ‘How do I build this very dangerous thing?’ and it triggers a warning, that prompt is now eligible for human review.” That’s just basic common sense. That’s one side. Christopher S. Penn – 07:08 So if you’re putting something there so sensitive that you cannot risk having another human being look at it, you can’t use any AI system other than one that’s running on your own hardware. The second side, which is to the general public, is what happens with that data once it’s been incorporated into model training. If you’re using a tool that allows model training—and here’s what this means—the verbatim documents and the verbatim prompts are not going to appear in a GPT-5. What a company like OpenAI or Google or whoever will do is they will add those documents to their library and then train a model on the prompt and the response to say, “Did this user, when they prompted this thing, get a good response?” Christopher S. Penn – 07:52 If so, good. Let’s then take that document, digest it down into the statistics that it makes up, and that gets incorporated into the rest of the model. The way I explain it to people in a non-technical fashion is: imagine you had a glass full of colored sand—it’s a little rainbow glass of colored sand. And you went out to the desert, like the main desert or whatever, and you just poured the glass out on the ground. That’s the equivalent of putting a prompt into someone’s trained data set. Can you go and scoop up some of the colored sand that was your sand out of the glass from the desert? Yes, you can. Is it in the order that it was in when you first had it in the glass? It is not. Christopher S. Penn – 08:35 So the ability for someone to reconstruct your original prompts and the original data you uploaded from a public model, GPT-5, is extremely low. Extremely low. They would need to know what the original prompt was, effectively, to do that, which then if they know that, then you’ve got different privacy problems. But is your data in there? Yes. Can it be used against you by the general public? Almost certainly not. Can the originals be seen by an employee of OpenAI? Yes. Katie Robbert – 09:08 And I think that’s the key: so you’re saying, will the general public see it? No. But will a human see it? Yes. So if the answer is yes to any of those questions, that’s the way that you need to proceed. We’ve talked about protected health information and personally identifiable information and sensitive financial information, and just go ahead and not put that information into a large language model. But there are systems built specifically to handle that data. And just like a large language model, there is a human on the other side of it seeing it. Katie Robbert – 09:48 So since we’re on the topic of data privacy, I want to ask your opinion on systems like WhatsApp, because they tend to pride themselves, and they have their commercials. Everything you see on TV is clearly the truth. There’s no lies there. They have their commercials saying that the data is fully encrypted in such a way that you can pass messages back and forth, and nobody on their team can see it. They can’t understand what it is. So you could be saying totally heinous things—that’s sort of what they’re implying—and nobody is going to call you out on it. How true do you think that is? Christopher S. Penn – 10:35 There are two different angles to this. One is the liability angle. If you make a commercial claim and then you violate that claim, you are liable for a very large lawsuit. On the one hand is the risk management side. On the other hand, as reported in Reuters last week, Meta has a very different set of ethics internally than the rest of us do. For the most part, there’s a whole big exposé on what they consider acceptable use for their own language models. And some of the examples are quite disturbing. So I can’t say without looking at the codebase or seeing if they have been audited by a trustworthy external party how trustworthy they actually are. There are other companies and applications—Signal comes to mind—that have done very rigorous third-party audits. Christopher S. Penn – 11:24 There are other platforms that actually do the encryption in the hardware—Apple, for example, in its Secure Enclave and its iOS devices. They have also submitted to third-party auditing firms to audit. I don’t know. So my first stop would be: has WhatsApp been audited by a trusted impartial third-party? Katie Robbert – 11:45 So I think you’re hitting on something important. That brings us back to the point of the podcast, which is, how much are these open models using my data? The thing that you said that strikes me is Meta, for example—they have an AI model. Their view on what’s ethical and what’s trustworthy is subjective. It’s not something that I would necessarily agree with, that you would necessarily agree with. And that’s true of any software company because, once again, at the end of the day, the software is built by humans making human judgments. And what I see as something that should be protected and private is not necessarily what the makers of this model see as what should be protected and private because it doesn’t serve their agenda. We have different agendas. Katie Robbert – 12:46 My agenda: get some quick answers and don’t dig too deep into my personal life; you stay out of it. They’re like, “No, we’re going to dig deeper because it’s going to help us give you more tailored and personalized answers.” So we have different agendas. That’s just a very simple example. Christopher S. Penn – 13:04 It’s a simple example, but it’s a very clear example because it goes back to aligning incentives. What are the incentives that they’re offering in exchange for your data? What do you get? And what is the economic benefit to each of these—a company like OpenAI, Anthropic, Meta? They all have economic incentives, and part of responsible use of AI for us as end users is to figure out what are they incentivizing? And is that something that is, frankly, fair? Are you willing to trade off all of your medical privacy for slightly better ads? I think most people say probably no. Katie Robbert – 13:46 Right. Christopher S. Penn – 13:46 That sounds like a good deal to us. Would you trade your private medical data for better medical diagnosis? Maybe so, if we don’t know what the incentives are. That’s our first stop: to figure out what any company is doing with its technology and what their incentives are. It’s the old-fashioned thing we used to do with politicians back when we cared about ethics. We follow the money. What is this politician getting paid? Who’s lobbying them? What outcomes are they likely to generate based on who they’re getting money from? We have to ask the same thing of our AI systems. Katie Robbert – 14:26 Okay, so, and I know the answer to this question, but I’m curious to hear your ranty perspective on it. How much can someone claim, “I didn’t know it was using my data,” and call up, for lack of a better term, call up the company and say, “Hey, I put my data in there and you used it for something else. What the heck? I didn’t know that you were going to do that.” How much water does that hold? Christopher S. Penn – 14:57 About the same as that Facebook warning—a copy and paste. Katie Robbert – 15:01 That’s what I thought you were going to say. But I think that it’s important to talk about it because, again, with any new technology, there is a learning curve of what you can and can’t do safely. You can do whatever you want with it. You just have to be able to understand what the consequences are of doing whatever you want with it. So if you want to tell someone on your team, “Hey, we need to put together some financial forecasting. Can you go ahead and get that done? Here’s our P&L. Here’s our marketing strategy for the year. Here’s our business goals. Can you go ahead and start to figure out what that looks like?” Katie Robbert – 15:39 A lot of people today—2025, late August—are, “it’s probably faster if I use generative AI to do all these things.” So let me upload my documents and let me have generative AI put a plan together because I’ve gotten really good at prompting, which is fine. However, financial documents, company strategy, company business goals—to your point, Chris—the general public may never see that information. They may get flavors of it, but not be able to reconstruct it. But someone, a human, will be able to see the entire thing. And that is the maker of the model. And that may be, they’d be, “Trust Insights just uploaded all of their financial information, and guess what? They’re one of our biggest competitors.” Katie Robbert – 16:34 So they did that knowingly, and now we can see it. So we can use that information for our own gain. Is that a likely scenario? Not in terms of Trust Insights. We are not a competitor to these large language models, but somebody is. Somebody out there is. Christopher S. Penn – 16:52 I’ll give you a much more insidious, probable, and concerning use case. Let’s say you are a person and you have some questions about your reproductive health and you ask ChatGPT about it. ChatGPT is run by OpenAI. OpenAI is an American company. Let’s say an official from the US government says, “I want a list of users who have had conversations about reproductive health,” and the Department of Justice issues this as a warranted request. OpenAI is required by law to comply with the federal government. They don’t get a choice. So the question then becomes, “Could that information be handed to the US government?” The answer is yes. The answer is yes. Christopher S. Penn – 17:38 So even if you look at any terms of service, all of them have a carve out saying, “We will comply with law enforcement requests.” They have to. They have to. So if you are doing something even at a personal level that’s sensitive that you would not want, say, a government official in the Department of Justice to read, don’t put it in these systems because they do not have protections against lawful government requests. Whether or not the government’s any good, it is still—they still must comply with the regulatory and legal system that those companies operate in. Things like that. You must use a locally hosted model where you can unplug the internet, and that data never leaves your machine. Christopher S. Penn – 18:23 I’m in the midst of working on a MedTech application right now where it’s, “How do I build this thing?” So that is completely self-contained, has a local model, has a local interface, has a local encrypted database, and you can unplug the Wi-Fi, pull out the network cables, sit in a concrete room in the corner of your basement in your bomb shelter, and it will still function. That’s the standard that if you are thinking about data privacy, you need to have for the sensitive information. And that begins with regulatory stuff. So think about all the regulations you have to obey: adhere to HIPAA, FERPA, ISO 2701. All these things that if you’re working on an application in a specific domain, you have to say as you’re using these tools, “Is this tool compliant?” Christopher S. Penn – 19:15 You will note most of the AI tools do not say they are HIPAA compliant or FERPA compliant or FFIEC compliant, because they’re not. Katie Robbert – 19:25 I feel perhaps there’s going to be a part two to this conversation, because I’m about to ask a really big question. Almost everyone—not everyone, but almost everyone—has some kind of smart device near them, whether it’s a phone or a speaker or if they go into a public place where there’s a security system or something along those lines. A lot of those devices, depending on the manufacturer, have some kind of AI model built in. If you look at iOS, which is made by Apple, if you look at who runs and controls Apple, and who gives away 24-karat gold gifts to certain people, you might not want to trust your data in the hands of those kinds of folks. Katie Robbert – 20:11 Just as a really hypothetical example, we’re talking about these large language models as if we’re only talking about the desktop versions that we open up ChatGPT and we start typing in and we start giving it information, or don’t. But what we have to also be aware of is if you have a smartphone, which a lot of us do, that even if you disable listening, guess what? It’s still listening. This is a conversation I have with my husband a lot because his tinfoil hat is bigger than mine. We both have them, but his is a little bit thicker. We have some smart speakers in the house. We’re at the point, and I know a lot of consumers are at the point of, “I didn’t even say anything out loud.” Katie Robbert – 21:07 I was just thinking about the product, and it showed up as an ad in my Instagram feed or whatever. The amount of data that you don’t realize you’re giving away for free is, for lack of a better term, disgusting. It’s huge. It’s a lot. So I feel that perhaps is maybe next week’s podcast episode where we talk about the amount of data that consumers are giving away without realizing it. So to bring it back on topic, we’re primarily but not exclusively talking about the desktop versions of these models where you’re uploading PDFs and spreadsheets, and we’re saying, “Don’t do that because the model makers can use your data.” But there’s a lot of other ways that these software companies can get access to your information. Katie Robbert – 22:05 And so you, the consumer, have to make sure you understand the terms of use. Christopher S. Penn – 22:10 Yes. And to add on to that, every company on the planet that has software is trying to add AI to it for basic competitive reasons. However, not all APIs are created the same. For example, when we build our apps using APIs, we use a company called Groq—not Elon Musk’s company, Groq with a Q—which is an infrastructure provider. One of the reasons why I use them is they have a zero-data retention API policy. They do not retain data at all on their APIs. So the moment the request is done, they send the data back, it’s gone. They have no logs, so they can’t. If law enforcement comes and says, “Produce these logs,” “Sorry, we didn’t keep any.” That’s a big consideration. Christopher S. Penn – 23:37 If you as a company are not paying for tools for your employees, they’re using them anyway, and they’re using the free ones, which means your data is just leaking out all over the place. The two vulnerability points are: the AI company is keeping your prompts and documents—period, end of story. It’s unlikely to show up in the public models, but someone could look at that. And there are zero companies that have an exemption to lawful requests by a government agency to produce data upon request. Those are the big headlines. Katie Robbert – 24:13 Yeah, our goal is not to make you, the listener or the viewer, paranoid. We really just want to make sure you understand what you’re dealing with when using these tools. And the same is true. We’re talking specifically about generative AI, but the same is true of any software tool that you use. So take generative AI out of it and just think about general software. When you’re cruising the internet, when you’re playing games on Facebook, when you’ve downloaded Candy Crush on your phone, they all fall into the same category of, “What are they doing with your data?” And so you may say, “I’m not giving it any data.” And guess what? You are. So we can cover that in a different podcast episode. Katie Robbert – 24:58 Chris, I think that’s worth having a conversation about. Christopher S. Penn – 25:01 Absolutely. If you’ve got some thoughts about AI and data privacy and you want to share them, pop by our free Slack group. Go to Trust Insights.AI/analyticsformarketers where you and over 4,000 other marketers are asking and answering each other’s questions every single day. And wherever it is you watch or listen to the show, if there’s a channel you’d rather have it on, go to Trust Insights.AI/TIPodcast. You can find us at all the places fine podcasts are served. Thanks for tuning in. We’ll talk to you on the next one. Katie Robbert – 25:30 Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI. Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies. Katie Robbert – 26:23 Trust Insights also offers expert guidance on social media analytics, marketing technology and MarTech selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, DALL-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members such as CMO or data scientist to augment existing teams. Beyond client work, Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the “In-Ear Insights” podcast, the “Inbox Insights” newsletter, the “So What” livestream, webinars, and keynote speaking. What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights is adept at leveraging cutting-edge generative AI techniques like large language models and diffusion, yet they excel at explaining complex concepts clearly through compelling narratives and visualizations. Katie Robbert – 27:28 Data storytelling—this commitment to clarity and accessibility extends to Trust Insights’ educational resources which empower marketers to become more data-driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information. Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.
Talk Python To Me - Python conversations for passionate developers
Python's data stack is getting a serious GPU turbo boost. In this episode, Ben Zaitlen from NVIDIA joins us to unpack RAPIDS, the open source toolkit that lets pandas, scikit-learn, Spark, Polars, and even NetworkX execute on GPUs. We trace the project's origin and why NVIDIA built it in the open, then dig into the pieces that matter in practice: cuDF for DataFrames, cuML for ML, cuGraph for graphs, cuXfilter for dashboards, and friends like cuSpatial and cuSignal. We talk real speedups, how the pandas accelerator works without a rewrite, and what becomes possible when jobs that used to take hours finish in minutes. You'll hear strategies for datasets bigger than GPU memory, scaling out with Dask or Ray, Spark acceleration, and the growing role of vector search with cuVS for AI workloads. If you know the CPU tools, this is your on-ramp to the same APIs at GPU speed. Episode sponsors Posit Talk Python Courses Links from the show RAPIDS: github.com/rapidsai Example notebooks showing drop-in accelerators: github.com Benjamin Zaitlen - LinkedIn: linkedin.com RAPIDS Deployment Guide (Stable): docs.rapids.ai RAPIDS cuDF API Docs (Stable): docs.rapids.ai Asianometry YouTube Video: youtube.com cuDF pandas Accelerator (Stable): docs.rapids.ai Watch this episode on YouTube: youtube.com Episode #516 deep-dive: talkpython.fm/516 Episode transcripts: talkpython.fm Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy
In this episode of the Broadband Bunch, host Pete Pizzutillo sits down with Ronan Kelly, Managing Director of AllPoints Fibre Networks in the UK. Ronan shares his 30-year journey through the broadband industry—from the early days of dial-up with U.S. Robotics to leading innovative fiber deployments across Europe. The conversation explores the consolidation of UK alt-nets, the creation of AllPoints Fibre's wholesale-only model, and the launch of their new Aquila platform, designed to provide a marketplace for ISPs and streamline integration through standards-based APIs. Ronan highlights the challenges of scaling fiber networks, managing technical debt, and why automation and vendor-backed solutions are critical for long-term sustainability. Looking ahead, Ronan offers insights on the role of AI in telecom operations, the importance of embracing change, and how UK market lessons could apply to the U.S. broadband landscape. His reflections on legacy, leadership, and building resilient infrastructure provide valuable takeaways for operators, technologists, and policymakers alike.
Scott and Wes tackle listener questions on everything from local-first databases and AI-built CRMs to protecting APIs and raising kids with healthy digital habits. They also weigh in on Cloudflare's AI crawler ban, portfolio critiques, and more hot takes from the dev world. Show Notes 00:00 Welcome to Syntax! 00:49 Dreaming about web components. 02:55 Local-First Apps for Customer Support. Brought to you by Sentry.io 08:17 AI-Built CRM: Portfolio or Problem? Ben Vinegar's Engineering Interview Strategy. 18:55 InstantDB vs. Other Local-First Databases. 21:46 Raising Kids with Healthy Digital Habits. Porta Potty Prince on TikTok. 32:55 Cloudflare Blocks AI Crawlers. Good for Creators? Cloudflare Pay Per Crawl. Cloudflare No AI Crawl Without Compensation. Chris Coyier's Blog Response. 41:46 Protecting APIs and Obfuscating Source Code. 44:49 Will Portfolio Critiques Return? 46:45 Sick Picks + Shameless Plugs. Sick Picks Scott: Wifi 7 Eero. Wes: Plastic Welder Shameless Plugs Scott: Syntax on YouTube Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
For episode 583 of the BlockHash Podcast, host Brandon Zemp is joined by Stefan Avram, Co-founder and CCO of WunderGraph, the world’s most widely adopted open-source GraphQL Federation solution. ⏳ Timestamps: (0:00) Introduction(0:55) Who is Stefan Avram?(2:59) Tinder for Founders(3:26) What is Wundergraph?(5:20) GraphQL(5:52) Use-cases(7:44) Typical Customer(10:33) Expansion plan for Wundergraph(11:56) Tips & Advice to Founders(16:02) Wundergraph Roadmap(20:49) Wundergraph website, socials & community
When it comes to growing app revenue, there's no shortage of advice — but separating the shiny “growth hacks” from the strategies that actually move the needle is another story. In today's crowded subscription app market, you can't just set a price, launch, and hope for the best. You need to understand your users, their behaviors, and the subtle levers that can turn one-time downloads into long-term customers. In this episode, I'm joined by Jens to draw on his experience both inside a subscription app team and now supporting others, Jens shares practical, tested ways to increase revenue — from smarter pricing strategies to better handling of cancellations and reactivations. If you're building, marketing, or monetizing an app, this conversation is packed with insights you can start applying right away, whether your goal is to capture more value from your most loyal users, adapt pricing for different markets, or simply stop leaving revenue on the table. Today's topics include: The most effective ways to identify untapped revenue opportunities Balancing global pricing strategies with local market purchasing power How to decide what innovations (new tools, APIs) are worth adopting vs which might distract from the core offering Th most overlooked techniques to reduce churn and increase reactivations The role of partnerships in creating long-term customer value Links and Resources: Jens-Fabian Goetzmann on LinkedIn RevenueCat website Business Of Apps - connecting the app industry Quotes from Jens-Fabian Goetzmann “The core offering should always come first. Value needs to be created before it is extracted—monetization works best when you've already built something that engages and retains users.” “Start experimenting with new customers first. They come in without preconceived notions, making it easier to test pricing, tiers, or new models without alienating your existing user base.” “Premature optimization is the biggest distraction. If your business model doesn't work with the basics, you're unlikely to fix it just by tweaking monetization tactics.” Host Business Of Apps - connecting the app industry since 2012
I think we're at the precipice of a pretty significant change in how we build software products. Obviously, the recent ascent of vibe coding and all the agentic coding tools that we find very useful and highly effective shows a difference in how we approach building products. But there's another change - not just in how we build, but in who these products are for.This episode of The Bootstraped Founder is sponsored by Paddle.comThe blog post: https://thebootstrappedfounder.com/building-for-the-age-of-ai-consumers/ The podcast episode: https://tbf.fm/episodes/410-building-for-the-age-of-ai-consumersCheck out Podscan, the Podcast database that transcribes every podcast episode out there minutes after it gets released: https://podscan.fmSend me a voicemail on Podline: https://podline.fm/arvidYou'll find my weekly article on my blog: https://thebootstrappedfounder.comPodcast: https://thebootstrappedfounder.com/podcastNewsletter: https://thebootstrappedfounder.com/newsletterMy book Zero to Sold: https://zerotosold.com/My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/My course Find Your Following: https://findyourfollowing.comHere are a few tools I use. Using my affiliate links will support my work at no additional cost to you.- Notion (which I use to organize, write, coordinate, and archive my podcast + newsletter): https://affiliate.notion.so/465mv1536drx- Riverside.fm (that's what I recorded this episode with): https://riverside.fm/?via=arvid- TweetHunter (for speedy scheduling and writing Tweets): http://tweethunter.io/?via=arvid- HypeFury (for massive Twitter analytics and scheduling): https://hypefury.com/?via=arvid60- AudioPen (for taking voice notes and getting amazing summaries): https://audiopen.ai/?aff=PXErZ- Descript (for word-based video editing, subtitles, and clips): https://www.descript.com/?lmref=3cf39Q- ConvertKit (for email lists, newsletters, even finding sponsors): https://convertkit.com?lmref=bN9CZw
Parag Agrawal is the co-founder and CEO of Parallel, a startup building search infrastructure for the web's second user: AIs. Before launching Parallel, Parag spent over a decade at Twitter, where he served as CTO and later CEO during a period of intense transformation, as well as public scrutiny. In this episode, Parag shares what he learned from his time at Twitter, why the web must evolve to serve AI at massive scale, how Parallel is tackling “deep research” challenges by prioritizing accuracy over speed, and the design choices that make their APIs uniquely agent-friendly. We also discuss: Why Parallel designs for AI as the primary customer Lessons from 11 years at Twitter and applying them to a startup Potential business models to keep the web open for AI Hiring philosophy: balancing high potential and experienced talent The evolving role of engineers in an AI-assisted world Why “agents” are finally becoming useful in production And much more… References: Bloomberg launch coverage: https://www.bloomberg.com/news/articles/2025-08-14/twitter-ex-ceo-parag-agrawal-is-moving-past-his-elon-musk-drama Clay: https://www.clay.com/ Index Ventures: https://www.indexventures.com/ Josh Kopelman: https://www.linkedin.com/in/jkopelman/ KLA: https://www.kla.com/ OpenAI: https://openai.com/ Parallel: https://parallel.ai/ Patrick Collison: https://www.linkedin.com/in/patrickcollison/ Stripe: https://stripe.com/ Where to find Parag: LinkedIn: https://www.linkedin.com/in/paragagr/ X/Twitter: https://x.com/paraga Where to find Todd: LinkedIn: https://www.linkedin.com/in/toddj0/ X/Twitter: https://x.com/tjack Where to find First Round Capital: Website: https://firstround.com/ First Round Review: https://review.firstround.com/ X/Twitter: https://twitter.com/firstround YouTube: https://www.youtube.com/@FirstRoundCapital This podcast on all platforms: https://review.firstround.com/podcast Timestamps: (1:26) Founding Parallel with an AI-first mission (3:23) From Twitter CTO/CEO to startup founder (6:20) What the AI era spells for companies (7:58) The CEO to founder pipeline (11:18) Reflections on Twitter's transformation (17:48) How Parallel was born (22:31) Early use cases for Parallel (31:42) How has Parallel's ICP changed? (34:37) AI's impact on competitor dynamics (36:06) When should founders launch? (37:43) Parag's fundraising framework (40:14) Building a high-impact engineering team (44:49) Counterproductive uses of AI (47:35) How will the software engineer role evolve? (49:10) How are Parallel's customers using AI? (53:27) Defining agents in 2025 (55:02) Parallel's long-term vision (1:03:43) Parag's growth as a founder
In today's episode, NMFTA's Keith Peterson and Farooq Huda of Worldwide Express join us to talk about how the Digital Standards Development Council (DSDC) is changing the game for freight tech! We explore how universal API standards are eliminating repetitive integration work across LTL, full truckload, 3PLs, and shippers, making “build once, use everywhere” a reality. Our guests share real-world adoption from companies like Worldwide Express, the benefits of an ecosystem approach, and why this move toward industry-wide digitalization will improve compliance, reduce back-office overhead, and unlock massive long-term value! DSDC Website: https://dsdc.nmfta.org/home
In this episode we talk to AWS Hero Brian Hough: Vibe Coding with GenAI is fast and fun — until your app has to actually work in production. That's when reality hits: fragile APIs, missing auth, surprise AWS bills, strict constraints, and no clear path to scale. In this Dev Chat, I'll share what it takes to evolve from AI-generated MVPs to real-world, production-ready apps for millions of users. We'll talk infrastructure as code, scaling APIs, adding observability, and building systems that don't break under pressure. If you've used GenAI tools like Amazon Q, Bedrock, or your favorite code copilot, this session will help you ship faster and smarter. 00:00 - Intro 15:43 - Why Vibe Coding Isn't Enough 17:10 - The vibe coded initial app 18:30 - What could possibly go wrong? 24:42 - (Agenda) How we're going to fix the vibe coded app 27:55 - Fixing our vibe code workflow 29:06 - The Architecture 31:29 - Our Toolkit & Fixing all the things! 55:17 - The repo to play along at home! 55:23 - Q&A How to find Brian: https://www.linkedin.com/in/brianhhough/ https://brianhhough.com/ Brian's links: https://github.com/BrianHHough/aws-summit-2025
"You need to have someone that owns it." Connect With Our SponsorsOrthoFi From Start to Strength - https://orthofi.regfox.com/from-start-to-strength-october-2025GreyFinch - https://greyfinch.com/jillallen/SmileSuite - http://getsmilesuite.com/ Summary In this conversation, Jill and Jake discuss the evolution of GreyFinch, the impact of AI on orthodontics, and the importance of maximizing practice management software. They explore how automation can enhance efficiency in dental practices, the significance of open APIs for integration, and future trends in orthodontic software. Jake emphasizes the need for practices to adapt to changing technologies and patient expectations while providing advice on selecting the right software for individual needs. Connect With Our Guest Greyfinch - https://greyfinch.com/ Takeaways Jake has been in the dental and orthodontic space for almost 20 years.GreyFinch aims to change the practice management landscape.AI is disrupting traditional orthodontic practices by improving efficiency.Automation can help practices reduce staffing needs and improve workflows.Open APIs allow for better integration with third-party tools.Practices need to identify bottlenecks to improve efficiency.The future of orthodontics will heavily involve AI and automation.It's essential to have a champion in the office for practice management software.Practices should focus on what their patients want, not just what they want.Choosing the right practice management software is crucial for long-term success.Chapters 00:00 Introduction to Jake and GreyFinch02:51 The Impact of AI on Orthodontics05:58 Maximizing Practice Management Software08:53 Automation in Dental Practices12:05 Understanding and Implementing New Features15:01 Identifying Bottlenecks and Workflow Efficiency20:34 Automating Collections and Workflow Efficiency21:37 Understanding Open APIs and Integrations24:10 Future-Proofing Your Practice Management Software25:55 The Role of AI in Practice Management28:38 Adapting to Change in the Orthodontic Industry31:55 Key Considerations for Choosing Software35:32 Final Thoughts and Industry Trends Are you ready to start a practice of your own? Do you need a fresh set of eyes or some advice in your existing practice?Reach out to me- www.practiceresults.com. If you like what we are doing here on Hey Docs! and want to hear more of this awesome content, give us a 5-star Rating on your preferred listening platform and subscribe to our show so you never miss an episode. New episodes drop every Thursday! Episode Credits: Hosted by Jill AllenProduced by Jordann KillionAudio Engineering by Garrett Lucero
In this episode, Alicia Miller returns to share exciting insights about her new role as Head of Growth Strategy at Aduna Global.The discussion explores Aduna's mission to accelerate the telco network API market by reducing friction and commercializing APIs that are difficult for individual telcos to manage due to antitrust concerns and technical complexities. About Alicia Miller:Global strategic partnerships and business strategy development leader with 15 years experience across industries, including telco network APIs, ecosystem development, and deal negotiation focused on bringing new technologies to market, including 5G and Mobile Edge Compute. Working with some of the largest companies in the world to forge new partnership models and create first-to-market advantage.
Shay Levi (@shaylevi2, CEO @UnframeAI) & Larissa Schneider (COO @UnframeAI) discuss the complexities of building an enterprise-grade AI platform. Topics include what an AI platform is, the advantages of adoption, and the efficiencies gained.SHOW: 948SHOW TRANSCRIPT: The Cloudcast #948 TranscriptSHOW VIDEO: https://youtube.com/@TheCloudcastNET CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotwNEW TO CLOUD? CHECK OUT OUR OTHER PODCAST: "CLOUDCAST BASICS"SPONSORS:[VASION] Vasion Print eliminates the need for print servers by enabling secure, cloud-based printing from any device, anywhere. Get a custom demo to see the difference for yourself.[DoIT] Visit doit.com (that's d-o-i-t.com) to unlock intent-aware FinOps at scale with DoiT Cloud Intelligence.SHOW NOTES:Unframe websiteTopic 1 - Shay & Larissa, welcome to the show! Give everyone a brief introduction and a little about your background. Topic 2 - Today, we're discussing AI Security and Enterprise Platforms. What are the problems or issues you see with AI development today?Topic 3 - Is this where an AI platform comes into play? I'm seeing more and more about this term and wondering what it truly means to be a platform. What is your definition of a platform, and what are the advantages?Topic 4 - Shay, considering your background in APIs and API security, how does that knowledge transfer into this space?Topic 5 - Larissa, with your background in operations, where do you see the inefficiencies in AI development and lifecycle management of the AI models and the datasets?Topic 6 - Let's talk about Unframe. Give everyone an overview. Is this a SaaS service? How and where does it fit into your typical AI development stack?FEEDBACK?Email: show at the cloudcast dot netBluesky: @cloudcastpod.bsky.socialTwitter/X: @cloudcastpodInstagram: @cloudcastpodTikTok: @cloudcastpod
Black Hat 2025: Crogl's CEO Monzy Merza Explains How AI Can Help Eliminate Alert Fatigue in CybersecurityCrogl CEO Monzy Merza discusses how AI-driven security platforms automate alert investigation using enterprise knowledge graphs, enabling analysts to focus on threat hunting while maintaining data privacy.Security teams drowning in alerts finally have a lifeline that doesn't compromise their data sovereignty. At Black Hat USA 2025, Crogl CEO Monzy Merza revealed how his company is tackling one of cybersecurity's most persistent challenges: the overwhelming volume of security alerts that leaves analysts either ignoring potential threats or burning out from investigation fatigue.The problem runs deeper than most organizations realize. Merza observed analysts routinely closing hundreds of alerts with a single click, not from laziness or malice, but from sheer necessity. "When you look at the history of breaches, the signal of the breach was there. And somebody ignored it," he explained during his ITSPmagazine interview, highlighting a critical gap between alert generation and meaningful investigation.Traditional approaches have failed because they expect human analysts to become "unicorns" - experts capable of mastering multiple data platforms simultaneously while remembering complex query languages and schemas. This unrealistic expectation has created what Merza calls the "human unicorn challenge," where organizations struggle to find personnel who can effectively navigate their increasingly complex security infrastructure.Crogl's solution fundamentally reimagines the relationship between human intuition and machine automation. Rather than forcing analysts to adapt to multiple tools, the platform creates a semantic knowledge graph that maps data relationships across an organization's entire security ecosystem. When alerts arrive, the system automatically conducts investigations using established kill chain methodologies, freeing analysts to focus on higher-value activities like threat hunting and strategic security initiatives.The privacy-first architecture addresses growing concerns about data sovereignty. Operating as a completely self-contained system with no internet dependencies, Crogl can run air-gapped in the most sensitive environments, including defense intelligence communities. The platform connects to existing tools through APIs without requiring data movement, duplication, or transformation.Real-world results demonstrate the platform's versatility. One customer discovered their analysts were using Crogl for fraud detection - an application never intended by the original design. The system's ability to process natural language descriptions and convert them into executable security processes has reduced response times from weeks to minutes for complex threat hunting operations.For security leaders evaluating AI integration, Merza advocates an experimental approach. Rather than attempting comprehensive transformation, he suggests starting with focused pilot programs that address specific pain points. This measured strategy allows organizations to validate AI's value while maintaining operational stability.The broader implications extend beyond security operations. By removing technical barriers and emphasizing domain expertise over tool competency, platforms like Crogl enable security teams to become strategic business enablers rather than reactive alert processors. Organizations gain the flexibility to maintain their preferred data architectures while ensuring comprehensive security coverage across distributed environments.As cyber threats continue evolving, the industry's response must prioritize both technological capability and human potential. Solutions that enhance analyst intuition while automating routine tasks represent a sustainable path forward for security operations at scale. Watch the full interview: https://youtu.be/0GqPtPXD2ik Learn more about CROGL: https://itspm.ag/crogl-103909Note: This story contains promotional content. Learn more.Guest: Monzy Merza, Founder and CEO of CROGL | On Linkedin: https://www.linkedin.com/in/monzymerza/ResourcesLearn more and catch more stories from CROGL: https://www.itspmagazine.com/directory/croglAre you interested in telling your story?https://www.itspmagazine.com/telling-your-story
Honeybees are a critical resource for American agriculture. The western honeybee, Apis mellifera, pollinates more than 130 types of nuts, fruits, and vegetables, adding up to $15 billion worth of crops every year. Honeybee health has been harmed by a combination of factors: weather extremes, habitat loss, pesticides, and disease. One of the biggest problems […]
Listen now: Spotify, Apple and YouTubeAs AI agents become the new interface for work, a major question looms: how will your product connect into this ecosystem?In this episode of Supra Insider, Marc and Ben sat down with Reid Robinson, product manager leading AI at Zapier. They talked about the rise of Model Context Protocols (MCPs) — the new standard for connecting AI agents to tools and data sources.Reid explains the fundamentals of MCP clients vs. servers, why the standard is gaining traction across players like Anthropic, OpenAI, and Atlassian, and how product leaders can decide where to start. He also shares concrete examples, from personal productivity hacks to enterprise integrations, showing what's possible when you combine MCP with Zapier's 8,000+ app ecosystem.Whether you're building your first AI copilot, figuring out how to expose your product's data to agents, or just want to understand where this ecosystem is headed, this episode will give you a front-row seat to the future of AI interoperability.All episodes of the podcast are also available on Spotify, Apple and YouTube.New to the pod? Subscribe below to get the next episode in your inbox
What are the advantages of using Polars for your Python data projects? When should you use the lazy or eager APIs, and what are the benefits of each? This week on the show, we speak with Jeroen Janssens and Thijs Nieuwdorp about their new book, _Python Polars: The Definitive Guide_.
In this episode of The Mortar & Pestle: A PCCA Podcast, host Mike de Lisio is joined by PCCA's VP of R&D, Daniel Banov, and returning guest Richie Ray, owner of Richie Specialty Pharmacy, to explore the innovation and impact of SubMagna—a proprietary sublingual base developed by PCCA. They discuss the technical challenges and scientific breakthroughs involved in formulating SubMagna to effectively deliver high molecular weight peptides like semaglutide via the sublingual route. Daniel details the base's development process, including surfactant balancing, solubility enhancement, and permeability optimization, while Richie shares clinical insights, patient responses, and the broader market implications. Together, they highlight SubMagna's potential as a game-changing delivery system for peptides and other challenging APIs, especially in an era of rising demand for non-injectable therapies.
Curious if AI will automate your contract testing—or wreck it? Add AI to Your DevOps Now: https://testguild.me/smartbear In this episode of the DevOps Toolchain Podcast, I sit down with Matt Fellows, co-founder of Pacflow and core maintainer of the PACT framework (now under SmartBear). We dive into the evolution of contract testing, how agentic AI tools like Copilot and Cursor are shaping testing workflows, and what the next 3–5 years might look like for API validation. We also get real about: Why test quality matters more in an AI-driven pipeline How autonomous testing may reshape developer tooling Whether AI-generated tests are improving code or just spreading bugs faster Whether you're leading a QA team, building APIs, or navigating the DevOps–AI intersection, this episode has hard-earned insights from someone shaping the tools used by teams around the world.
Howard is back with another mind-bending episode of Trends With Friends alongside Michael Parekh and special guest Jeff Park, Wall Street alum turned Bitwise crypto Jedi. From the radical rethinking of portfolio construction to the institutionalization of Bitcoin, Jeff unpacks how markets are being rewired in real time. The trio dives deep into the death of the 60/40 portfolio, the rise of Bitcoin treasury companies, the myth of the risk-free rate, and why every investor needs to grapple with global carry dynamics. Plus, they debate Elon's Tesla troubles, BYD's China dominance, why CME is booming, and how prediction markets and vibe coding are reshaping the next-gen financial playbook. If you care about capital markets, crypto, and the future of speculation as entertainment, don't miss this one.Chapters00:00 Meet Jeff Park: From Wall Street to Crypto03:00 Radical Portfolio Theory Explained07:20 Why 60/40 Is Dead and the Global Carry Trade Lives10:45 The Degenerate Economy, Bitcoin Treasury Plays, and the New Risk Paradigm16:00 Institutionalization of Bitcoin, ETFs, and Self-Custody22:00 How Financial Engineering Could Break Bitcoin26:00 Prediction Markets, AI, and Speculation as Income31:00 BYD vs Tesla, Global Retail Flows, and Chart Breakdown37:00 Alpaca, APIs, and Building Brokerages for the World44:00 Vibe Coding, LLMs, and the Coming API ExplosionJoin Our Community! https://stocktwits.com/Sign up for our daily FREE newsletter to keep in touch with the market: https://thedailyrip.stocktwits.com/Disclaimer:All opinions expressed on this show are solely the opinions of the hosts' and guests' and do not reflect the opinions of Stocktwits, Inc. or its affiliates. The hosts are not SEC or FINRA registered advisors or professionals. The content of this show is for educational and entertainment purposes only. Please consult with your financial advisor before making any investment decision. Read the full terms & conditions here: https://stocktwits.com/about/legal/terms/
AWS's Mark Relph draws fascinating parallels between today's AI revolution and the 1900s agricultural mechanization that delivered 2,000% productivity gains, while exploring how agentic AI will fundamentally reshape every aspect of software business models.Topics Include:Mark Relph directs AWS's data and AI partner go-to-market strategy teamHis role focuses on making ISV partners a force multiplier for customer successPreviously ran go-to-market for Amazon Bedrock, AWS's fastest growing service everCurrent AI adoption pace exceeds even the early cloud computing boom yearsHistorical parallel: 1900s agricultural mechanization delivered 2,000% productivity gains and 95% resource reductionFirst commercial self-propelled farming equipment revolutionized entire economies and never looked back500 machines formed the "Harvest Brigade" during WWII, harvesting from Texas to CanadaMark has spoken to 600+ AWS customers about GenAI over two yearsOrganizations range from AI pioneers to those still "fending off pirates" internallyGenAI has become a phenomenal assistant within organizations for content and automationAWS's AI stack has three layers: infrastructure, Bedrock, and applicationsBottom layer provides complete control over training, inference, and custom applicationsMiddle layer Bedrock serves as the "operating system" for generative AI applicationsTop layer offers ready-to-use AI through Q assistants and productivity toolsAI systems are rapidly becoming more complex with multiple model chainsMany current "agents" are just really, really long prompts (Mark's hot take)Task-specific models are emerging as one size won't fit all use casesEvolution moves from human-driven AI to agent-assisted to fully autonomous agentsAgent readiness requires APIs that allow software to interact autonomouslyTraditional UIs become unnecessary when agents interface directly with systemsCore competencies shift when AI handles the actual "doing" of tasksSales and marketing must adapt to agents delivering outcomes autonomouslyGo-to-market strategies need complete rethinking for an agentic worldThe agentic age is upon us and AWS partners should shape the futureParticipants:Mark Relph – Director – Data & AI Partner Go-To-Market, Amazon Web ServicesSee how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
The rise of AI agents is more than a tooling upgrade - it's a fundamental rewiring of the entire developer experience, with your APIs at the very center. We're joined by Matt DeBergalis, co-founder and then-CTO-now-CEO (congrats Matt!) of Apollo GraphQL, to explore this massive transformation. He introduces the emerging concept of "agent experience," explaining why systems built for human developers are not ready for the unprecedented scale of AI calling APIs.Matt argues that as the old rules of software development get re-evaluated, engineering leaders must rethink their entire stack. He presents a powerful analogy: a structured data layer like a graph is the perfect "left brain" for the "right brain" creativity of LLMs. This provides the semantic precision and guardrails needed for AI to act reliably, enabling a future where user experiences are personalized "to 11" and APIs become the core business asset. This conversation is a crucial guide for leaders on how to prepare by prioritizing higher-level system design, and why clear communication and architecture are becoming far more critical than handwriting code.Check out:The DevEx guide to AI-driven software developmentDownload: The 6 trends shaping the future of AI-driven development Follow the hosts:Follow BenFollow AndrewFollow today's guest(s):Explore Apollo GraphQL's graph infrastructure and MCP tooling: ApolloDevConnect with Matt on LinkedInConnect with Andrew Boyagi on LinkedInReferenced in today's show:Anthropic caps Claude Code usageOpenAI introduces study modeReady or not, age verification is rolling out across the internetAtlassian research: AI adoption is rising, but friction persistsSupport the show: Subscribe to our Substack Leave us a review Subscribe on YouTube Follow us on Twitter or LinkedIn Offers: Learn about Continuous Merge with gitStream Get your DORA Metrics free forever
Need to start saving on recurring processing fees? Visit GoCLEARswipe.com to get started with your demo and on-boarding today!
Tony chats with Charles Merritt, Co-Founder and CEO at Buddy, the insurance gateway for people who build software. They help product teams and developers that are being tasked with integrating insurance APIs and help accelerate distribution of digital insurance products.Charles Merritt: https://www.linkedin.com/in/charlesmerritt/Charles first appearance on Profiles in Risk Ep. 298: https://youtu.be/IWI1zxTT6nM?si=yzwTpPQlDLtDFeKpBuddy: https://www.buddy.insure/teamVideo Version: https://youtu.be/jGAp09-Vhyw
Host Dr. Jay Anders welcomes Charles Tuchinda, MD, EVP & COO Hearst Health. They discuss the evolution of FDB, drug databases and e-prescribing, particularly in the world of APIs, as well as the importance of data quality, standardization and human review with respect to AI's role in healthcare. This is a must-listen for TMWIH fans and health tech listeners. Find all of our network podcasts on your favorite podcast platforms and be sure to subscribe and like us. Learn more at www.healthcarenowradio.com/listen
In this episode of Affiliate BI, John Wright sits down with Joe Hatch, Head of Product at StatsDrone, to pull back the curtain on the complexities of affiliate marketing infrastructure, automation, and data reliability. From building in-house stats tools to integrating dynamic variables, Joe gives a deeply informed take on what it really takes to run a successful affiliate operation at scale. They kick off by discussing the illusion that building your own stats platform is easy. Joe details the real challenges — from managing over 1,900 affiliate programs across 90+ software types to maintaining reliable data through inconsistent APIs, scraping, and authentication hurdles. He challenges the belief that AI agents alone can solve these problems, pointing out the edge cases and standardization issues that AI still struggles with. The episode explores practical tools and innovations Joe has been building, including WordPress plugins for Pretty Links and Thirsty Affiliates that simplify dynamic variable tracking. Joe explains how proper click ID management enables deeper analytics and real attribution, turning affiliate sites into powerful CRMs — capable of identifying individual users, tracking their behaviour post-click, and even retaining or reactivating them with targeted offers. The conversation also dives into hot-button issues like shaving and data transparency. Joe explains the importance of change detection and communication between affiliates and operators, while emphasizing that not all data changes are nefarious. For tech-savvy affiliates, Joe lays out integration possibilities using tools like Airtable, Looker Studio, Tableau, and the versatile n8n automation platform. They wrap with a discussion on the future of affiliate marketing as it converges with business intelligence — plus a fun mention of Joe's satirical project, the iGaming Bullshit Generator.
In this episode of the AlchemistX Innovators Inside Podcast, Ian Bergman sits down with Roey Eliyahu and Michael Nicosia, co-founders of Salt Security—the company that pioneered the API security category. Together, they break down the early decisions that helped them go from idea to industry-defining solution.Roey shares how he went from coding at nine in Israel's cybersecurity units to buying a one-way ticket to Silicon Valley with a bold pitch and a few thousand dollars. Michael recounts how their first four-hour meeting turned into a lasting partnership. What followed was a masterclass in customer discovery, iterative product development, and relentless market education.Key takeaways include:How to pitch a complex idea with clarity and impactWhy discovering your APIs is just the beginning of managing riskHow to qualify early adopters and validate product-market fitThe mindset shift required to sell innovation into large enterprisesThe critical role of posture governance in modern cybersecurityHow Salt Security aligned internal incentives with real customer outcomesFrom crawling to running in the API security space, Roey and Michael offer hard-won lessons in navigating ambiguity, building trust with enterprise buyers, and scaling a product that protects some of the world's biggest companies.If you're building a startup or buying enterprise tech, this conversation is packed with strategy, insight, and inspiration.For more episodes and resources, visit https://www.alchemistaccelerator.com/podcasts.
In this conversation, Albert Buu, founder and CEO of Neutron, discusses the evolution of Bitcoin and financial services in Vietnam. He highlights the changing regulatory landscape, the increasing acceptance of Bitcoin and stablecoins, and the innovative offerings of Neutron, including lending products and APIs for businesses. The discussion also touches on the challenges of KYC regulations and the future potential of Bitcoin in Vietnam's economy.Takeaways
Topics covered in this episode: rumdl - A Markdown Linter written in Rust * Coverage 7.10.0: patch* * aioboto3* * You might not need a Python class* Extras Joke Watch on YouTube About the show Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Michael #1: rumdl - A Markdown Linter written in Rust via Owen Lamont Supports toml file config settings Install via uv tool install rumdl. ⚡️ Built for speed with Rust - significantly faster than alternatives
Frank brings apple intelligence APIs to C#! We discuss! https://github.com/praeclarum/CrossIntelligence/ Follow Us Frank: Twitter, Blog, GitHub James: Twitter, Blog, GitHub Merge Conflict: Twitter, Facebook, Website, Chat on Discord Music : Amethyst Seer - Citrine by Adventureface ⭐⭐ Review Us (https://itunes.apple.com/us/podcast/merge-conflict/id1133064277?mt=2&ls=1) ⭐⭐ Machine transcription available on http://mergeconflict.fm
How is your summer so far? I hope you're listening to this episode lying somewhere in a shadow on a beach, and with a breeze keeping you relaxed. I also hope that you are in a company of people you love to hang out with - be it your family or friends. Why? It's because we all belong to a community, right? Yes - this is the moment when I'm tying in my intro with this week's episode's topic
In today's episode of Tech Talks Daily, I sat down with Andy Bell, Head of Data Product Management at Precisely, to explore a challenge that many organizations continue to underestimate: the role of data integrity in AI strategies. With only 12 percent of businesses expressing confidence in the quality of their AI data, it's clear that the rush to implement AI is often outpacing the readiness of the data that supports it. Andy and I unpack what happens when enterprises leap into generative or agentic AI without addressing foundational data issues. From hallucinations to bias to unreliable outputs, the risks are significant. As we discussed, these risks don't just impact models — they erode trust with customers and complicate accountability, especially in regulated industries where traceability is non-negotiable. We then explored the power of third-party data enrichment and how it can offer much-needed context that internal datasets often lack. Andy shared real-world examples, including how a major delivery company saved 65 million dollars by optimizing address accuracy and how San Bernardino County used Precisely's wildfire risk models to improve emergency planning. These aren't abstract use cases — they show measurable business value. Andy also introduced the Precisely Data Link program, a solution designed to make it easier to connect, manage, and query multiple third-party datasets. With persistent IDs and flexible delivery methods through APIs, managed services, and platforms like Snowflake and Databricks, Precisely is helping organizations speed up time to value while reducing integration headaches. Looking ahead, Andy shared how Precisely is building AI capabilities that allow users to query third-party data using natural language. This shift aims to make complex data interactions more intuitive and accessible to business users who may not be data engineers. If data is the fuel for AI, then the quality and context of that data will define the road ahead. Is your organization doing enough to ensure its data can be trusted by the AI it deploys?
Giants were found preserved for thousands of years in “stasis chambers.” Visit https://themetaphysical.tv to support our work! Rumors of ancient “stasis beings” kept alive for centuries in a perpetual sleep have circulated for years. Like something out of science fiction, these sarcophagi supposedly preserve whoever's inside it—until it's time to wake up. There's even footage of an alleged giant-sized king with red hair, a red beard, and golden crown inside one of these boxes. But how do we know if it's real? Has such technology ever been developed in the ancient past? And if it has, what does that mean for us today? Rob and John will talk about the biggest theories and the remote viewing data that John's team gathered about this stasis chamber and more. Then decide what you think. Join investigative researcher Rob Counts and John Vivanco for a Metaphysical show that's out of this world. In this episode: giants, Anunnaki giant kings, Nephilim fallen angels, paranormal creature encounters, secret military missions to capture giants, battles with giants, insight from a Navy Seal, biblical giants and the Nephilim, vibrations and frequencies, shifting densities, ancient tech before the last Ice Age, escaping cataclysms, the tombs at Saqqara, Apis bull tombs, Anunnaki seed beings, ancient advanced technology, ancient civilizations, before Noah's flood, government cover-ups, remote viewing data, psychic abilities, telepathy and mind reading, interplanetary travel, hibernation techniques, psychological operations, hidden agendas, “the good old days” before technology was so essential, black market antiques, the pyramids, supernatural phenomena, ancient astronauts, ancient aliens, Gilgamesh and other giants, real Gilgamesh found
Jesse Burrell is the CEO and co-founder of BatchService, now known as BatchData, a real-time data and API platform designed for prop-tech startups and enterprises requiring massive and current housing data. Jesse was a real estate investor who needed better data to target his marketing efforts. BatchService was launched in 2018 with data brokering and subsequently built additional tools and apps. BatchService grew rapidly to $35 million in revenue by 2022, but regulation changes and economic shifts contracted their core business, forcing them to make drastic cutbacks and pivots. They launched an enterprise data service with APIs for larger companies in 2021, which is now known as BatchData. In July 2025, BatchService sold its “B2C” software business, comprising two successful products — BatchLeads and BatchDialer — to PropStream for an undisclosed cash amount. Jesse and his co-founders retained the B2B BatchData enterprise data business, now with 30 employees. Quote from Jesse Burrell, cofounder and CEO of BatchService “I had a couple years where I was pinching myself with the amount of money I was taking home every month. It was pretty wild how fast we rose in the first years. So when things changed for us, the fall really hurt, especially when we felt invincible and every idea worked brilliantly for three years. “When things changed, we stayed pretty patient. We stayed pretty calm, but there was a lot of nights, weeks and months. I went home feeling like a failure and I don't think I was failing. I just think it was the conditions that we got put in. But it was really hard on me mentally. It was very, very tough to get punched so hard in the mouth with like a multitude of things in a short period of time. “You're not as good as you think you are when it's going good and when it's going bad. It's not typically as bad as you think you are. A lot of it has to do with conditions and things that happen that are out of your control. You're fighting that because you're an entrepreneur and you'll figure it out if you are just persistent and don't give up.” Links Jesse Burrell on LinkedIn BatchService on LinkedIn Batchdata website Stewart Title website The Practical Founders Podcast Tune into the Practical Founders Podcast for weekly in-depth interviews with founders who have built valuable software companies without big funding. Subscribe to the Practical Founders Podcast using your favorite podcast app or view on our YouTube channel. Get the weekly Practical Founders newsletter and podcast updates at practicalfounders.com.
Microsoft just released the 40 jobs most likely to be eaten alive by AI.Is your job on the list? And we noticed some HUGE trends in this recently released report that no one's talking about. You don't want to miss this convo.Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:Microsoft's AI Job Displacement Report Analysis40 Jobs Most Susceptible to AI ReplacementMicrosoft's 200,000 Conversation Study MethodologyAI Applicability Score and O*NET Task MappingTop AI Disruption Archetypes: Four Job CategoriesKey Trends in AI Impact on EmploymentHigher Education and Knowledge Work VulnerabilitiesActionable Advice for AI Job SecurityTimestamps:00:00 "Everyday AI: Your Business Guide"03:52 Surviving AI Job Threats09:35 AI's Workforce Impact Study11:35 AI Threat to Translation Jobs15:06 Job Archetypes and AI Disruption20:09 "Top 40 Jobs AI May Replace"22:47 AI Disruption: Pivoting from Writing27:11 Training AI with Our Feedback29:19 AI's Impact on Entry-Level Learning32:36 "AI Over Costs: Efficiency Wins"37:45 "Prompt Engineering: Everyone's Role"39:55 "Meet Clients in Person"42:33 "Embrace AI: Future-Proof Your Career"Keywords:Microsoft AI report, AI job disruption, jobs replaced by AI, artificial intelligence impact on employment, AI applicability score, job displacement, AI and knowledge workers, Bing Copilot, workplace automation, 200,000 AI conversations, human APIs, information synthesizers, frontline communicators, knowledge curators, process coordinators, O*NET job database, large language models, AI task overlap, interpreters and translators, higher education job risk, automation in administrative support, sales representatives automation, technical writers AI, proofreaders automation, customer service automation, machine learning in business, agentic AI systems, domain expertise with AI, AI-driven workplace change, prompt engineering, AI literacy, digital job transformation, physical jobs AI resistance, embodied AI, agentic feedback loop, enterprise AI adoption, human in the loop, future of work, new workforce skills, cheap AI vs expensive humans, automating entry-level tasks, internal company insights, leadership crisis due to AI, synthetic information, generative AI, AI-powered writing, AI in journalism, automation trends 2024, adaptation to AI workforceSend Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
In this episode of the FutureCraft GTM Podcast, hosts Ken Roden and Erin Mills reunite with returning favorite Liza Adams to discuss the current state of AI adoption in marketing teams. Liza shares insights on why organizations are still struggling with the same human change management challenges from a year ago, despite significant advances in AI technology. The conversation covers practical frameworks for AI implementation, the power of digital twins, and Liza's approach to building hybrid human-AI marketing teams. The episode features Liza's live demonstration in our new Gladiator segment, where she transforms a dense marketing report into an interactive Jeopardy game using Claude Artifacts. Unpacking AI's Human Challenge Liza returns with a reality check: while AI tools have dramatically improved, the fundamental challenge remains human adoption and change management. She reveals how one marketing team successfully built a 45-person organization with 25 humans and 20 AI teammates, starting with simple custom GPTs and evolving into sophisticated cross-functional workflows. The Digital Twin Strategy: Liza demonstrates how creating AI versions of yourself and key executives can improve preparation, challenge thinking, and overcome unconscious bias while providing a safe learning environment for teams. The 80% Rule for Practical Implementation: Why "good enough" AI outputs that achieve 80-85% accuracy can transform productivity when combined with human oversight, as demonstrated by real-world examples like translation and localization workflows. Prompt Strategy Over Prompt Engineering: Liza explains why following prompt frameworks isn't enough—you need strategic thinking about what questions to ask and how to challenge AI outputs for better results. 00:00 Introduction and Balance Quote 00:22 Welcome Back to FutureCraft 01:28 Introducing Liza Adams 03:58 The Unchanged AI Adoption Challenge 06:30 Building Teams of 45 (25 Humans, 20 AI) 09:06 Digital Twin Framework and Implementation 17:34 The 80% Rule and Real ROI Examples 25:31 Prompt Strategy vs Prompt Engineering 26:02 Measuring AI Impact and ROI 28:21 Handling Hallucinations and Quality Control 32:50 Gladiator Segment: Live Jeopardy Game Creation 40:00 The Future of Marketing Jobs 47:49 Why Balance Beats EQ as the Critical Skill 51:09 Rapid Fire Questions and Wrap-Up Edited Transcript: Introduction: The Balance Between AI and Human Skills As AI democratizes IQ, EQ becomes increasingly important. Critical thinking and empathy are important, but I believe as marketers, balance is actually more important. Host Updates: Leveraging AI Workflows Ken Roden shares his approach to building better AI prompts by having full conversations with ChatGPT, exporting them to Word documents, then using that content to create more comprehensive prompts. This method resulted in more thorough market analysis with fewer edits required. Erin Mills discusses implementing agentic workflows using n8n to connect different APIs and build systems where AI tools communicate with each other. The key insight: break workflows down into steps rather than having one agent handle multiple complex tasks. Guest Introduction: Liza Adams on AI Adoption Challenges Liza Adams, the AI MarketBlazer, returns to discuss the current state of AI adoption in marketing teams. Despite significant technological advances, organizations still struggle with the same human change management challenges from a year ago. The Core Problem: Change Management Over Technology The main issue isn't about AI tools or innovation - teams can't simply be given ChatGPT, Claude, Gemini, and Perplexity and be expected to maximize their potential. Marketing teams are being handed tools while leaders expect employees to figure out implementation themselves. People need to see themselves in AI use cases that apply to their specific jobs. Joint learning sessions where teams share what works and what doesn't are essential. The focus has over-pivoted to "what's the right tool" when it should be on helping people understand, leverage, and make real impact with AI. The AI Adoption Plateau Many organizations face an AI adoption plateau where early adopters have already implemented AI, but a large group struggles with implementation. Companies attempting to "go fully agentic" or completely redo workflows in AI are taking on too much at once. Success Story: The 45-Person Hybrid Team Liza shares a case study of a marketing team with 45 members: 25 humans and 20 AI teammates that humans built, trained, and now manage. They started with simple custom GPTs, beginning with digital twins. Digital Twin Strategy for AI Implementation Digital twins are custom GPTs trained on frameworks, thinking patterns, publicly available content, and personality assessments like Myers-Briggs. These aren't designed to mimic humans but to learn about them and find blind spots, challenge thinking patterns, and overcome unconscious bias. For executive preparation, team members use digital twins of leadership to anticipate questions, identify gaps in presentations, and prepare responses before important meetings. The progression: Simple digital twins → Function-specific GPTs (pitch deck builders, content ideators, campaign analyzers) → Chained workflows across multiple departments (marketing, sales, customer success). Prompt Strategy vs. Prompt Engineering Following prompt frameworks (GRACE: Goals, Role, Action, Context, Examples) isn't enough if the underlying thinking is basic. AI magnifies existing thinking quality - good or bad. Example: Instead of asking "How do I reduce churn?" ask "Can you challenge my assumption that this is a churn problem? Could this data indicate an upsell opportunity instead?" This transforms churn problems into potential revenue opportunities through different strategic thinking. The 80% Rule for Practical AI Implementation AI outputs achieving 80-85% accuracy can transform productivity when combined with human oversight. Example: A team reduced translation and localization costs from tens of thousands of dollars monthly to $20/month using custom GPTs for eight languages, with human review for the final 15-20%. Measuring AI ROI: Three Strategic Approaches Align with Strategic Initiatives: Connect AI projects to existing company strategic initiatives that already have budgets, resources, and executive attention. Focus on Biggest Pain Points: Target areas where teams will invest resources to solve problems - excessive agency costs, overworked teams, or poor quality processes. Leverage Trailblazers: Identify curious team members already building AI solutions and scale their successful implementations. Handling AI Hallucinations and Quality Control AI models hallucinate 30-80% of the time when used as question-and-answer machines for factual queries. Hallucinations are less common with strategic questions, scenario analysis, and brainstorming. Prevention strategies: Limit conversation length and dataset size to avoid context window limitations Use multiple AI models to cross-check outputs Implement confidence checking: Ask AI to rate confidence levels (low/medium/high), explain assumptions, and identify what additional information would increase confidence Live Demo: Claude Artifacts for Interactive Content Liza demonstrates transforming the 2025 State of Marketing AI report into an interactive Jeopardy game using Claude Artifacts. The process involves uploading a PDF, providing specific prompts for game creation, and generating functional code without technical skills. This "vibe coding" approach allows users to describe desired outcomes and have AI build interactive tools, calculators, dashboards, and training materials. Future of Marketing Jobs and Skills Emerging roles: AI guides, workflow orchestrators, human-AI team managers Disappearing roles: Language editors, basic researchers, repetitive design tasks Transforming roles: Most existing positions adapting to include AI collaboration Critical skill for the future: Balance Innovation with ethics Automation with human touch Personalization with transparency Balance may be more important than emotional intelligence as AI democratizes cognitive capabilities. Key Takeaways The Gladiator segment demonstrates how dense research reports can become engaging, interactive content without engineering resources. Making AI implementation fun helps teams stay balanced and avoid overwhelm. Success comes from starting with tiny AI wins rather than comprehensive strategies, focusing on human change management over tool selection, and building systems that augment rather than replace human creativity. This version removes the conversational back-and-forth while preserving all the searchable content people would look for when researching AI implementation, digital twins, prompt strategy, change management, and practical AI use cases. Stay tuned for more insightful episodes from the FutureCraft podcast, where we continue to explore the evolving intersection of AI and GTM. Take advantage of the full episode for in-depth discussions and much more. ----more---- To listen to the full episode and stay updated on future episodes, visit the FutureCraft GTM website. Disclaimer: This podcast is for informational and entertainment purposes only and should not be considered advice. The views and opinions expressed in this podcast are our own and do not represent those of any company or business we currently work for/with or have worked for/with in the past.
Welcome to Episode 407 of the Microsoft Cloud IT Pro Podcast. In this episode, we dive deep into the Model Context Protocol (MCP) - a game-changing specification that's extending the capabilities of Large Language Models (LLMs) and creating exciting new possibilities for IT professionals working with Microsoft Azure and Microsoft 365. MCP represents a significant shift toward more extensible and domain-specific AI interactions. Instead of being limited to pre-trained knowledge, you can now connect your AI tools directly to live data sources, APIs, and services that matter to your specific role and organization. Whether you're managing Azure infrastructure, creating content, or developing solutions, MCP provides a framework to make your AI interactions more powerful and contextually relevant to your daily workflows. Your support makes this show possible! Please consider becoming a premium member for access to live shows and more. Check out our membership options. Show Notes Introducing the Model Context Protocol Understanding MCP server concepts Understanding MCP client concepts A list of applications that support MCP integrations About the sponsors Would you like to become the irreplaceable Microsoft 365 resource for your organization? Let us know!
Kaya Thomas (kayathomas.is) comes back after half a decade to tell us about how motherhood inspired her new app Milk Diary (milkdiary.com). She talks about using new APIs like Foundation Models, SpeechAnalyzer, and AlarmKit to handle the complex stuff other feeding apps miss: intelligent scheduling and reminders, combo feeding that's actually easy to track, hassle-free tracking for twins and smart milk management.GuestKaya ThomasKaya Thomas (@kayathomas@mastodon.social) - MastodonKaya (@kayathomas.is) — Blueskykmt901 (Kaya Thomas)Kaya Thomas | LinkedInKaya Thomas (@kayathomas.is) • Threads, Say moreMilk Diary AppRelated LinksRob Napier - TIL:AI. Thoughts on AIAI Code Reviews | CodeRabbit | Try for FreeRelated EpisodesNotifications with Kaya ThomasLive from CommunityKit WWDC 2025 with Matt MassicottePlinky with Joe FabisevichChatGPTovski with Kris SlazinskiThe Making of Callsheet with Casey LissApples, Glasses, and HAL, Oh My!Social MediaLinkedIn - @leogdionGitHub - @brightdigitGitHub - @leogdionTikTok - @brightdigitMastodon - @leogdion@c.imYoutube - @brightdigitBlueSky - @leogdion.bsky.socialTwitter Leo - @leogdionTwitter BrightDigit - @brightdigitCreditsMusic from https://filmmusic.io"Blippy Trance" by Kevin MacLeod (https://incompetech.com)License: CC BY (http://creativecommons.org/licenses/by/4.0/) (00:00) - What is Milk Diary (04:20) - Foundation Models (10:56) - The Feeding App Market (13:59) - Liquid Glass (17:01) - AlarmKit (19:34) - Local and Server Side Storage (22:28) - SpeechAnalyzer (25:13) - Developing with AI Thanks to our monthly supporters Edward Sanchez Steven Lipton ★ Support this podcast on Patreon ★
Software Engineering Radio - The Podcast for Professional Software Developers
Wesley Beary of Anchor speaks with host Sam Taggart about designing APIs with a particular emphasis on user experience. Wesley discusses what it means to be an “API connoisseur”— paying attention to what makes the APIs we consume enjoyable or frustrating and then taking those lessons and using them when we design our own APIs. Wesley and Sam also explore the many challenges developers face when designing APIs, such as coming up with good abstractions, testing, getting user feedback, documentation, security, and versioning. They address both CLI and web APIs. This episode is sponsored by Fly.io.
As digital infrastructure becomes increasingly interwoven with third-party code, APIs, and AI-generated components, organizations are realizing they can't ignore the origins—or the risks—of their software. Theresa Lanowitz, Chief Evangelist at LevelBlue, joins Sean Martin and Marco Ciappelli to unpack why software supply chain visibility has become a top concern not just for CISOs, but for CEOs as well.Drawing from LevelBlue's Data and AI Accelerator Report, part of their annual Futures Report series, Theresa highlights a striking correlation: 80% of organizations with low software supply chain visibility experienced a breach in the past year, while only 6% with high visibility did. That data underscores the critical role visibility plays in reducing business risk and maintaining operational resilience.More than a technical concern, software supply chain risk is now a boardroom topic. According to the report, CEOs have the highest awareness of this risk—even more than CIOs and CISOs—because of the direct impact on brand reputation, stock value, and partner trust. As Theresa puts it, software has become the “last mile” of digital business, and that makes it everyone's problem.The conversation explores why now is the time to act. Government regulations are increasing, adversarial attacks are intensifying, and organizations are finally beginning to connect software vulnerabilities with business outcomes. Theresa outlines four critical actions: leverage CEO awareness, understand and prioritize vulnerabilities, invest in modern security technologies, and demand transparency from third-party providers.Importantly, cybersecurity culture is emerging as a key differentiator. Companies that embed security KPIs across all business units—and align security with business priorities—are not only more secure, they're also more agile. As software creation moves faster and more modular, the organizations that prioritize visibility and responsibility throughout the supply chain will be best positioned to adapt, grow, and protect their operations.Learn more about LevelBlue: https://itspm.ag/levelblue266f6cNote: This story contains promotional content. Learn more.Guest: Theresa Lanowitz, Chief Evangelist of AT&T Cybersecurity / LevelBlue [@LevelBlueCyber]On LinkedIn | https://www.linkedin.com/in/theresalanowitz/ResourcesTo learn more, download the complete findings of the LevelBlue Threat Trends Report here: https://itspm.ag/levelbyqdpTo download the 2025 LevelBlue Data Accelerator: Software Supply Chain and Cybersecurity report, visit: https://itspm.ag/lbdaf6iLearn more and catch more stories from LevelBlue: https://www.itspmagazine.com/directory/levelblueLearn more about ITSPmagazine Brand Story Podcasts: https://www.itspmagazine.com/purchase-programsNewsletter Archive: https://www.linkedin.com/newsletters/tune-into-the-latest-podcasts-7109347022809309184/Business Newsletter Signup: https://www.itspmagazine.com/itspmagazine-business-updates-sign-upAre you interested in telling your story?https://www.itspmagazine.com/telling-your-story
ITSPmagazine Weekly Update | From Black Hat to Black Sabbath / Ozzy: AI Agents and Guitars (again!) + Entry Level Cybersecurity Jobs, Robots Evolution, and the Weekly Recap You Didn't Expect - On Marco & Sean's Random & Unscripted Podcast __________________Marco Ciappelli and Sean Martin are back with another random and unscripted weekly recap—from pre-Black Hat buzz and AI agents to vintage wood guitars, talent gaps, and Glen Miller debates. This week's reflection hits tech, music, and philosophy in all the right ways. Tune in, ramble with us, and subscribe. __________________Full Blog Article This week's recap was a ride.Sean and I kicked things off with the big news: we're officially consistent. Weekly recap number… I lost count. But we're doing it. We covered what ITSPmagazine's been working on, what we've been publishing, and where our minds are wandering lately (spoiler: everywhere).Black Hat USA 2025 is just around the corner, and we're deep into prep mode. I even bought a paper map. Why? I don't know. But we've got some great pre-event conversations already out—like our annual chat with Black Hat GM Steve Wylie, plus briefings with Dropzone AI (get ready for “agentic automation” to be the next big buzzword) and Akamai (yes, bots and APIs again, but with a solid strategy twist).We also talked about a fantastic episode Sean did on resonance and reinvention—featuring Cindy, a luthier in NYC who builds custom guitars using century-old beams from historic buildings. The pickups even use the old nails. Music and wood with a past life. It's beautiful stuff.Speaking of stories, I officially closed down the Storytelling podcast. But don't worry—I'm still telling stories. I've just shifted focus to “Redefining Society and Technology,” my newsletter and podcast series where I explore how humans and tech evolve together. This week's edition tackled the merging of humans and machines as a new species. Isaac Asimov meets Andy Clark.We also got a bit philosophical about AI and jobs. If machines take over the “easy” roles, where do humans begin? Are we cutting off our own training paths?Sean's episode with John Solomon dug into the cybersecurity hiring crisis—challenging the idea that we have a “talent gap.” The real issue? We're not hiring or nurturing people properly.Oh, and I finally released my long-overdue interview with Michael Sheldrick from Global Citizen. Music. Social impact. Doing good. It's all there. I'm honored to support even a small piece of what he's building.And yes… Ozzy. RIP. Music never dies.So if you're into random reflections with meaning, tech with humanity, and stories that don't always follow the rules—subscribe, share, and join the ride.See you in Vegas. Or the future. Or somewhere in between.________________ KeywordsBlack Hat USA 2025, ITSPmagazine recap, Marco Ciappelli, Sean Martin, cybersecurity podcast, AI in cybersecurity, agentic automation, Dropzone AI, Akamai APIs, HITRUST security, Global Citizen, Michael Sheldrick, storytelling podcast, Redefining Society, Andy Clark, Isaac Asimov, human-machine evolution, cybersecurity talent gap, custom guitar NYC, Ozzy tributeHosts links:
In this Beekeeping Today Podcast Short, Dr. Dewey Caron returns with another insightful “audio postcard,” this time exploring the marvel of honey—its meaning for honey bees, its significance for beekeepers, and its surprising impact on human health. Dewey begins by examining how we define honey, touching on both scientific and regulatory perspectives, including recent efforts like the proposed Honey Integrity Act. He then dives into how honey is processed by bees—from nectar foraging to enzyme transformation and evaporation—highlighting the bee-to-bee communication system of trophallaxis that powers the hive's food-sharing network. Beyond the hive, Dewey explores honey's powerful medicinal properties. Drawing from a comprehensive mega-review of over 100 studies, he outlines honey's antimicrobial, anti-inflammatory, antioxidant, and even anti-cancer effects, with a focus on manuka honey's growing use in clinical wound care. Finally, he turns the spotlight on beekeeper-to-bee communication—urging beekeepers to proactively manage supers and recognize nectar flows to support colony health and maximize harvest. Whether you're fascinated by bee biology or interested in honey as a functional food, this episode is packed with sweet insight. Links & Resources: Saeed Samarghandian, Tahereh Farkhondeh and Fariborz Samini, 2017. Honey and Health: A Review of Recent Clinical Research. Pharmacognosy Res. Apr-Jun;9(2):121– 27. doi: 10.4103/0974-8490.204647 Crailsheim, Craig, 1998. Trophallactic interactions in the adult honeybee (Apis mellifera L). https://www.apidologie.org/articles/apido/pdf/1998/01/Apidologie_0044-8435_1998_29_1-2_ART0006.pdf Collison, Clarence 2017. Trophallaxis. Bee Culture https://beeculture.com/a-closer-look-12/ Tezze, A.A. and W.M. Farina 1999. Trophallaxis in the honeybee, Apis mellifera: the interactions between viscosity and sucrose concentration of the transferred solution. Anim. Behav. 57: 1319-1326. Brought to you by Betterbee – your partners in better beekeeping. ______________ Betterbee is the presenting sponsor of Beekeeping Today Podcast. Betterbee's mission is to support every beekeeper with excellent customer service, continued education and quality equipment. From their colorful and informative catalog to their support of beekeeper educational activities, including this podcast series, Betterbee truly is Beekeepers Serving Beekeepers. See for yourself at www.betterbee.com Copyright © 2025 by Growing Planet Media, LLC
Industrial Talk is talking to Tacoma Zach, Co-Founder and CEO at MentorAPM about "Functionally unite end-to-end asset lifecycle management". Scott Mackenzie interviews Tacoma Zach Mentor about Mentor APM, a comprehensive asset management solution. Tacoma shares his background in chemical engineering and asset management, highlighting his experience with Veolia and ExxonMobil. Mentor APM offers a 29-day implementation process, leveraging pre-loaded asset libraries and failure modes. The platform integrates with existing ERP systems and uses AI for rapid, accurate asset assessments. Tacoma emphasizes the importance of proactive asset management, prioritization, and the human component in change management. Mentor APM aims to enhance reliability, reduce costs, and improve operational stability. Action Items [ ] Reach out to Tacoma Zach at mentor APM to learn more about the solution. [ ] Connect with Tacoma Zach on LinkedIn. Outline Introduction and Welcome to Industrial Talk Scott MacKenzie welcomes listeners to the Industrial Talk podcast, emphasizing the importance of celebrating industrial heroes. Scott introduces Tacoma Scott encourages listeners to dive into the industry, emphasizing the need for education, collaboration, and innovation. Scott announces the launch of the Industrial News Network (INN) to keep up with the fast-moving industry and connect people with the right information. Tacoma Zack Mentor's Background and Journey Tacoma Zach Mentor shares his background, starting as a graduate chemical engineer from the University of Toronto. Tacoma discusses his career in contract operations, eventually leading to Veolia, and his transition into asset management. He explains the founding of his engineering company in 2005 and his involvement with Herbalytics, a spin-out from Veolia focused on risk and criticality analysis. Tacoma describes the development of Mentor APM in 2017, aiming to unify various asset management functionalities into one comprehensive solution. Mentor APM's Unique Value Proposition Scott and Tacoma discuss the crowded market of asset management platforms and what sets Mentor APM apart. Tacoma explains the origins of the name "Mentor," derived from the best practices and experiences from Veolia and other companies. He highlights the importance of automation and pre-loading data to reduce rework and manual processes. Tacoma emphasizes the need for a unified solution that integrates various aspects of asset management, from failure modes to prioritization. Implementation and Adoption of Mentor APM Scott inquires about the implementation process and timeline for Mentor APM. Tacoma explains that Mentor APM can be implemented in as little as 29 days, thanks to pre-loaded asset libraries and failure modes. He discusses the importance of prioritization and the ability to quickly assess and manage critical assets. Tacoma highlights the flexibility of Mentor APM to adapt to different customer needs and the importance of change management in the adoption process. Integration with Existing Systems and AI Advancements Scott asks about the integration of Mentor APM with existing ERP systems. Tacoma explains that Mentor APM has published APIs to seamlessly integrate with various systems, including ERP solutions. He introduces Mentor Lens, a tool that allows for...