My day-to-day coding looks very different from what it was a few years ago. Today, you'll learn about my voice-to-code workflow and how I leverage smart tools to have so much free time that I feel guilty for "not working enough." Seriously.

The blog post: https://thebootstrappedfounder.com/from-code-writer-to-code-editor-my-ai-assisted-development-workflow/
The podcast episode: https://tbf.fm/episodes/395-from-code-writer-to-code-editor-my-ai-assisted-development-workflow

Check out Podscan, the podcast database that transcribes every podcast episode out there minutes after it gets released: https://podscan.fm
Send me a voicemail on Podline: https://podline.fm/arvid

You'll find my weekly article on my blog: https://thebootstrappedfounder.com
Podcast: https://thebootstrappedfounder.com/podcast
Newsletter: https://thebootstrappedfounder.com/newsletter

My book Zero to Sold: https://zerotosold.com/
My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/
My course Find Your Following: https://findyourfollowing.com

Here are a few tools I use. Using my affiliate links will support my work at no additional cost to you.
- Notion (which I use to organize, write, coordinate, and archive my podcast + newsletter): https://affiliate.notion.so/465mv1536drx
- Riverside.fm (that's what I recorded this episode with): https://riverside.fm/?via=arvid
- TweetHunter (for speedy scheduling and writing Tweets): http://tweethunter.io/?via=arvid
- HypeFury (for massive Twitter analytics and scheduling): https://hypefury.com/?via=arvid60
- AudioPen (for taking voice notes and getting amazing summaries): https://audiopen.ai/?aff=PXErZ
- Descript (for word-based video editing, subtitles, and clips): https://www.descript.com/?lmref=3cf39Q
- ConvertKit (for email lists, newsletters, even finding sponsors): https://convertkit.com?lmref=bN9CZw
Discrepancies between the DP and the CSV, the referendum ten years ago, and Gerson Rodrigues's match yesterday: those are the topics in this weekend's press.
What's holding back enterprise adoption of blockchain?

In this episode, Sergio from Sonar X joins Sam to explain why the missing piece is data infrastructure. Drawing from experience at Bloomberg and AWS (where he led Amazon Managed Blockchain), Sergio breaks down why most blockchain systems are broken at the data layer, and how Sonar X is building a scalable, multi-chain backend to fix it. From audit-grade historical data to real-time indexing across 30+ chains, Sonar X is laying the foundation for Web3's next growth wave.

Key Timestamps
[00:00:00] Introduction: Sergio's background and what Sonar X is solving.
[00:01:30] Growing up as a fixer: From Italian banking to Bloomberg and AWS.
[00:04:00] Falling in love with blockchain: MIT program and lightbulb moment.
[00:05:30] The problem: Enterprise-grade infrastructure for blockchain data doesn't exist.
[00:07:00] What Sonar X does: Reliable, multi-chain data infra for coverage, quality, and access.
[00:09:00] Use cases: From DeFi indexes to forensics, custody, fund admin, and compliance.
[00:12:00] The architecture: How Sonar X solves the CAP theorem limitations of blockchain.
[00:15:00] Data standardization: Making 30+ chains interoperable via common schemas.
[00:17:30] Indexing like Bloomberg: Creating a "market data" layer for all chains.
[00:19:00] Data delivery: Snowflake, Databricks, CSV exports, and multi-cloud support.
[00:20:00] Business model: Simple annual chain-based subscriptions, no usage limits.
[00:22:00] Custom support: Engineering advisory to reduce compute costs for clients.
[00:23:00] Challenges ahead: Scaling to meet 1M+ TPS chains and occasional-use customers.
[00:25:00] Traditional finance: How blockchain will upgrade, not replace, infrastructure like DTCC or SWIFT.
[00:27:00] Blockchain = the ultimate value exchange machine.
[00:28:00] Data scale: Every new asset, chain, or protocol creates exponential complexity.
[00:30:00] Final ask: Keep investing in product, preparing for GenAI, and expanding chain support.
[00:33:00] The future: RWA tokenization, AI agents, and why reliable data will be the cornerstone.

Connect
https://www.sonarx.com/
https://www.linkedin.com/company/sonarx
https://x.com/sonarx_hq
https://www.linkedin.com/in/sergiocapanna

Disclaimer
Nothing mentioned in this podcast is investment advice; please do your own research. Finally, it would mean a lot if you could leave a review of this podcast on Apple Podcasts or Spotify and share this podcast with a friend.

Be a guest on the podcast or contact us: https://www.web3pod.xyz/
Profit Cleaners: Grow Your Cleaning Company and Redefine Profit
Are you still manually reviewing competitor feedback? You run the risk of falling behind, because your biggest competition is leveraging AI to gain market insights in just minutes.

In this episode of the Profit Cleaners Podcast, Brandon Schoen and Brandon Condrey share a practical and forward-thinking approach to competitive research using artificial intelligence. Recorded live at a recent Profit Cleaners event, this episode introduces two highly effective tools that are transforming how cleaning business owners analyze the market and make data-driven decisions.

You'll discover how to extract and evaluate Google reviews from competing businesses, identify customer sentiment trends, and generate clear, actionable insights without sifting through spreadsheets for hours. The Brandons also discuss how these tools are being used in their own business to track KPIs, visualize growth metrics, and improve team efficiency, all while keeping operations streamlined and focused.

This episode is your step-by-step guide to modernizing your market research process and staying ahead of the curve in a rapidly evolving industry.
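As a concrete illustration of the review-analysis workflow described above: once competitor reviews have been exported to a CSV (the episode covers tools that handle that step), even a few lines of stdlib Python can surface the star-rating split and recurring praise. The column names (`rating`, `text`) and the sample rows below are invented for this sketch, not any tool's actual export format:

```python
# Hypothetical sketch of analyzing a competitor-reviews CSV export.
# Column names and sample data are assumptions for illustration.
import csv
import io
from collections import Counter

sample = """rating,text
5,Amazing team and spotless results
5,Spotless home every single time
4,Good service but scheduling was slow
1,They missed the appointment twice
"""

reviews = list(csv.DictReader(io.StringIO(sample)))

# Star distribution: what fraction of reviews land at each rating?
stars = Counter(int(r["rating"]) for r in reviews)
total = sum(stars.values())
print({s: f"{count / total:.0%}" for s, count in sorted(stars.items())})

# Recurring words in 5-star reviews hint at what customers love.
words = Counter(
    w.lower()
    for r in reviews
    if r["rating"] == "5"
    for w in r["text"].split()
)
print(words.most_common(3))
```

Swap the inline sample for `open("reviews.csv")` to run this against a real export; the point is that the "what do customers love" question from the episode is a simple frequency count once the data is in CSV form.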
Listen now and start making smarter, faster business decisions with the power of AI!

Highlights:
(00:51) Why you should read your competitors' reviews and how to automate it with AI
(04:26) How PhantomLocal converts Google reviews into usable CSV files
(07:51) Leveraging Julius AI to uncover patterns and business insights
(09:57) Recognizing what customers love through 5-star review patterns
(14:57) Creating review trends and customer sentiment graphs in minutes
(18:22) Automating reporting and KPI analysis for better team visibility
(22:18) Staying up to date with AI through curated tools and channels like Skill Leap AI
(24:32) Empowering your team with AI tools to increase efficiency and customer satisfaction

Links/Resources Mentioned:
Profit Cleaners Website
Watch the FREE Masterclass: https://profitcleaners.com/masterclass
Join the FREE Facebook community: https://www.facebook.com/groups/profitcleaners/
In this episode of Tales From Tech Support, one user thinks a blank screen means the internet is down while another blames updates for breaking the VPN. A simple quotation mark nearly derails an entire finance department, and someone loads 100GB of files onto their work laptop. From misunderstood mice to vanished software windows, it's a parade of digital chaos. Stick around for the one user who left tech support truly speechless.

Submit your own stories to KarmaStoriesPod@gmail.com.

Karma Stories is available on all major podcasting platforms and on YouTube under the @KarmaStoriesPodcast handle. We cover stories from popular Reddit subreddits like Entitled Parents, Tales From Tech Support, Pro Revenge, and Malicious Compliance. You can find new uploads here every single day of the week!

Rob's 3D Printing Site: https://Dangly3D.com
Become a supporter of this podcast: https://www.spreaker.com/podcast/karma-stories--5098578/support.
In this week's episode, we take a look at the major self-publishing platforms that I use, and examine the pros and cons of each. This coupon code will get you 50% off the audiobook of Dragonskull: Doom of the Sorceress, Book #8 in the Dragonskull series (as excellently narrated by Brad Wills), at my Payhip store: DOOM50. The coupon code is valid through June 24, 2025. So if you need a new audiobook this summer, we've got you covered!

TRANSCRIPT

00:00:00 Introduction and Writing Updates

Hello, everyone. Welcome to Episode 253 of The Pulp Writer Show. My name is Jonathan Moeller. Today is May 30th, 2025, and today we are looking at the current major self-publishing platforms and what they offer indie authors. Before we get to our main topic, we'll have Coupon of the Week and an update on my current writing projects. So let's start with Coupon of the Week. This week's coupon code will get you 50% off the audiobook of Dragonskull: Doom of the Sorceress (book number eight in the Dragonskull series, as excellently narrated by Brad Wills) at my Payhip store. That code is DOOM50. And as always, we will have the coupon code and the links to the store in the show notes. This coupon code is valid through June 24th, 2025. So if you are setting out on travels this summer and you need an audiobook to listen to while you're in the car or plane, we have got you covered. So now for an update on my current writing and audiobook projects. Ghost in the Corruption (as I mentioned last week) is now out and available at all the ebook stores: Amazon, Barnes & Noble, Kobo, Google Play, Apple Books, Smashwords, and Payhip. It is selling well. So thank you all for that. Now that Ghost in the Corruption is finished, what am I working on next? Well, back in 2023, I finished the Dragonskull and The Silent Order series back to back, so I declared Summer 2023 to be my Summer of Finishing Things.
Well, it looks like Summer 2025 is going to be the Super Summer of Finishing Things because I intend to finish three series back to back. First up is Shield of Power, the sixth and final book of The Shield War series. As of this publishing, I am 26,000 words into it, which puts me on Chapter 6 of 29. So I think it's going to end up being around 100,000-110,000 words long, and I am hoping it will be out in June, though it might slip to July depending on how things go. Once that is done, the next one up will be Stealth and Spells Online: Final Quest, which will be the third and very definitely final book of the Stealth and Spells Online trilogy. Believe it or not, I have been working on Final Quest on the side for so long that I passed the 100,000 word mark in that book this week. In fact, it's been a side project for so long that I don't remember how long I've been working on it, and I had to look up the metadata to check that I indeed started chipping away on it on October 18th, 2024. So I am very pleased that I'm nearly done with the rough draft, and because of that, if all goes well, it'll come out very quickly after Shield of Power, since I think the rough draft will end up at about 125,000 to 130,000 words or in that neighborhood. Once Stealth and Spells Online: Final Quest is finished, I will then write Ghost in the Siege, which will be the sixth and final book of the Ghost Armor series. I am 1,500 words into that and hoping for that to come out in August or September, if all goes well. Once The Shield War, Stealth and Spells Online, and Ghost Armor are finished, I will finally be free to return to the Rivah and Nadia series. I realized that through all of 2024 and the first half of 2025, I had five unfinished series at the same time, and that was just too much for me to keep track of as a writer, and I think it may have been too much for the readers because it was too much of a wait between the different series as I worked my way through them.
So five series at the same time is too much; hence the Super Summer of Finishing Things. Going forward, I've decided that three unfinished series at the same time will be my maximum, which after the Super Summer of Finishing Things will be Cloak Mage, Half-Elven Thief, and a new epic fantasy series that I will set in the realm of Owyllain. In audiobook news, Brad Wills started working on Shield of Battle this week and Hollis McCarthy started working on Ghost in the Corruption, so hopefully by about July or thereabouts, we will have those audiobooks available for you to listen to. So that is where I'm at with my current writing projects.

00:03:49 Main Topic of the Week: Self-Publishing Platforms for Ebooks

[Note: Information in this Episode is Very Likely to Change]

So now let's move on to our main topic for the week, which is the main self-publishing platforms for ebooks. Today we will do a brief overview of the self-publishing platforms I currently use: Amazon/KDP, Barnes & Noble, Kobo, Google Play, and Draft2Digital/Smashwords. The reason I wanted to do this is because there are many scammy platforms for self-publishing out there, but fortunately there are also many legitimate ones. Today we'll compare several of the most popular ones for ebooks. Just to make things easier for comparison, we'll be using the term platform to discuss both retailers and aggregators, and we're not going to talk about options for self-publishing print or audio formats today. We're going to focus solely on ebooks. First of all, what should you look for in a publishing platform? The first thing is to make sure you retain complete ownership of your content in all formats. Some of the scammer platforms try to claim all rights to anything you post or sell through them, so that is definitely a red flag to watch out for.
Make sure that you understand any exclusivity requirements of any programs that you sign up for, such as KDP Select: whether other formats like audio are also included in their requirements, how long exclusivity lasts, et cetera. If the platform requires exclusivity, that is definitely something to pay attention to. Make sure you do your research carefully to understand how pricing, royalties, and payments work on each individual platform. Some of them will pay quarterly, some of them pay monthly, and some of them pay you last month's royalties at the end of the month. Some of them, like Amazon, run about two months behind. Finally, and this is a big one, you should not have to pay any money in order to upload your work. If they are asking for money upfront, it is probably a scam. Now, there are some aggregators that don't take a percentage and instead charge you a yearly fee. I'm not talking about them in this podcast episode because I don't use them, but they are out there. One example would be BookFunnel, which does charge a yearly fee but provides a valuable service as a backend for running your own store on a platform like Payhip or Shopify, along with a couple of other useful services in that vein. However, they're not a storefront and they don't take a percentage of any royalties; they just charge a yearly fee. So they're not the topic of this episode. All the platforms I talk about today do not have any fees in order to upload. Reputable sites like Amazon or Kobo will instead take a percentage of each book's sale. It's also good to have a few realistic expectations before you start using self-publishing platforms, and one of them is that the platform is not a marketer. For example, many people complain that KDP doesn't showcase their books and they get lost in the millions of books available. However, none of these services are promising that you'll make the front page of their site just by publishing there.
It's a common delusion among new indie authors that when you publish your first book, that's all you have to do and people will flock to it. Unfortunately, it doesn't work that way. In fact, since Amazon makes a small fortune off book ads, it's not in their interest to give away screen space for free, and this isn't to knock Amazon; that's just the way the retail industry works. For example, if you go into a Target or a Walmart or another big box retailer, note the products that are prominently displayed on the aisle displays or the endcaps of the aisles. They didn't just get there randomly. The manufacturers of those products paid big money to Amazon and Target and Walmart and the other big box retailers to have their products featured there. In many cases, online commerce is no different. Getting your book uploaded onto a platform is just the first step. Promoting and marketing the book is up to you, and strategies for those will vary based on which platforms you choose to use. For example, if you choose to make your work exclusive to just one platform, it's not a good idea to run Facebook ads in countries where that platform either doesn't exist or where it's not terribly popular. Today we're going to be focusing on comparing the platforms, not how best to market on them. So what are the options?

#1: First up is the most common platform people use and the 800-pound gorilla in the self-publishing space: Amazon's Kindle Direct Publishing. What are the pros and cons of KDP?

Pro: They are the biggest force in ebook publishing in many countries, including the United States. Some authors find that as much as 80 to 95% of their ebook sales come from Amazon, even if they are not exclusive with Amazon. For myself, usually about 50 to 60% of my sales in any given month are from Amazon, and the rest come from the other retailers.
Heavy readers are generally very familiar with the Kindle Store interface and library setup, and many readers are kind of locked into Amazon because they own Kindle devices, subscribe to Kindle Unlimited, and have large Kindle libraries. So those are all the pros of publishing with KDP.

Cons: If you're expecting a large portion of your sales to come from the print version of your book, or if print sales are very important to you, be aware that many bookstores and libraries either can't or won't buy print books from Amazon, so you should find an additional platform for the print version, such as IngramSpark or maybe Barnes & Noble's print division. One big concern about going exclusive with Amazon is that you're losing readers who don't have a Kindle store in their countries, people who are boycotting Amazon for a variety of reasons, people who are locked into another platform such as Apple or Kobo, or people who want to self-archive their ebooks, since Amazon doesn't allow that anymore. If you're already wide, you'll have to look carefully at what percentage of your sales are non-Amazon and whether that percentage is an amount you'd be comfortable risking in order to be exclusive. Occasionally authors do complain about the customer service available to KDP publishers, especially if the issue is urgent. For myself, I've not personally had any huge problems with KDP customer service. That said, I think you should expect a lead time of about one to two business days on anything you ask, because I usually go through the email form.

Does KDP offer a subscription service? Yes. Kindle Unlimited (KU) readers pay a set amount and can read an unlimited number of books each month, although they're limited as to how many they can have in their library at any one time. Promotions happen regularly, usually based around big sales like Prime Day, and they can make a subscription as cheap as $0.99 for a three-month period.
Some also receive free subscriptions by buying certain Amazon products, such as a new Kindle or Kindle Fire. The downside of being in Kindle Unlimited is the exclusivity. You can't be in KU without being exclusive with Amazon, or at least the specific book in question has to be exclusive. Not all of your books have to be exclusive, and many authors such as myself will usually put one series in KU and then make sure everything else is wide. You must agree to be exclusive with them for ninety days, and that time period is renewable.

What does KDP pay in terms of royalty? For $2.99 to $9.99, they give you 70% of the sale price. Under $2.99 and above $9.99, it's 35%. So that is sort of an encouragement from Amazon to price your ebooks in the $2.99 to $9.99 range. Currently I price new novels at $4.99 and short stories at $0.99.

What do I do? I have all of my titles available through KDP. I have a smaller portion of my collection exclusive through KDP Select/KU, and I have only recently increased the number of Select titles due to the economic downturn. I suspect that KU users are likely to hold onto their subscriptions while cutting other expenses because, honestly, KU is a pretty good deal for readers: the monthly subscription cost is about the same as one tradpub frontlist ebook, but with a KU subscription, they could read thousands of books for the same price. The value of KU is really very strong for frequent romance, LitRPG, science fiction, and fantasy readers. There's a strong population in the KU subscriber base often referred to as binge readers. They care more about variety, discovering new books, and the ability to read a lot than about reading specific authors or stories.
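The KDP tiers just described reduce to a one-line rule. Here is a minimal sketch (the 70%/35% rates and the $2.99-$9.99 band are from the episode; Amazon's per-download delivery fee, which also comes out of 70% sales, is deliberately ignored):

```python
def kdp_royalty(price: float) -> float:
    """Estimated per-sale KDP ebook royalty: 70% inside the
    $2.99-$9.99 window, 35% outside it. Ignores the delivery
    fee Amazon deducts from 70% sales."""
    rate = 0.70 if 2.99 <= price <= 9.99 else 0.35
    return round(price * rate, 2)

print(kdp_royalty(4.99))   # a new novel at my usual price -> 3.49
print(kdp_royalty(0.99))   # a short story -> 0.35
print(kdp_royalty(12.99))  # above $9.99: drops back to the 35% tier
```

Note how a $12.99 book can earn less per copy than a $9.99 one, which is exactly the pricing nudge described above.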
So overall, I think if you are self-publishing, even if you don't like Amazon very much or don't plan to go exclusive, it's still in your best interest to publish your ebook with them, even if you are wide and intend to do all the other retailers, just because Amazon really is the biggest ebook platform out there at the moment.

#2: The next self-publishing platform we're going to look at is Barnes & Noble Press, which, as the name implies, belongs to Barnes & Noble.

Pros: Some people are never, ever going to let go of their Nooks, or they already have a large personal ebook library through the Nook, so they feel locked into that platform. These readers are the majority of people buying ebooks through Barnes & Noble, but fortunately that group tends to read a lot. There's also a lot of trust in Barnes & Noble as a brand, and that inspires people to continue buying from them. In fact, for a while in the indie author space at the end of the 2010s and the start of the 2020s, it was a regular prediction that Barnes & Noble was going to go out of business soon. But then the company was bought by a private equity firm, and while private equity firms often have a deserved bad reputation for stripping a company of assets and then selling it off at a bargain basement price (such as the fate of Red Lobster), that does not seem to be what happened with Barnes & Noble, and the company really has been strengthening in recent years. So they may be here to stay for a while.

The downside of publishing with Barnes & Noble Press is that Barnes & Noble is a relatively minor player in the ebook market, though usually in the top four of most indie authors' ebook sales if they're wide. They have shifted their focus to selling print books instead of Nook devices, especially in the retail space. Do they offer a subscription service? They do not. However, nothing about Barnes & Noble requires exclusivity, which is nice, and the royalty structure is pretty good.
It's 70% on all titles over $0.99. So if you want, you could price your ebook at $0.99 or $19.99 and still make 70%, which you couldn't do at those prices on Amazon.

#3: The next self-publishing platform we'll look at is Kobo Writing Life, which is the platform for publishing ebooks on Kobo, which is owned by Rakuten.

Pros: Kobo is strong in the international market and will help you to reach readers in many countries. Based on my sales data, in Canada and Australia, Kobo is significantly bigger than Amazon for ebook sales. Kobo has also had a surge of recent media attention in the US as people seek out alternatives to Amazon and Kindle devices.

The con of Kobo, and this is a fairly small one, is that their US market share is still fairly small compared to Amazon or Barnes & Noble or some of the others. But as I mentioned, they're a lot stronger in Canada and Australia, and they do reach a lot of different countries, more than Amazon does.

Does Kobo have a subscription service? Yes, Kobo Plus. Kobo Plus is significantly less expensive than Kindle Unlimited, and there's an additional tier that allows you to add audiobook content to the plan. The library isn't quite as extensive as KU's, though. I should note that in the years since Kobo Plus was introduced, about half of my revenue from Kobo (sometimes 60%) has come from Kobo Plus and not from direct ebook sales. So it's getting to the point where the majority of my Kobo revenue, I suspect, is coming from Kobo Plus and not direct Kobo sales. Do they require exclusivity? No, which is another strong selling point for Kobo Plus. For their royalty structure: on ebooks $2.99 and over, you get 70%, and on books below $2.99, you get 45%, which makes Kobo more generous than Amazon both below $2.99 and above $9.99. So what do I do? I currently use it as one of the platforms for my ebooks.
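Put side by side with Amazon's tiers from earlier in the episode, the difference at the edge prices is easy to see. A quick sketch using the percentages quoted here (treating $2.99 itself as the start of Kobo's 70% tier, and ignoring delivery fees and currency effects):

```python
def kobo_royalty(price: float) -> float:
    # Kobo Writing Life rates as described in the episode.
    rate = 0.70 if price >= 2.99 else 0.45
    return round(price * rate, 2)

def kdp_royalty(price: float) -> float:
    # Amazon's $2.99-$9.99 band, for comparison.
    rate = 0.70 if 2.99 <= price <= 9.99 else 0.35
    return round(price * rate, 2)

# Below $2.99 and above $9.99, Kobo pays noticeably more per copy.
for price in (0.99, 4.99, 14.99):
    print(f"${price}: Kobo ${kobo_royalty(price)}, KDP ${kdp_royalty(price)}")
```

Inside the $2.99-$9.99 band the two pay the same 70%; the gap only appears at the extremes.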
It's been a pretty strong seller for me consistently over the years, and every Kobo book that I have is also available in Kobo Plus, which probably explains the revenue split I was talking about earlier.

#4: The next platform we'll look at is Draft2Digital/Smashwords, which we'll treat as one because Draft2Digital and Smashwords are in the process of merging. Draft2Digital is technically what's called an aggregator: you upload your book, they publish it on a variety of different platforms for you, and in exchange they take a small cut of the sales. Draft2Digital is, in my opinion, probably the most effective way to get your ebooks onto Apple and Smashwords. Apple does have its own direct uploading service, but I've never used it because there are a bit too many hoops to jump through. Draft2Digital does, as I mentioned, have a way to publish on multiple storefronts at once while managing uploads and sales reporting through just one interface. They're not a storefront in and of themselves, although since Draft2Digital does own Smashwords, Smashwords essentially acts as their storefront. Although Draft2Digital lists Amazon, Kobo, and Barnes & Noble as options, most authors will upload to these sites separately, and in fact, that's what I do myself.

The pro of Draft2Digital is the definite time savings of using it to publish across multiple platforms, especially with platforms like Apple that are more difficult or time-consuming to learn. This is also a convenient way to make your work accessible to library platforms like Overdrive/Libby, Hoopla, and Bibliotheca, if that is important to you. Library sales have never been a huge priority of mine, but I've never been opposed to them either, so I usually just flip those switches on and then don't think about it again.
The con for Draft2Digital is that there was a period after the Smashwords migration when they received complaints about customer service and difficulty in setting up tax information, though I think that is mostly ironed out now. One potential hazard for Draft2Digital with a very specific subset of writers is that if you are a writer of, shall we say, very hard erotica, the sort that ends up in very restricted categories on most stores, you will probably have trouble publishing through Draft2Digital. This is not, however, a problem that's unique to Draft2Digital. Amazon has what is called the “erotica dungeon”, where if you publish certain kinds of, like we said, very harsh erotica, your book isn't searchable on the Amazon store. You can link to it directly, but it will never show up in any search results. Kobo in particular has had problems with erotica. Back in the 2010s, Kobo was distributing ebooks to some British retailers, and these British retailers suddenly got upset when they noticed that these kinds of hard erotica were showing up on their store pages, which was not a good look for the company. And so there was a kerfuffle until that was all sorted out. My frank opinion is that if you are writing these kinds of erotica, the big stores and Draft2Digital will never be on your side, so you are better off pursuing a Patreon strategy or running your own store on Shopify or Payhip, but that is a bit of a digression. So in terms of royalties, Draft2Digital takes 10% of the book's retail price per copy sold, which is in addition to whatever amount is taken by the specific storefront. So you are paying a bit of money in exchange for the convenience of uploading your book once to Draft2Digital and having it push the book out to all the different stores for you.
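One detail worth doing the arithmetic on: Draft2Digital's 10% comes off the retail price, not off your royalty, so it costs more than it might sound. A sketch assuming a storefront that pays a 70% rate (a common figure for Apple's indie rate, but treat the number as an assumption here):

```python
price = 4.99                        # retail price of the ebook
store_rate = 0.70                   # assumed storefront royalty rate
store_royalty = price * store_rate  # what the storefront pays out
d2d_cut = price * 0.10              # Draft2Digital's 10% of *retail*, per the episode
author_net = store_royalty - d2d_cut
print(round(author_net, 2))         # effectively 60% of list price in this scenario
```

In other words, on a 70% storefront, the aggregator's 10%-of-retail cut eats roughly one seventh of the royalty, which is the convenience fee being weighed here.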
What I do is use Draft2Digital mainly for Apple, because for a while I was using Smashwords, but Smashwords in the 2010s was a bit more persnickety than it is now: you needed to prepare a specially formatted doc file to publish on Smashwords, and sometimes getting it through the Smashwords processing onto Apple was a bit of a pain. Draft2Digital took epub files, which are much easier to work with, and after a while I switched all my Apple publishing over to Draft2Digital entirely. So that's why I use Draft2Digital for Apple and for various library services that tend to be a minor amount of sales. Because of the difficulty of publishing direct to Apple, I find that 10% is a good trade-off for me in terms of selling books on Apple.

#5: Now onto Google Play's ebook self-publishing platform, whose full name, I think, is the Google Play Books Partner Center, which lets you publish books to the Google Play Store for sale on Android devices.

The pros are that for writers interested in the international market, Google Play is another strong choice of platform, since the international mobile device market is very Android-heavy. The iPhone (Apple) tends to be concentrated mainly in the US and a few of the wealthier countries like the UK and Canada, but Android has a much more international reach in general than the iPhone. Google Play also has some interesting promotional options for ebooks, such as offering the buyer a chance to subscribe to a specific series. The cons are that some authors report that sales reporting doesn't always consistently generate reports, and others are annoyed that it only generates a CSV file (which isn't that much of a hardship for people who are familiar with Excel).
For myself, I have found that there is a bit of a reporting lag on Google Play: it will sometimes take as long as five or six days for sales to show up on the dashboard, though usually the delay is only two days. Processing new material on the Google Play Store can also be slow, sometimes taking two to three days for things to appear, though it usually gets worked out in the end. Does Google Play have a subscription service? It does not, nor does it require exclusivity, which is another point in its favor. And the royalties are quite nice here: 70% at all price points in the countries listed on their support page, which only excludes a handful of countries like India, South Korea, and Japan (because of currency conversion rules or other local laws).

So those are the ebook publishing platforms that I currently use, and because I use them myself, I would recommend them. Hopefully that is helpful to you as you are looking for places to self-publish your book as you set out to become an indie author.

So that is it for this week. Thank you for listening to The Pulp Writer Show. I hope you found the show useful. A reminder that you can listen to all the back episodes at https://thepulpwritershow.com. If you enjoyed the podcast, please leave a review on your podcasting platform of choice. Stay safe and stay healthy, and see you all next week.
In this episode, we dive deep into ScubaGear, an open-source tool developed by the Cybersecurity and Infrastructure Security Agency (CISA) as part of the Secure Cloud Business Applications (SCuBA) project. Designed to assess Microsoft 365 (M365) tenant configurations, ScubaGear helps organizations align with CISA's Secure Configuration Baselines (SCBs) to prevent costly misconfigurations. From setup to real-world applications, we unpack how ScubaGear strengthens M365 security and share practical tips for IT admins and security teams.

What You'll Learn:
Why ScubaGear Matters: How ScubaGear addresses the growing threat of cloud misconfigurations, which accounted for 30% of cloud attacks in early 2024. We discuss its origins in CISA's SCuBA project and its value for US federal agencies, private organizations, and critical infrastructure.
How ScubaGear Works: A technical breakdown of ScubaGear's PowerShell-based workflow, using Microsoft Graph APIs and Open Policy Agent (OPA) to compare tenant settings against SCBs. We cover setup requirements.
Common Misconfigurations: Examples like disabled MFA or weak DLP policies, and how ScubaGear's HTML, JSON, and CSV reports provide actionable remediation steps.
Best Practices: Tips for integrating ScubaGear into security workflows, including regular scans, policy customization, and combining it with tools like Microsoft Secure Score.
Real-World Insights: Sam shares experiences from consulting.

What did you think of this episode? Give us some feedback via our contact form, or leave us a voice message in the bottom right corner of our site.
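As a footnote to the reporting discussion above: because ScubaGear emits JSON and CSV alongside the HTML report, its results are easy to post-process when integrating the tool into a recurring scan workflow. The snippet below is a hedged sketch only; the column names (`Product`, `Control`, `Result`) and sample rows are invented for illustration and should be mapped onto the actual report schema before use:

```python
# Hypothetical post-processing of a baseline-check CSV export.
# Column names and sample data are assumptions, not ScubaGear's
# real schema; adapt the keys to the report you actually get.
import csv
import io
from collections import Counter

sample = """Product,Control,Result
aad,MFA required for all users,Fail
aad,Legacy authentication blocked,Pass
exo,DLP policy enabled,Fail
"""

rows = list(csv.DictReader(io.StringIO(sample)))
failures = Counter(r["Product"] for r in rows if r["Result"] == "Fail")
for product, count in sorted(failures.items()):
    print(f"{product}: {count} failing control(s)")
```

Pointing a loop like this at each scheduled scan's output gives a simple failing-controls trend over time, which is the "regular scans" practice the episode recommends.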
In this episode of the PowerShell Podcast, we welcome back Justin Grote, a Microsoft MVP and open-source powerhouse, for an in-depth and fast-paced conversation. Fresh off his PowerShell Wednesday presentation, Justin shares the thinking behind his latest innovations, including the creation of the high-performance ExcelFast module and his evangelism for dev containers and modern development workflows. Key topics in this episode include: Getting the most from VS Code – Justin shares power-user tips, favorite settings, and the evolution of his 1,000-line configuration file. GitHub Copilot and real-world developer productivity – How Justin's approach to AI tooling shifted after experiencing measurable value in his PowerShell workflows. Dev containers and runtime containers – A detailed breakdown of the difference, practical use cases, and how they transform collaboration, onboarding, and consistency. Excel Fast – A brand-new module optimized for high-performance reading, writing, and streaming of large Excel and CSV datasets, developed with dev containers from day one. Open-source contributions to PowerShell – Including enhanced logging for Invoke-RestMethod and building a dev container for the PowerShell repo itself. PowerShell Conf EU previews – From a 90-minute VS Code optimization deep dive to a hands-on runspaces lab with GitHub Codespaces integration. This episode is packed with practical advice, philosophy on tooling, and Justin's trademark blend of performance focus and community-first thinking. Whether you're a seasoned developer or looking to up your scripting game, you'll walk away with new ideas and resources to explore. Guest Bio – Justin Grote Justin Grote is a Microsoft MVP, PowerShell advocate, and open-source contributor with a deep focus on automation, performance, and developer productivity. Known for tools like ModuleFast and his work improving PowerShell workflows, Justin blends real-world experience with a passion for teaching and sharing. 
Whether he's optimizing VS Code, contributing to the PowerShell repo, or speaking at global conferences, Justin empowers the community with practical solutions and thoughtful insight. Links: Find Justin on GitHub, BlueSky, or on Discord (@JustinGrote): https://github.com/JustinGrote Try out ExcelFast: https://github.com/JustinGrote/ExcelFast PSConfEU Announcement: https://www.linkedin.com/feed/update/urn:li:activity:7328093268225806337/ Create Dev Container Docs: https://code.visualstudio.com/docs/devcontainers/create-dev-container SecretManagement.DpapiNG: https://github.com/jborean93/SecretManagement.DpapiNG Connect with Andrew on Socials: https://andrewpla.tech/links Catch PowerShell Wednesdays weekly at 2 PM EST on discord.gg/pdq The PowerShell Podcast hub: https://pdq.com/the-powershell-podcast The PowerShell Podcast on YouTube: https://youtu.be/dHbWFUyUaOE
In this episode of Talking Drupal, we dive into the intricacies of the Drupal marketplace initiative with our guest, Tiffany Farriss, CEO and co-owner of Palantir.net and long-time board member of the Drupal Association. We explore the goals and challenges of creating a trusted Drupal marketplace, discuss how site templates can lower the barrier to entry for new users, and examine the importance of maintaining community trust and the sustainability of Drupal. This episode also includes a spotlight on the Views CSV Source module and an in-depth discussion on community feedback, the potential value and business models for site templates, and the steps needed to make a go/no-go decision on the marketplace by the upcoming Vienna event. For show notes visit: https://www.talkingDrupal.com/504 Topics Meet Our Guest: Tiffany Farriss Module of the Week: Views CSV Source Deep Dive into Views CSV Source Introduction to the Drupal Marketplace Goals and Challenges of the Marketplace Working Group Community Feedback and Sustainability Monetization and Fairness in the Marketplace Risk Mitigation and Future Plans Exploring the Impact of Releases and Usage Challenges and Successes of the Drupal Marketplace Defining the MVP for the Drupal Marketplace Addressing Community Concerns and Governance Engaging the Community and Next Steps Final Thoughts and Contact Information Resources Marketplace initiative Guests Tiffany Farriss - palantir.net farriss Hosts Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Norah Medlin - tekNorah MOTW Correspondent Martin Anderson-Clutz - mandclu.com mandclu Brief description: Have you ever wanted to present data within your Drupal website that comes from a CSV flat file, without having to import that data to your Drupal database? There's a module for that. 
Module name/project name: Views CSV Source Brief history How old: created in March 2024 by Daniel Cothran (andileco) of JSI, though I met Daniel at Midcamp earlier this week and he was emphatic that his colleague and co-maintainer Nia Kathoni (nikathone) deserves significant credit Versions available: 1.0.11, which works with Drupal 8.8, 9, 10, and 11 Maintainership Actively maintained, latest release was last month Security coverage Test coverage Documentation - a robust README Number of open issues: 4 open issues, none of which are bugs Usage stats: 56 sites Module features and usage With Views CSV Source installed, you can create a view that uses a CSV as a source instead of the Drupal site's data. You can point to a file within your site's filesystem, or it can be a remotely hosted CSV. If the file requires authentication for access, it is also possible to include encoded credentials in a header. Now you can use CSV Fields to specify the columns you want to pull into the view, and you can use the “group by” to specify datasets to represent, for example to plot as lines in a chart. You can also create filters, either a CSV Field that acts as a standard text filter, or a CSV Field Options filter that creates a dropdown of all the unique values in a specified column. Your assembled data can be shown in tables or charts, and can also be manipulated using standard view configuration, or using contributed modules like Views Simple Math Field. The module also comes with sort and contextual filter plugins. I was impressed by a demo of Views CSV Source in a lightning talk at Midcamp yesterday, so I thought it would be fun to talk about it today.
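The module itself is Drupal configuration rather than code, but its core idea (treat a CSV, local or remote, as a queryable source and "group by" a column) can be sketched outside Drupal. A hypothetical Python illustration with invented data and column names:

```python
import csv
import io
from collections import defaultdict

# Sketch of what a CSV-backed view does conceptually: parse rows and
# group them by a column. In Drupal this is all configuration; nothing
# here is the module's actual code.

RAW = """year,team,score
2023,red,10
2023,blue,7
2024,red,12
"""

def group_rows(text: str, key: str) -> dict:
    """Parse CSV text and bucket the row dicts by the given column."""
    rows = csv.DictReader(io.StringIO(text))
    grouped = defaultdict(list)
    for row in rows:
        grouped[row[key]].append(row)
    return dict(grouped)

# For a remote CSV behind auth, the module lets you send encoded
# credentials in a header; with urllib that would look roughly like:
#   Request(url, headers={"Authorization": "Basic <encoded>"})

by_team = group_rows(RAW, "team")
print(sorted(by_team))      # ['blue', 'red']
print(len(by_team["red"]))  # 2
```

Each bucket corresponds to one dataset in the view, e.g. one line per team in a chart.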
FREE gift on our MAILING LIST! ➡️https://www.letraminuscula.com/suscribirse-lista-de-correo/ Visit our WEBSITE https://www.letraminuscula.com/ If you want to PUBLISH, write to us: contacto@letraminuscula.com Call us ☎ or message us on WhatsApp: +34640667855 SUBSCRIBE to the channel! CLICK HERE: https://bit.ly/2Wv1fdX SUMMARY: Amazon KDP is changing print book royalties from 60% to 50% for list prices under 99.99 USD starting June 10, 2025. This video explains how to adjust your prices so you don't lose income, shows practical examples, and details additional benefits such as reduced color printing costs in some marketplaces. Don't miss it if you self-publish on Amazon! ⏲TIMESTAMPS: ▶️00:13 Important changes to KDP royalties ▶️01:26 Adjusting prices so you don't lose income ▶️02:38 The change only affects low prices ▶️04:15 Everyone will have to raise prices ▶️05:38 A minimal increase customers won't notice ▶️06:53 Color books will be cheaper ▶️08:11 You lose royalties if you don't raise prices ▶️09:36 Small changes safeguard your royalties ▶️11:14 Reduction in color printing costs ▶️12:47 The royalty change affects everyone ▶️14:12 How to see affected books in a CSV file ▶️15:42 Practical example of a price adjustment ▶️17:19 How to calculate the exact new price ▶️18:47 Comparison with publisher royalties ▶️20:31 50% is still better than a publisher ▶️21:58 The real 60% is lower due to printing costs ▶️23:23 Final recommendation: adjust prices now ▶️24:39 Contact info and channel sign-off
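The "calculate the exact new price" step (17:19) comes down to simple arithmetic: if the royalty is a rate times the list price minus a fixed printing cost, the printing cost cancels out, and keeping the same royalty under the 60% to 50% change means multiplying the old price by 0.6 / 0.5 = 1.2. A sketch with illustrative numbers; the royalty formula is the commonly described KDP print model, not quoted from this video:

```python
# Worked example of the price-adjustment arithmetic. Assumption (not
# quoted from the video): KDP print royalty = rate * list_price -
# printing_cost, so printing cost cancels when equating old and new.

def royalty(rate: float, list_price: float, printing_cost: float) -> float:
    return rate * list_price - printing_cost

def price_preserving_royalty(old_price: float,
                             old_rate: float = 0.60,
                             new_rate: float = 0.50) -> float:
    # old_rate * p_old - c == new_rate * p_new - c
    # =>  p_new = p_old * old_rate / new_rate
    return round(old_price * old_rate / new_rate, 2)

old_price, printing_cost = 8.99, 2.30   # illustrative numbers only
new_price = price_preserving_royalty(old_price)
print(new_price)  # 10.79 -- a 20% list-price bump keeps the royalty flat
print(round(royalty(0.60, old_price, printing_cost), 2))
print(round(royalty(0.50, new_price, printing_cost), 2))
```

The last two prints show the old and new royalty landing on the same amount, which is the video's point: a modest price increase fully absorbs the rate cut.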
Mikah Sargent takes viewers on a comprehensive tour of the Passwords app in macOS Sequoia, demonstrating how this robust tool serves as a complete credential management system. From passkeys to verification codes and shared passwords, Mikah explores how Apple has created a secure yet user-friendly solution for managing all your login information across devices. Passkeys - These are created automatically when you set up passkey authentication on websites, with limited editing options but the ability to add notes or modify the associated website. Verification codes - Users can add two-factor authentication codes either by entering setup keys manually or scanning QR codes. Wi-Fi - The app stores Wi-Fi network credentials, displays network security information (WPA2/WPA3), and lets users generate QR codes for easy sharing. Security recommendations - The app alerts users when passwords may be compromised in data breaches using Apple's differential privacy techniques that protect user privacy. Password sharing feature - Users can create groups to share specific login credentials with family members or others, with granular control over which passwords are shared. Password importing - The app supports importing passwords from CSV files, though Mikah strongly recommends deleting these files immediately after import for security. Cross-device synchronization - All passwords sync across Apple devices with end-to-end encryption via iCloud. Windows compatibility - Even Windows users can access their passwords through the iCloud Passwords app, making it a versatile solution. Passwords User Guide - Apple Support - https://support.apple.com/guide/passwords/welcome/1.1/mac/15.4.1 Host: Mikah Sargent Download or subscribe to Hands-On Mac at https://twit.tv/shows/hands-on-mac Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Allen Wyma talks with WindSoilder, a contributor to Nushell, a shell that treats data as structured tables. WindSoilder shares his journey into programming, his work on Nushell, and how Rust has shaped his development experience. Contributing to Rustacean Station Rustacean Station is a community project; get in touch with us if you'd like to suggest an idea for an episode or offer your services as a host or audio editor! Twitter: @rustaceanfm Discord: Rustacean Station Github: @rustacean-station Email: hello@rustacean-station.org Timestamps [@00:00] - Meet WindSoilder: Python developer and Rust enthusiast [@04:15] - Discovering Rust and starting with Nushell [@09:30] - Structured data pipelines in Nushell [@15:20] - Using Nushell for CSV, JSON, and HTTP tasks [@20:45] - Integrating Nushell with external commands and plugins [@27:35] - From contributor to core team member [@33:10] - Learning Rust through Nushell: Challenges and rewards [@38:50] - Upcoming features and improvements in Nushell [@44:25] - Advice for new contributors and Rust beginners [@47:40] - Final thoughts and community resources Credits Intro Theme: Aerocity Audio Editing: Plangora Hosting Infrastructure: Jon Gjengset Show Notes: Plangora Hosts: Allen Wyma
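Nushell's core idea, discussed around [@09:30], is that pipeline stages pass structured records rather than raw text, so a CSV flows through filters as typed rows; in Nushell itself that looks roughly like `open data.csv | where score > 8 | to json`. As an analogy only (this is not Nushell code and not from the episode), the same shape in Python:

```python
# The structured-pipeline idea, sketched in Python: each stage consumes
# and produces records (dicts), not raw text lines.

import csv
import io
import json

DATA = "name,score\nalpha,9\nbeta,5\n"

def open_csv(text):          # analogous to `open data.csv`
    return list(csv.DictReader(io.StringIO(text)))

def where(rows, pred):       # analogous to `where score > 8`
    return [r for r in rows if pred(r)]

def to_json(rows):           # analogous to `to json`
    return json.dumps(rows)

result = to_json(where(open_csv(DATA), lambda r: int(r["score"]) > 8))
print(result)   # [{"name": "alpha", "score": "9"}]
```

The point of the analogy: because every stage sees records with named fields, filters and transforms compose without the text-parsing glue (cut, awk, regex) a traditional shell needs.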
This week, we discuss the rise of MCP, Google's Agent2Agent protocol, and 20 years of Git. Plus, lazy ways to get rid of your junk. Watch the YouTube Live Recording of Episode (https://www.youtube.com/live/o2bmkzXOzHE?si=bPrbuPlKYODQj88s) 514 (https://www.youtube.com/live/o2bmkzXOzHE?si=bPrbuPlKYODQj88s) Runner-up Titles They like to keep it tight, but I'll distract them Bring some SDT energy Salesforce is where AI goes to struggle I like words Rundown MCP The Strategy Behind MCP (https://fintanr.com/links/2025/03/31/mcp-strategy.html?utm_source=substack&utm_medium=email) Google's Agent2Agent Protocol Helps AI Agents Talk to Each Other (https://thenewstack.io/googles-agent2agent-protocol-helps-ai-agents-talk-to-each-other/) Announcing the Agent2Agent Protocol (A2A)- Google Developers Blog (https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/) MCP: What It Is and Why It Matters (https://addyo.substack.com/p/mcp-what-it-is-and-why-it-matters) 20 years of Git. Still weird, still wonderful. 
(https://blog.gitbutler.com/20-years-of-git/) A love letter to the CSV format (https://github.com/medialab/xan/blob/master/docs/LOVE_LETTER.md?ref=labnotes.org) Relevant to your Interests JFrog Survey Surfaces Limited DevSecOps Gains - DevOps.com (https://substack.com/redirect/dc38a19b-484e-47bc-83ec-f0413af42718?j=eyJ1IjoiMmw5In0.XyGUvWHNbIDkkVfjKDkxiDWJVFXc4dKUhxHaMrlgmdI) Raspberry Pi's sliced profits are easier to swallow than its valuation (https://on.ft.com/42d3mol) 'I begin spying for Deel': (https://www.yahoo.com/news/begin-spying-deel-rippling-employee-151407449.html) Bill Gates Publishes Original Microsoft Source Code in a Blog Post (https://www.cnet.com/tech/computing/bill-gates-publishes-original-microsoft-source-code-in-a-blog-post/) WordPress.com owner Automattic is laying off 16 percent of workers (https://www.theverge.com/news/642187/automattic-wordpress-layoffs-matt-mullenweg) Intel, TSMC recently discussed chipmaking joint venture (https://www.reuters.com/technology/intel-tsmc-tentatively-agree-form-chipmaking-joint-venture-information-reports-2025-04-03/) TikTok deal scuttled because of Trump's tariffs on China (https://www.nbcnews.com/politics/politics-news/trump-tiktok-ban-extension-rcna199394) NVIDIA Finally Adds Native Python Support to CUDA (https://thenewstack.io/nvidia-finally-adds-native-python-support-to-cuda/) Cloudflare Acquires Outerbase (https://www.cloudflare.com/press-releases/2025/cloudflare-acquires-outerbase-to-expand-developer-experience/) UK loses bid to keep Apple appeal against demand for iPhone 'backdoor' a secret (https://www.cnbc.com/2025/04/07/uk-loses-bid-to-keep-apple-appeal-against-iphone-backdoor-a-secret.html) Cloud Asteroids | Wiz (https://www.wiz.io/asteroids) Unpacking Google Cloud Platform's Acquisition Of Wiz (https://moorinsightsstrategy.com/unpacking-google-cloud-platforms-acquisition-of-wiz/) Trade, Tariffs, and Tech 
(https://stratechery.com/2025/trade-tariffs-and-tech/?access_token=eyJhbGciOiJSUzI1NiIsImtpZCI6InN0cmF0ZWNoZXJ5LnBhc3Nwb3J0Lm9ubGluZSIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJzdHJhdGVjaGVyeS5wYXNzcG9ydC5vbmxpbmUiLCJhenAiOiJIS0xjUzREd1Nod1AyWURLYmZQV00xIiwiZW50Ijp7InVyaSI6WyJodHRwczovL3N0cmF0ZWNoZXJ5LmNvbS8yMDI1L3RyYWRlLXRhcmlmZnMtYW5kLXRlY2gvIl19LCJleHAiOjE3NDY2MjA4MTAsImlhdCI6MTc0NDAyODgxMCwiaXNzIjoiaHR0cHM6Ly9hcHAucGFzc3BvcnQub25saW5lL29hdXRoIiwic2NvcGUiOiJmZWVkOnJlYWQgYXJ0aWNsZTpyZWFkIGFzc2V0OnJlYWQgY2F0ZWdvcnk6cmVhZCBlbnRpdGxlbWVudHMiLCJzdWIiOiJDS1RtckdldHdmM1lYa3FCYkpKaUgiLCJ1c2UiOiJhY2Nlc3MifQ.pVeppxFZcYy960AbHM--oz5gzQdMEa_mv3ZPrqrZmbw9PhwL3iCEQ7_PtfPEKgInTfvSGWofXW0ZjAN-G_Eug5BlvwlF8T6HhXOCNJlwJJeqkWKvNdjvVz0t6bc5fOjn4Tbt_JobtrwxIEe-4-L7QRMhzFj9ajiiRqU6KNi3qYxWScg3XWfYmuhRdItQsgWINcSyW9iLaTkDLga_m95MMBNAat-CXDhEeKKCrAApZBM_RoNFaQ3s679vslz2IbJuCIAN1jVvZYR2Vg18lDbwubPiddDQAOkjs77PZRX_tCnMSwVXtOq0S1cCn4GZIw1qPY8j0qWWmkUck_izqPAveg) Google Workspace gets automation flows, podcast-style summaries (https://techcrunch.com/2025/04/09/google-workspace-gets-automation-flows-podcast-style-summaries/?guccounter=1&guce_referrer=aHR0cHM6Ly9uZXdzLmdvb2dsZS5jb20v&guce_referrer_sig=AQAAAAm5axmZnaAYjPgnDoqozIFkZHFPG8FHWa9y8pWwoQMN-oJ8MvJjY0IOg7Ej35bBB1Y2Ej192X3dHr5Q8PZ4i8WP_VNeXKj4f1n-KXFgqrpjfjUbiUvE4eGIl1j1VPWIg62ApISVGhYQ-__bXdIteBex8_k5-wxcpSYtfmlAFxsk) Zelle is shutting down its app. 
Here's how you can still use the service (https://www.cnn.com/2025/04/03/business/zelle-cash-transferring-app-shuts-down/index.html) One year ago Redis changed its license – and lost most of its external contributors (https://devclass.com/2025/04/01/one-year-ago-redis-changed-its-license-and-lost-most-of-its-external-contributors/?ck_subscriber_id=512840665&utm_source=convertkit&utm_medium=email&utm_campaign=[Last%20Week%20in%20AWS]%20Issue%20#417:%20Way%20of%20the%20Weasel,%20RDS%20and%20SageMaker%20Edition%20-%2017192200) Tailscale raises $160 Million (USD) Series C to build the New Internet (https://tailscale.com/blog/series-c) Nonsense NFL announces use of virtual measurement technology for first downs (https://www.nytimes.com/athletic/6247338/2025/04/01/nfl-announces-virtual-first-down-measurement-technology/?source=athletic_scoopcity_newsletter&campaign=13031970&userId=56655) Listener Feedback GitJobs (https://gitjobs.dev/) Freecycle (https://www.freecycle.org) Conferences Tanzu Annual Update AI PARTY! (https://go-vmware.broadcom.com/april-moment-2025?utm_source=cote&utm_campaign=devrel&utm_medium=newsletter), April 16th, Coté speaking DevOps Days Atlanta (https://devopsdays.org/events/2025-atlanta/welcome/), April 29th-30th Cloud Foundry Day US (https://events.linuxfoundation.org/cloud-foundry-day-north-america/), May 14th, Palo Alto, CA, Coté speaking Free AI workshop (https://vmwarereg.fig-street.com/051325-tanzu-workshop/), May 13th,
the day before Cloud Foundry Day (https://events.linuxfoundation.org/cloud-foundry-day-north-america/) NDC Oslo (https://ndcoslo.com/), May 21st-23rd, Coté speaking SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: KONNWEI KW208 12V Car Battery Tester (https://www.amazon.com/dp/B08MPXGSGN?ref=ppx_yo2ov_dt_b_fed_asin_title) Matt: Search Engine: The Memecoin Casino (https://www.searchengine.show/planet-money-the-memecoin-casino/) Coté: Knipex Cobra High-Tech Water Pump Pliers (https://www.amazon.com/atramentized-125-self-service-87-01/dp/B098D1HNGY/) Photo Credits Header
(https://unsplash.com/photos/a-bicycle-parked-on-the-side-of-a-road-next-to-a-traffic-sign-wPv1QV_i8ek)
The role of Marc Spautz within the CSV and the question of political responsibility, as well as the coalition agreement in Germany, are topics in today's press.
Brandon Liu is an open source developer and creator of the Protomaps basemap project. We talk about how static maps help developers build sites that last, the PMTiles file format, the role of OpenStreetMap, and his experience funding and running an open source project full time. Protomaps Protomaps PMTiles (File format used by Protomaps) Self-hosted slippy maps, for novices (like me) Why Deploy Protomaps on a CDN User examples Flickr Pinball Map Toilet Map Related projects OpenStreetMap (Dataset protomaps is based on) Mapzen (Former company that released details on what to display based on zoom levels) Mapbox GL JS (Mapbox developed source available map rendering library) MapLibre GL JS (Open source fork of Mapbox GL JS) Other links HTTP range requests (MDN) Hilbert curve Transcript You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: I'm talking to Brandon Liu. He's the creator of Protomaps, which is a way to easily create and host your own maps. Let's get into it. [00:00:09] Brandon: Hey, so thanks for having me on the podcast. So I'm Brandon. I work on an open source project called Protomaps. What it really is, is if you're a front end developer and you ever wanted to put maps on a website or on a mobile app, then Protomaps is sort of an open source solution for doing that that I hope is something that's way easier to use than, um, a lot of other open source projects. Why not just use Google Maps? [00:00:36] Jeremy: A lot of people are gonna be familiar with Google Maps. Why should they worry about whether something's open source? Why shouldn't they just go and use the Google Maps API? [00:00:47] Brandon: So Google Maps is like an awesome thing, it's an awesome product, probably one of the best tech products ever, right? And just to have a map that tells you what restaurants are open, and something that I use like all the time, especially when you're traveling, it has all that data.
And the most amazing part is that it's free for consumers but it's not necessarily free for developers. Like if you wanted to embed that map onto your website or app, that usually has an API cost, which still has a free tier and is affordable. But one motivation, one basic reason to use open source is if you have some project that doesn't really fit into that pricing model. You know, like where you have to pay the cost of Google Maps, you have a side project, a nonprofit, that's one reason. But there's lots of other reasons related to flexibility or customization where you might want to use open source instead. Protomaps examples [00:01:49] Jeremy: Can you give some examples where people have used Protomaps and where that made sense for them? [00:01:56] Brandon: I follow a lot of the use cases and I also don't know about a lot of them because I don't have an API where I can track a hundred percent of the users. Some of them use the hosted version, but I would say most of them probably use it on their own infrastructure. One of the cool projects I've been seeing is called Toilet Map. And what Toilet Map is, if you're in the UK and you want to find a public restroom, then it maps out, sort of crowdsourced, all of the public restrooms. And that's important for like a lot of people if they have health issues, they need to find that information. And just a lot of different projects in the same vein. There's another one called Pinball Map which is sort of a hobby project to find all the pinball machines in the world. And they wanted to have a customized map that fit in with their theme of pinball. So these sorts of really cool indie projects are the ones I'm most excited about. Basemaps vs Overlays [00:02:57] Jeremy: And if we talk about, like the pinball map as an example, there's this concept of a basemap and then there's the things that you lay on top of it. What is a basemap and then is the pinball locations is that part of it or is that something separate?
[00:03:12] Brandon: It's usually something separate. The example I usually use is if you go to a real estate site, like Zillow, you'll open up the map of Seattle and it has a bunch of pins showing all the houses, and then it has some information beneath it. That information beneath it is like labels telling, this neighborhood is Capitol Hill, or there is a park here. But all that information is common to a lot of use cases and it's not specific to real estate. So I think usually that's the distinction people use in the industry between like a base map versus your overlay. The overlay is like the data for your product or your company while the base map is something you could get from Google or from Protomaps or from Apple or from Mapbox that kind of thing. PMTiles for hosting the basemap and overlays [00:03:58] Jeremy: And so Protomaps in particular is responsible for the base map, and that information includes things like the streets and the locations of landmarks and things like that. Where is all that information coming from? [00:04:12] Brandon: So the base map information comes from a project called OpenStreetMap. And I would also, point out that for Protomaps as sort of an ecosystem. You can also put your overlay data into a format called PMTiles, which is sort of the core of what Protomaps is. So it can really do both. It can transform your data into the PMTiles format which you can host and you can also host the base map. So you kind of have both of those sides of the product in one solution. [00:04:43] Jeremy: And so when you say you have both are you saying that the PMTiles file can have, the base map in one file and then you would have the data you're laying on top in another file? Or what are you describing there? [00:04:57] Brandon: That's usually how I recommend to do it. Oftentimes there'll be sort of like, a really big basemap 'cause it has all of that data about like where the rivers are. 
Or while, if you want to put your map of toilets or park benches or pickleball courts on top, that's another file. But those are all just like assets you can move around like JSON or CSV files. Statically Hosted [00:05:19] Jeremy: And I think one of the things you mentioned was that your goal was to make Protomaps or the, the use of these PMTiles files easy to use. What does that look like for, for a developer? I wanna host a map. What do I actually need to, to put on my servers? [00:05:38] Brandon: So my usual pitch is that basically if you know how to use S3 or cloud storage, that you know how to deploy a map. And that, I think is the main sort of differentiation from most open source projects. Like a lot of them, they call themselves like, like some sort of self-hosted solution. But I've actually avoided using the term self-hosted because I think in most cases that implies a lot of complexity. Like you have to log into a Linux server or you have to use Kubernetes or some sort of Docker thing. What I really want to emphasize is the idea that, for Protomaps, it's self-hosted in the same way like CSS is self-hosted. So you don't really need a service from Amazon to host the JSON files or CSV files. It's really just a static file. [00:06:32] Jeremy: When you say static file that means you could use any static web host to host your HTML file, your JavaScript that actually renders the map. And then you have your PMTiles files, and you're not running a process or anything, you're just putting your files on a static file host. [00:06:50] Brandon: Right. So I think if you're a developer, you can also argue like a static file server is a server. It's you know, it's the cloud, it's just someone else's computer. It's really just nginx under the hood. But I think static storage is sort of special. If you look at things like static site generators, like Jekyll or Hugo, they're really popular because they're a commodity or like the storage is a commodity. 
And you can take your blog, make it a Jekyll blog, hosted on S3. One day, Amazon's like, we're charging three times as much, so you can move it to a different cloud provider. And that's all vendor neutral. So I think that's really the special thing about static storage as a primitive on the web. Why running servers is a problem for resilience [00:07:36] Jeremy: Was there a prior experience you had? Like you've worked with maps for a very long time. Were there particular difficulties you had where you said I just gotta have something that can be statically hosted? [00:07:50] Brandon: That's sort of exactly why I got into this. I've been working sort of in and around the map space for over a decade, and Protomaps is really like me trying to solve the same problem I've had over and over again in the past, just like once and forever, right? Because like once this problem is solved, like I don't need to deal with it again in the future. So I've worked at a couple of different companies before, mostly as a contractor, for like a humanitarian nonprofit, for a design company doing things like web applications to visualize climate change. Or for even like museums, like digital signage for museums. And oftentimes they had some sort of data visualization component, but always sort of the challenge of how to like store and also distribute that data was something for which there weren't really great open source solutions. So just for map data, that's really what motivated that design for Protomaps. [00:08:55] Jeremy: And in those, those projects in the past, were those things where you had to run your own server, run your own database, things like that? [00:09:04] Brandon: Yeah. And oftentimes we did, we would spin up an EC2 instance, for maybe one client and then we would have to host this server serving map data forever.
Maybe the client goes away, or I guess it's good for business if you can sign some sort of long-term support contract with that client saying, hey, you know, we're done with the project, but you can pay us to maintain the EC2 server for the next 10 years. And that's attractive. But it's also sort of a pain, because usually what happens is, if people are given the choice, like a developer, between either I can manage the server on EC2 or on Rackspace or Hetzner or whatever, or I can go pay a SaaS to do it, in most cases businesses will choose to pay the SaaS. So that's really what creates a sort of lock-in: given this choice between running the server or paying the SaaS, businesses will almost always go and pay the SaaS. [00:10:05] Jeremy: Yeah. And in this case, you either find some kind of free hosting or low-cost hosting just to host your files, and you upload the files and then you're good from there. You don't need to maintain anything. [00:10:18] Brandon: Exactly, and that's really the ideal use case. So I have some users, these climate science consulting agencies, and they might have like a one-off project where they have to generate the data once, but instead of having to maintain this server for the lifetime of that project, they just have a file on S3 and, like, who cares? If that costs a couple dollars a month to run, that's fine, but it's not like S3 is gonna be deprecated or end up on an insecure version of Ubuntu or something. So that's really the ideal set of constraints for using Protomaps. [00:10:58] Jeremy: Yeah. Something this also makes me think about is the resilience of sites, like remaining online, because I interviewed Kyle Drake, who runs Neocities, which is like a modern version of GeoCities.
And if I remember correctly, he was mentioning how a lot of old websites from that time, if they were running a server backend, like they were running PHP or something like that, if you were to try to go to those sites now, they're pretty much all dead, because there needed to be someone dedicated to running a Linux server, making sure things were patched, and so on and so forth. But for static sites, like the ones that used to be hosted on GeoCities, you can go to the Internet Archive or other websites and they were just files, right? You can bring 'em right back up, and if anybody just puts 'em on a web server, then you're good. They're still alive. Case study of a newsroom preferring static hosting [00:11:53] Brandon: Yeah, exactly. One place that's kind of surprising but makes sense where this comes up is for newspapers, actually. Among the users of Protomaps is the Washington Post. And the reason they use it is not necessarily because they don't want to pay for a SaaS like Google, but because if they make an interactive story, they have to guarantee that it still works in a couple of years. And that's like a policy decision from the editorial board, which is, you can't write an article if people can't view it in five years. But if your interactive data story is reliant on a third-party API, and that third-party API becomes deprecated, or it changes the pricing, or it, you know, gets acquired, then your journalism story is not gonna work anymore. So I have seen really good uptake among local newsrooms, and even big ones, to use things like Protomaps just because it makes sense for the requirements. Working on Protomaps as an open source project for five years [00:12:49] Jeremy: How long have you been working on Protomaps and the parts that it's made up of, such as PMTiles? [00:12:58] Brandon: I've been working on it for about five years, maybe a little more than that. It's sort of my pandemic-era project.
But the PMTiles part, which is really the heart of it, only came in about halfway. Why not make a SaaS? [00:13:13] Brandon: So honestly, when I first started it, I thought it was gonna be another SaaS, and then I looked at what the environment was around it. And I'm like, uh, so I don't really think I wanna do that. [00:13:24] Jeremy: When, when you say you looked at the environment around it, what do you mean? Why did you decide not to make it a SaaS? [00:13:31] Brandon: Because there already is a lot of SaaS out there. And I think the opportunity of making something that is unique in terms of those use cases, like I mentioned with newsrooms, was clear. Like it was clear that there was some other solution that could be built that would fit these needs better, while if it was a SaaS, there are plenty of those out there. And I don't necessarily think that they're well differentiated. A lot of them all use OpenStreetMap data. And it seems like they mainly compete on price. It's like who can build the best three-column pricing model. And then once you do that, you need to build billing and metrics and authentication, and those problems don't really interest me. So I think, although I acknowledge sort of the indie hacker ethos now is to build a SaaS product with a monthly subscription, that's something I very much chose not to do, even though it is for sure the best way to build a business. [00:14:29] Jeremy: Yeah, I mean, I think a lot of people can appreciate that perspective, because it's almost like we have SaaS overload, right? Where you have so many little bills for your project, where you're like, another $5 a month, another $10 a month, or if you're a business, right? Those, you add a bunch of zeros, and at some point it's just, how many of these are we gonna stack on here? [00:14:53] Brandon: Yeah. And honestly, I really think as programmers, we're not really great at choosing how to spend money. Like, a $10 SaaS?
That's like nothing, you know? So I can go to Starbucks and I can buy a pumpkin spice latte, and that's like $10 basically now, right? And I'm able to make that consumer choice in an instant, just to spend money on that. But then if you're like, oh, spend $10 on a SaaS that somebody put a lot of work into, then you're like, oh, that's too expensive, I could just do it myself. So I'm someone that also subscribes to a lot of SaaS products, and I think for a lot of things it's a great fit. Many open source SaaS projects are not easy to self host [00:15:37] Brandon: But there's always this tension between an open source project that you might be able to run yourself and a SaaS. And I think a lot of projects are at different parts of the spectrum. But for Protomaps, it's very much like I'm trying to move maps to being something that is so easy to run yourself that anyone can do it. [00:16:00] Jeremy: Yeah, and I think you can really see it with, there's a few SaaS projects that are successful and they're open source, but then you go to look at the self-hosting instructions, and it's either really difficult to find, and you find it, and then the instructions maybe don't work, or it's really complicated. So I think you're doing the opposite with Protomaps. As a user, I'm sure we're all appreciative, but I wonder, in terms of trying to make money, if that's difficult. [00:16:30] Brandon: No, for sure. It is not a good way to make money, because I think the ideal situation for an open source project that wants to make money is the product itself is fundamentally complicated, to where people are scared to run it themselves. A good example I can think of is Supabase. Supabase is sort of a platform as a service based on Postgres. And if you wanted to run it yourself, well, you need to run Postgres, and you need to handle backups and authentication and logging, and that stuff all needs to work and be production ready.
So I think a lot of people, like, they don't trust themselves to run database backups correctly, 'cause if you get it wrong once, then you're kind of screwed. So I think that fundamental aspect of the product, like a database, is something that is very, very ripe for being a SaaS while still being open source, because it's fundamentally hard to run. Another one I can think of is Tailscale, which is a VPN that works end to end. That's something where, you know, it has this networking complexity where a lot of developers don't wanna deal with that. So they'd happily pay for Tailscale as a service. There are a lot of products or open source projects that eventually end up just changing to becoming a hosted service. Businesses going from open source to closed or restricted licenses [00:17:58] Brandon: But then in that situation, why would they keep it open source, right? Like, if it's easy to run yourself, well, doesn't that sort of cannibalize their business model? And I think that's really the tension overall in these open source companies. So you saw it happen to things like Elasticsearch, to things like Terraform, where they eventually changed the license to one that makes it difficult for other companies to compete with them. [00:18:23] Jeremy: Yeah, I mean, there's been a number of cases like that. I mean, specifically within the mapping community, one I can think of was Mapbox. They have Mapbox GL, which was a JavaScript client to visualize maps, and they moved from, I forget which license they picked, but they moved to a much more restrictive license. I wonder what your thoughts are on something that releases as open source but then becomes something maybe a little more muddy. [00:18:55] Brandon: Yeah, I think it totally makes sense, because if you look at their business and their funding, it seems like for Mapbox, I haven't used it in a while, but my understanding is a lot of their business now is car companies and doing in-dash navigation.
And that is probably a way better business than trying to serve people making maps of toilets. And I think sort of the beauty of it is that, so Mapbox, the story is they had a JavaScript renderer called Mapbox GL JS. And they changed that to a source-available license a couple years ago. And there's a fork of it that I'm sort of involved in called MapLibre GL. But I think the cool part is Mapbox paid employees for years, probably millions of dollars in total, to work on this thing and just gave it away for free, right? So everyone can benefit from that work they did. It's not like that code went away once they changed the license. Well, the old version has been forked. It's going its own way now. It's quite different than the new version of Mapbox, but I think it's extremely generous that they were able to pay people for years, you know, a competitive salary, and just give that away. [00:20:10] Jeremy: Yeah, so we should maybe look at it as, it was a gift while it was open source, and they've given it to the community, and they're continuing on their own path, but at least the community running MapLibre, they can run with it, right? It's not like it just disappeared. [00:20:29] Brandon: Yeah, exactly. And that is something that I use for Protomaps quite extensively. Like, it's the primary way of showing maps on the web, and I've been trying to work on some enhancements to it to have better internationalization, for example if you are in South Asia, where it does not show languages correctly. So I think it is being taken in a new direction. And I think sort of the combination of Protomaps and MapLibre addresses a lot of use cases, like I mentioned earlier, with these hobby projects, indie projects that are almost certainly not interesting to someone like Mapbox or Google as a business, but that I'm happy to support as a small business myself.
Financially supporting open source work (GitHub sponsors, closed source, contracts) [00:21:12] Jeremy: In my previous interview with Tom, one of the main things he mentioned was that creating a mapping business is incredibly difficult, and he said he probably wouldn't do it again. So in your case, you're building Protomaps, which you've admitted is easy to self-host. So there's not a whole lot of incentive for people to pay you. How is that working out for you? How are you supporting yourself? [00:21:40] Brandon: There's a couple of strategies that I've tried and oftentimes failed at. Just to go down the list: so I do have GitHub sponsors, and I do have a hosted version of Protomaps you can use if you don't want to bother copying a big file around. But the way I do the billing for that is through GitHub sponsors. If you want to use this thing I provide, then just be a sponsor. And that definitely pays for itself, like the cost of running it, and that's great. GitHub sponsors is so easy to set up. It just removes you having to deal with Stripe or something, 'cause a lot of people, their credit card information is already in GitHub. GitHub sponsors, I think, is awesome if you want to cover costs for a project. But I think very few people are able to make that work at something that's like a salary-job level. It's sort of like Twitch streaming, you know, there's a handful of people that are full-time streamers, and then you look down the list on Twitch and it's a lot of people that have like 10 viewers. But some of the other things I've tried: I actually started out publishing the base map as a closed source thing, where I would sell sort of a data package. Instead of being a SaaS, I'd be like, here's a one-time download of the premium data, and you can buy it. And quite a few people bought it. I just priced it at like $500 for this thing, and I thought that was an interesting experiment.
The main reason it's interesting is because the people that it attracts, in terms of being curious about your products, are all people willing to pay money. While if you start out with everything being open source, then the people that are gonna try it are only the people that want to get something for free. So what I discovered is actually, once you transition that thing from closed source to open source, a lot of the people that used to pay you money will still keep paying you money, because it wasn't necessarily that the closed source thing was why they wanted to pay. They just valued the thought you'd put into it, your expertise, for example. So I think that is one thing that I tried at the beginning: just start out closed source, proprietary, then make it open source. That's interesting to people. If you go the other way, people are really mad: if you start out with something open source and then later on you're like, oh, it's some other license, then people are like, that's so rotten. But I think doing it the other way is quite valuable in terms of being able to find an audience. [00:24:29] Jeremy: And when you said it was closed source and paid, then open source, do you still sell those map exports? [00:24:39] Brandon: I don't right now. It's something that I might do in the future, you know, like have small customizations of the data that are available for a fee. Still, the core OpenStreetMap-based map that's like a hundred gigs, you can just download. And that'll always just be a free download, just because that's already out there. All the source code to build it is open source. So even if I said, oh, you have to pay for it, then someone else could just do it, right? So there's no real reason to make that some sort of paywall thing.
But I think overall, if the project is gonna survive in the long term, it's important that... I'd ideally like to be able to grow a team, have a small group of people that can dedicate the time to growing the project in the long term. But I'm still trying to figure that out right now. [00:25:34] Jeremy: And when you mentioned that when you went from closed to open and people were still paying you, you don't sell a product anymore. What were they paying for? [00:25:45] Brandon: So I have some contracts with companies, basically, like if they need a feature or they need a customization in this way, then I am very open to those. And I sort of set it up to make it clear from the beginning that this is not just a free thing on GitHub; this is something that you could pay for if you need help with it, if you need support, if you want it. I'm also a little cagey about the word support, because I think it sounds a little bit too wishy-washy. Pretty much, if you need access to the developers of an open source project, I think that's something that businesses are willing to pay for. And I think making that clear to potential users is a challenge. But I think that is one way that you might be able to make a living out of open source. [00:26:35] Jeremy: And I think you said you'd been working on it for about five years. Has that mostly been full time? [00:26:42] Brandon: It's been on and off. It's sort of my pandemic-era project. But I've spent a lot of time, most of my time, working on the open source project at this point. So I have done some things that were more just like, I'm doing a customization or a private deployment for some client. But that's been a minority of the time. Yeah. [00:27:03] Jeremy: It's still impressive to have an open source project that is easy to self-host and yet is still able to support you working on it full time.
I think a lot of people might make the assumption that there's nothing to sell if something is easy to use. But this sort of sounds like a counterpoint to that. [00:27:25] Brandon: I think I'd like it to be. So when you come back to the point of it being easy to self-host, well, again, I think about it as a primitive of the web. For example, if you wanted to start a business today as hosted CSS files, you know, where you upload your CSS and then you get developers to pay you a monthly subscription for how many times they fetched a CSS file, well, I think most developers would be like, that's stupid, because it's just an open specification, you just upload a static file. And really my goal is to make Protomaps the same way, where it's obvious that there's not really some sort of lock-in or some sort of secret sauce in the server that does this thing. How PMTiles works and building a primitive of the web [00:28:16] Brandon: If you look at video, for example, a lot of the tech for how Protomaps and PMTiles works is based on parts of the HTTP spec that were made for video. And 20 years ago, if you wanted to host a video on the web, you had to have a RealPlayer license or Flash. So you had to go license some server software from RealMedia or from Macromedia so you could stream video to a browser plugin. But now in HTML you can just embed a video file. And no one's like, oh well, I need to go pay for my video serving license. I mean, there is such a thing, like YouTube doesn't really use that, for DRM reasons, but people just have the assumption that video is a primitive on the web. So if we're able to make maps the same way, like a primitive on the web, then there isn't really some obvious business or licensing model behind how that works, just because it's a thing, and it helps a lot of people do their jobs, and people are happy using it. So why bother?
[00:29:26] Jeremy: You mentioned that it uses tech that was used for streaming video. What tech specifically is it? [00:29:34] Brandon: So it is byte-range serving. So when you open a video file on the web, let's say it's like a 100 megabyte video, you don't have to download the entire video before it starts playing. It streams parts out of the file based on, like, what frames... I mean, it's based on the frames in the video. So it can start streaming immediately, because it's organized in a way where the first few frames are at the beginning. And what PMTiles really is, is it's just like a video, but in space instead of time. So it's organized in a way where the zoomed-out views are at the beginning and the most zoomed-in views are at the end. So when you're panning or zooming in the map, all you're really doing is fetching byte ranges out of that file, the same way as a video. But it's organized in this tiled way on a space-filling curve. It's a little bit complicated how it works internally, and I think it's kind of cool, but that's sort of an implementation detail. [00:30:35] Jeremy: And to the person deploying it, it just looks like a single file. [00:30:40] Brandon: Exactly, in the same way an mp3 audio file is, or a JSON file is. [00:30:47] Jeremy: So with a video, I can sort of see how, as someone seeks through the video, they start at the beginning, and then they go to the middle if they wanna see the middle. For a map, as somebody scrolls around the map, are you seeking all over the file, or does the way it's structured have a little less chaos? [00:31:09] Brandon: It's structured. And that's kind of the main technical challenge behind building PMTiles: you have to be sort of clever so you're not spraying the reads everywhere. So it uses something called a Hilbert curve, which is a mathematical concept of a space-filling curve, where it's one continuous curve that essentially lets you break 2D space into 1D space.
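The Hilbert mapping described here can be sketched in a few lines. This is the textbook iterative algorithm for a square grid, not Protomaps' actual implementation; the grid size `n` must be a power of two.

```python
def _rotate(n, x, y, rx, ry):
    # Rotate/flip a quadrant so the sub-squares line up with the curve's shape.
    if ry == 0:
        if rx == 1:
            x = n - 1 - x
            y = n - 1 - y
        x, y = y, x
    return x, y

def hilbert_d(n, x, y):
    """Map a tile coordinate (x, y) on an n-by-n grid (n a power of two)
    to its distance d along the Hilbert curve. Tiles that are near each
    other on the map get nearby values of d, so their bytes cluster
    together in the archive and a pan/zoom touches a small byte range."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = _rotate(n, x, y, rx, ry)
        s //= 2
    return d

# The four cells of a 2x2 grid, visited in Hilbert order:
order = sorted([(x, y) for x in range(2) for y in range(2)],
               key=lambda p: hilbert_d(2, *p))
print(order)  # [(0, 0), (0, 1), (1, 1), (1, 0)]
```

The curve visits every cell exactly once, which is what makes it usable as a 2D-to-1D index for tile coordinates.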
So if you've seen some maps of IP space, it uses this crazy-looking curve that hits all the points in one continuous line. And that's the same concept behind PMTiles: if you're looking at one part of the world, you're sort of guaranteed that all of those parts you're looking at are quite close to each other, and the data you have to transfer is quite minimal compared to if you just had it at random. [00:32:02] Jeremy: How big do the files get? If I have a PMTiles of the entire world, what kind of size am I looking at? [00:32:10] Brandon: Right now, the default one I distribute is 128 gigabytes, so it's quite sizable, although you can slice parts out of it remotely. So if you just wanted California, or just wanted LA, or just wanted only a couple of zoom levels, like from zero to 10 instead of zero to 15, there is a command line tool that's also called pmtiles that lets you do that. Issues with CDNs and range queries [00:32:35] Jeremy: And when you're working with files of this size, I mean, let's say I am working with a CDN in front of my application. I'm not typically accustomed to hosting something that's that large, and something where you're seeking all over the file. Is that ever an issue, or is that something that's just taken care of by the browser and taken care of by the hosts? [00:32:58] Brandon: That is an issue, actually, so a lot of CDNs don't deal with it correctly. And my recommendation is there is a kind of proxy server, or like a serverless proxy thing, that I wrote, that runs on Cloudflare Workers or on Docker, that lets you proxy those range requests into a normal URL, and then that is like a hundred percent CDN compatible. So I would say a lot of the big commercial installations of this thing use that, because it makes more practical sense. It's also faster. But the idea is that this solution sort of scales up and scales down.
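The range requests being discussed here are ordinary HTTP `Range` headers; the operation a server performs to answer one is just a seek-and-read. A minimal sketch (the URL below is a placeholder, and the HTTP request is only constructed, not sent):

```python
import urllib.request

def read_range(path, offset, length):
    """Read `length` bytes starting at `offset` from a local file -- what a
    static host does to answer 'Range: bytes=offset-(offset+length-1)'
    without ever loading the whole archive into memory."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Over HTTP, the same slice of a hosted archive would be requested like this
# (placeholder URL; building the request only, not sending it):
req = urllib.request.Request(
    "https://example.com/map.pmtiles",
    headers={"Range": "bytes=1024-2047"},  # bytes 1024..2047 inclusive
)
```

CDNs that mishandle this pattern are usually failing to cache or forward partial (206) responses, which is what the proxy mentioned above works around.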
If you wanted to host just your city in like a 10 megabyte file, well, you can just put that into GitHub Pages and you don't have to worry about it. If you want to have a global map for your website that serves a ton of traffic, then you probably want a little bit more sophisticated of a solution. It still does not require you to run a Linux server, but it might require you to use Lambda, or Lambda in conjunction with a CDN. [00:34:09] Jeremy: Yeah. And that sort of ties into what you were saying at the beginning, where if you can host on something like Cloudflare Workers or Lambda, there's less time you have to spend keeping these things running. [00:34:26] Brandon: Yeah, exactly. And I think also the Lambda or Cloudflare Workers solution is not perfect. It's not as perfect as S3 or as just static files, but in my experience, it still is better at building something that lasts on the time span of years than being like, I have a server that is on this Ubuntu version, and in four years there's all these security patches that are not being applied. So it's still sort of serverless, although not totally vendor neutral like S3. Customizing the map [00:35:03] Jeremy: We've mostly been talking about how you host the map itself, but for someone who's not familiar with these kinds of tools, how would they be customizing the map? [00:35:15] Brandon: For customizing the map, there is front-end style customization and there's also data customization. So for the front end, if you wanted to change the water from one shade of blue to another, there is a TypeScript API where you can customize it almost like a text editor color scheme. So if you're able to name a bunch of colors, well, you can customize the map in that way, and you can change the fonts. And that's all done using MapLibre GL, using a TypeScript API on top of that. For customizing the data: all the pipeline to generate this data from OpenStreetMap is open source.
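On the front-end side, that kind of water-color customization boils down to editing paint properties in a MapLibre style document. A rough sketch of the idea in Python; the layer ids and colors here are invented for illustration, not the real Protomaps basemap style, and in practice this would go through the TypeScript API rather than raw dict edits:

```python
# A toy fragment of a MapLibre-style document. Real styles have many more
# layers and properties; these ids and hex colors are illustrative only.
style = {
    "version": 8,
    "layers": [
        {"id": "water", "type": "fill", "paint": {"fill-color": "#80b1d3"}},
        {"id": "land",  "type": "fill", "paint": {"fill-color": "#f2efe9"}},
    ],
}

def set_fill_color(style, layer_id, color):
    """Swap one layer's fill color -- the 'color scheme' style of theming."""
    for layer in style["layers"]:
        if layer["id"] == layer_id and layer["type"] == "fill":
            layer["paint"]["fill-color"] = color
    return style

set_fill_color(style, "water", "#004080")  # darker water, nothing else changes
```

The point is that theming touches only the style document; the tile data itself is untouched.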
There is a Java program using a library called Planetiler, which is awesome; it's this super fast multi-core way of building map tiles. And right now there aren't really great hooks to customize what data goes into that, but that's something that I do wanna work on. And finally, because the data comes from OpenStreetMap, if you notice data that's missing, or you wanted to correct data in OSM, then you can go to osm.org. You can get involved in contributing the data to OSM, and the Protomaps build is daily. So if you make a change, then within 24 hours you should see the new base map have that change. And of course, for OSM, your improvements would go into every OSM-based project that is ingesting that data. So it's not a Protomaps-specific thing. It's this big shared data source, almost like Wikipedia. OpenStreetMap is a dataset and not a map [00:37:01] Jeremy: I think you were involved with OpenStreetMap to some extent. Can you speak a little bit to that, for people who aren't familiar, what OpenStreetMap is? [00:37:11] Brandon: Right. So I've been using OSM as sort of a tools developer for over a decade now. And one of the number one questions I get from developers about what Protomaps is, is why wouldn't I just use OpenStreetMap? What's the distinction between Protomaps and OpenStreetMap? And it's sort of this funny thing, because even though OSM has map in the name, it's not really a map, in that it's mostly a data set. It does have a map that you can see, that you can pan around, when you go to the website, but the way that thing they show you on the website is built is not really that easily reproducible. It involves a lot of C++ software you have to run. But OpenStreetMap itself, the heart of it, is almost like a big XML file that has all the data in the map, globally. And it has tagged features, for example. So you can go in and edit that. It has a web front end to change the data.
It does not directly translate into making a map, actually. Protomaps decides what shows at each zoom level [00:38:24] Brandon: So a lot of the pipeline, that Java program I mentioned for building this basemap for Protomaps, is doing things like: you have to choose what data you show when you zoom out. You can't show all the data. For example, when you're zoomed out and you're looking at all of a state like Colorado, you don't see all the Chipotles when you're zoomed all the way out. That'd be weird, right? So you have to make some sort of decision in logic that says this data only shows up at this zoom level. And that's really the challenge in optimizing the size of that for the Protomaps map project. [00:39:03] Jeremy: Oh, so those decisions of what to show at different zoom levels, those are decisions made by you when you're creating the PMTiles file with Protomaps. [00:39:14] Brandon: Exactly. It's part of the base map's build pipeline. And those are honestly very subjective decisions. Who really decides, when you're zoomed out, should this hospital show up, or should this museum show up? Nowadays in Google, I think it shows you ads. Like if someone pays for their car repair shop to show up when you're zoomed out like that, that gets surfaced. But because there is no advertising auction in Protomaps, that doesn't happen, obviously. So we have to sort of make some reasonable choice. A lot of that right now in Protomaps actually comes from another open source project called Mapzen. So Mapzen was a company that went outta business a couple years ago. They did a lot of this work in designing which data shows up at which zoom level and open sourced it. And then when they shut down, they transferred that code into the Linux Foundation. So it's this totally open source project that, again, sort of like Mapbox GL, has this awesome legacy, in that this company funded it for years for smart people to work on it, and now it's just a free thing you can use.
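The "this data only shows up at this zoom level" rule can be sketched as a simple lookup. The feature kinds and zoom thresholds below are invented for illustration; the actual Mapzen/Protomaps rules are far more elaborate:

```python
# Hypothetical minimum zoom per feature kind (made-up values):
# a kind is drawn only once the viewer has zoomed in at least this far.
MIN_ZOOM = {
    "country": 0,
    "city": 4,
    "major_road": 7,
    "hospital": 13,
    "restaurant": 15,
}

def features_at_zoom(features, zoom, default_min_zoom=15):
    """Keep only the features that should be drawn at this zoom level;
    unknown kinds fall back to appearing only when fully zoomed in."""
    return [f for f in features
            if zoom >= MIN_ZOOM.get(f["kind"], default_min_zoom)]

features = [
    {"kind": "city", "name": "Denver"},
    {"kind": "restaurant", "name": "Chipotle"},
]
print([f["name"] for f in features_at_zoom(features, 6)])  # ['Denver']
```

Applying a rule like this per zoom level during the tile build is what keeps the zoomed-out tiles small: the restaurant simply never makes it into the low-zoom tiles.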
So the logic in Protomaps is really based on Mapzen. [00:40:33] Jeremy: And so the visualization of all this... I think I understand what you mean when people say, oh, why not use OpenStreetMap? Because it's not really clear; it's hard to tell, is this the tool that's visualizing the data? Is it the data itself? So in the case of using Protomaps, it sounds like Protomaps itself has all of the data from OpenStreetMap, and then it has made all the decisions for you in terms of what to show at different zoom levels and what things to have on the map at all. And then finally, you have to have a separate UI layer, and in this case, it sounds like the one that you recommend is the MapLibre library. [00:41:18] Brandon: Yeah, that's exactly right. Protomaps has a portion, or a subset, of OSM data. It doesn't have all of it, just because there's too much. Like, there's data in there where people have mapped out different bushes, and I don't include that in Protomaps. If you wanted to go in and edit the Java code to add that, you can. But really, what Protomaps is positioned as is sort of a solution for developers that want to use OSM data to make a map on their app or their website. Because OpenStreetMap itself is mostly a data set, it does not really go all the way to having an end-to-end solution. Financials and the idea of a project being complete [00:41:59] Jeremy: So I think it's great that somebody who wants to make a map has these tools available, whether it's from what was originally built by Mapbox, what's built by OpenStreetMap now, or the work you're doing with Protomaps. But I wonder... one of the things that I talked about with Tom was, he was saying he was trying to build this mapping business, and based on the financials of what was coming in, he was stressed, right? He was struggling a bit. And I wonder for you, you've been working on this open source project for five years.
Do you have similar stressors, or do you feel like, I could keep going how things are now and I feel comfortable? [00:42:46] Brandon: So I wouldn't say I'm a hundred percent in one bucket or the other. I'm still seeing it play out. One thing that I really respect in a lot of open source projects, which I'm not saying I'm gonna do for Protomaps, is the idea that a project is finished. I think that is amazing. If a software project can just be done, it's sort of like a painting or a novel: once you finish the last page, have it seen by the editor, and send it off to the press, you're done with the book. And I think one of the pains of software is so few of us can actually do that. And I don't know, obviously people will say, oh, the map is never finished. That's more true of OSM, but I think for Protomaps, one thing I'm thinking about is how to limit the scope to something that's quite narrow, to where we could be feature complete on the core things in the near-term timeframe. That means that it does not address a lot of things that people want. Like search: if you go to Google Maps and you search for a restaurant, you will get some hits. That's like a geocoding issue, and I've already decided that's totally outta scope for Protomaps. So, in terms of trying to think about the future of this, I'm mostly looking for ways to cut scope if possible. There are some things, like better tooling around being able to work with PMTiles, that are on the roadmap. But for me, I am still enjoying working on the project. It's definitely growing. So I can see on NPM downloads, I can see the growth curve of people using it, and that's really cool. So I like hearing about when people are using it for cool projects. So it seems to still be going okay for now. [00:44:44] Jeremy: Yeah, that's an interesting perspective, about how you were talking about projects being done. Because I think when people look at GitHub projects and they go like, oh, the last commit was X months ago.
They go, oh, well, this is dead, right? But maybe that's the wrong framing. Maybe you can get a project to a point where it doesn't need to be updated. [00:45:07] Brandon: Exactly, yeah. I used to do a lot of C++ programming, and the best part is when you see some LAPACK matrix math library from like 1995 that still works perfectly in C++, and you're like, this is awesome. This is the one I have to use. But if you're trying to use some React component library and it hasn't been updated in a year, you're like, oh, that's a problem. So again, I think there's some middle ground between those that I'm trying to find. I do like that Protomaps is quite dependency-light in terms of the number of hard dependencies I have in software. But I do still feel like there is a lot of work to be done in terms of project scope that needs to have stuff added. You mostly only hear about problems instead of people's wins [00:45:54] Jeremy: Having run it for this long, do you have any thoughts on running an open source project in general? On dealing with issues or managing what to work on, things like that? [00:46:07] Brandon: Yeah, so I have a lot. I think one thing people point out a lot is that, especially because I don't have a direct relationship with a lot of the people using it, a lot of times I don't even know that they're using it. Someone sent me a message saying, hey, have you seen flickr.com, the photo site? And I'm like, no. And I went to flickr.com/map, and it has Protomaps on it. And I'm like, I had no idea. But that's cool; if they're able to use Protomaps for this giant photo sharing site, that's awesome. But that also means I don't really hear about when people use it successfully, because you just don't know. I guess they NPM installed it and it works perfectly, and you never hear about it. You only hear about people's negative experiences.
You only hear about people that come and open GitHub issues saying, this is totally broken, and why doesn't this thing exist? And I'm like, well, it's because there's an infinite amount of things that I want to do, but I have a finite amount of time, and I just haven't gotten to that yet. And that's honestly a lot of it, and people are like, when is this thing gonna be done? So that's honestly part of why I don't have a public roadmap: because I want to avoid that sort of bickering about it. I would say that's one of my biggest frustrations with running an open source project, how it's self-selected to only hear the negative experiences with it. Be careful what PRs you accept [00:47:32] Brandon: 'Cause you don't hear about those times where it works. I'd say another thing is it's changed my perspective on contributing to open source, because I think when I was younger, before I had become a maintainer, I would open a pull request on a project unprompted that has a hundred lines, and I'd be like, hey, just merge this thing. But I didn't realize when I was younger: well, if I just merge it and I disappear, then the maintainer is stuck with what I did forever. You know, if I add some feature, then that person that maintains the project has to support it indefinitely. And I think that's very asymmetrical, and it's changed my perspective a lot on accepting open source contributions. I wanna have it be open to anyone to contribute. But there is some amount of back and forth, where it's almost like the default answer for should I accept a PR is no by default, because you're the one maintaining it. And do you understand the shape of that solution completely, to where you're going to support it for years? Because the person that's contributing it is not bound to those same obligations that you are.
And I think that's also one of the things where I have a lot of trepidation around open source is I used to think of it as a lot more bazaar-like in terms of anyone can just throw their thing in. But then that creates a lot of problems for the people who are expected out of social obligation to continue this thing indefinitely. [00:49:23] Jeremy: Yeah, I can totally see why that causes burnout with a lot of open source maintainers, because you probably to some extent maybe even feel some guilt right? You're like, well, somebody took the time to make this. But then like you said you have to spend a lot of time trying to figure out is this something I wanna maintain long term? And one wrong move and it's like, well, it's in here now. [00:49:53] Brandon: Exactly. To me, I think that is a very common failure mode for open source projects is they're too liberal in the things they accept. And that's a lot of why I was talking about how that choice of what features show up on the map was inherited from the MapZen projects. If I didn't have that then somebody could come in and say hey, you know, I want to show power lines on the map. And they open a PR for power lines and now everybody who's using Protomaps when they're like zoomed out they see power lines are like I didn't want that. So I think that's part of why a lot of open source projects eventually evolve into a plugin system is because there is this demand as the project grows for more and more features. But there is a limit in the maintainers. It's like the demand for features is exponential while the maintainer amount of time and effort is linear. Plugin systems might reduce need for PRs [00:50:56] Brandon: So maybe the solution to smash that exponential down to quadratic maybe is to add a plugin system. But I think that is one of the biggest tensions that only became obvious to me after working on this for a couple of years. [00:51:14] Jeremy: Is that something you're considering doing now? 
[00:51:18] Brandon: Is the plugin system? Yeah. I think for the data customization, I eventually want to have some sort of programmatic API where you could declare a config file that says, I want ski routes. It totally makes sense. The power lines example is maybe a little bit obscure, but take for example a skiing app: you want to be able to show ski slopes when you're zoomed out, and you're not gonna be able to get that from Mapbox or from Google, because they have a one-size-fits-all map that's not specialized to skiing or to golfing or to outdoors. But in theory, you could do this with Protomaps if you changed the Java code to show data at different zoom levels. And that is to me what makes the most sense for a plugin system, and it also makes the most product sense, because it enables a lot of things you cannot do with the one-size-fits-all map. [00:52:20] Jeremy: It might also increase the complexity of the implementation though, right? [00:52:25] Brandon: Yeah, exactly. That's really where a lot of the terrifying thoughts come in, which is: once you create this config file surface area, well, what does that look like? Is that JSON? Is that TOML? Everything eventually evolves into some scripting language, right? Where you have logic inside of your templates. And I honestly do not really know what that looks like right now. That feels like something on the medium-term roadmap. [00:52:58] Jeremy: Yeah, and then in terms of bug reports or issues, now it's not just your code; it's this exponential combination of whatever people put into these config files. [00:53:09] Brandon: Exactly. Yeah. So again, I really respect the projects that have done plugins well. I'm trying to think of some; I think Obsidian has plugins, for example. And that seems to be one of the few solutions to try and satisfy the infinite desire for features with the limited amount of maintainer time.
Time split between code vs triage vs talking to users [00:53:36] Jeremy: How would you say your time is split between working on the code versus issue and PR triage? [00:53:43] Brandon: Oh, it varies, really. I think working on the code is a minority of it. Something that I actually enjoy is talking to people, talking to users, getting feedback on it. I go to quite a few conferences to talk to developers or people that are interested, and figure out how to refine the message, how to make it clearer to people what this is for. And I would say maybe a plurality of my time is spent dealing with non-technical things that are neither code nor GitHub issues. One thing I've been trying to do recently is talk to people that are not really in the mapping space. For example, people that work for newspapers: a lot of them are front-end developers, and if you ask them to run a Linux server, they're like, I have no idea. But that really is one of the best target audiences for Protomaps. So I'd say a lot of the reality of running an open source project is a lot like a business: it has all the same challenges as a business, in terms of you have to figure out what is the thing you're offering. You have to deal with people using it. You have to deal with feedback; you have to deal with managing emails and stuff. I don't think the payoff is anywhere near what running a business or a startup that's backed by VC money would be, but it's definitely not the case that if you just want to code, you should start an open source project, because I think a lot of the work for an open source project has nothing to do with just writing the code. In my opinion, as someone who has done a VC-backed business before, it is a lot more similar to running a tech company than just putting some code on GitHub.
Running a startup vs open source project [00:55:43] Jeremy: Well, since you've done both, at a high level, what did you like about running the company versus maintaining the open source project? [00:55:52] Brandon: So I have done some venture capital accelerator programs before, and I think there is an element of hype and energy that you get from that that is self-perpetuating. Your co-founder is gung-ho on, like, yeah, we're gonna do this thing. And your investors are like, you guys are geniuses. You guys are gonna make a killing doing this thing. And the way it's framed makes it obvious to everyone: there's a much more traditional set of motivations behind it that people understand, while that's definitely not the case for running an open source project. Sometimes you just wake up and you're like, what the hell is this thing for? It is this thing you spend a lot of time on. You don't even know who's using it. The people that use it and make a bunch of money off of it know nothing about it. And, you know, it's just like, cool. And then you only hear from people that are complaining about it. And I think that's honestly discouraging, compared to the clearer energy and clearer motivation and vision behind how most people think about a company. But what I like about the open source project is just the lack of those constraints, you know? With a company, you have a mandate that you need to have this many customers that are paying by this amount of time. There's that sort of pressure on delivering a business result, instead of just making something that you're proud of that's simple to use and has an elegant design. I think that's really a difference in motivation as well. Having control [00:57:50] Jeremy: Do you feel like you have more control? Like you mentioned how you've decided, I'm not gonna make a public roadmap. I'm the sole developer. I get to decide what goes in, what doesn't.
Do you feel like you have more control in your current position than you did running the startup? [00:58:10] Brandon: Definitely for sure. Like that agency is what I value the most. It is possible to go too far. Like, so I'm very wary of the BDFL title, which I think is how a lot of open source projects succeed. But I think there is some element of for a project to succeed there has to be somebody that makes those decisions. Sometimes those decisions will be wrong and then hopefully they can be rectified. But I think going back to what I was talking about with scope, I think the overall vision and the scope of the project is something that I am very opinionated about in that it should do these things. It shouldn't do these things. It should be easy to use for this audience. Is it gonna be appealing to this other audience? I don't know. And I think that is really one of the most important parts of that leadership role, is having the power to decide we're doing this, we're not doing this. I would hope other developers would be able to get on board if they're able to make good use of the project, if they use it for their company, if they use it for their business, if they just think the project is cool. So there are other contributors at this point and I want to get more involved. But I think being able to make those decisions to what I believe is going to be the best project is something that is very special about open source, that isn't necessarily true about running like a SaaS business. [00:59:50] Jeremy: I think that's a good spot to end it on, so if people want to learn more about Protomaps or they wanna see what you're up to, where should they head? [01:00:00] Brandon: So you can go to Protomaps.com, GitHub, or you can find me or Protomaps on bluesky or Mastodon. [01:00:09] Jeremy: All right, Brandon, thank you so much for chatting today. [01:00:12] Brandon: Great. Thank you very much.
Elliot Cohen, cofounder of PillPack, joins Julie Yoo, a16z Bio + Health general partner. Together, they discuss Elliot's experience designing and building a consumer-first pharmacy alongside TJ Parker.Elliot's journey with PillPack began when he noticed his father struggling with a mail-order pharmacy that couldn't get the simplest thing right: the correct version of a pill.Elliot shares the nitty-gritty of building a consumer-centric business in healthcare, including how they had to adapt CSV files and cake boxes to get their initial product off the ground, and how they balanced the needs of a healthcare system with the desires of their customers. Learn more about a16z Bio+HealthLearn more about & Subscribe to Raising HealthFind a16z Bio+Health on LinkedInFind a16z Bio+Health on X
School calendar creation can be much easier with this method. Start with a spreadsheet, export to CSV, and import into Google Calendar. For more, visit the blog post: https://frankbuck.org/school-calendar-creation/
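The spreadsheet-to-CSV step can also be scripted. Below is a minimal sketch (not from the post; the event names and dates are invented) that writes rows under the column headers Google Calendar's CSV import recognizes:

```python
import csv
import io

# Invented example events: (subject, start date in MM/DD/YYYY).
events = [
    ("First Day of School", "08/11/2025"),
    ("Fall Break", "10/13/2025"),
]

buf = io.StringIO()
writer = csv.writer(buf)
# "Subject" and "Start Date" are required by Google Calendar's importer;
# "All Day Event" marks each entry as a full-day event.
writer.writerow(["Subject", "Start Date", "All Day Event"])
for subject, start_date in events:
    writer.writerow([subject, start_date, "True"])

print(buf.getvalue())
```

Saving this output as a `.csv` file and using Google Calendar's "Import" option should create one all-day event per row.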
Theodore Morley wonders why tech workers so frequently point our wanderlust toward hands-on trades, Eduardo Bouças explains why he's lost confidence in Vercel's handling of Next.js, "xan" is a command line tool that can be used to process CSV files directly from the shell, Pawel Brodzinski takes us back to Kanban's roots & Sergey Tselovalnikov weighs in on vibe coding.
In this episode, Tibor Zechmeister will challenge us on what we would answer if the Notified Body asks whether our software is validated. CSV, or Computer System Validation, will be one of the major topics, so stay tuned. Who is Tibor Zechmeister? Passionate about Creating Maximum Efficiency in MedTech Regulatory | Head of Regulatory and Quality at Flinn.ai | Notified Body Auditor | MedTech Entrepreneur | Software Solutions for Regulatory Automation with AI Who is Monir El Azzouzi? Monir El Azzouzi is the founder and CEO of Easy Medical Device, a consulting firm that supports medical device manufacturers with Quality and Regulatory Affairs activities all over the world. Monir can help you create your Quality Management System and Technical Documentation, and he can also take care of your Clinical Evaluation and Clinical Investigation through his team or partners. Easy Medical Device can also become your Authorized Representative and Independent Importer service provider for the EU, UK, and Switzerland. Monir has around 16 years of experience within the medical device industry, working for small businesses as well as big corporate companies. He has supported around 100 clients in remaining compliant on the market. His passion for the medical device field pushed him to create educational content such as a blog, podcast, YouTube videos, and LinkedIn Lives, where he invites guests who share educational information with his audience. Visit easymedicaldevice.com to know more. Link Tibor Zechmeister LinkedIn: https://www.linkedin.com/in/tibor-zechmeister/ Flinn.ai Website: https://www.flinn.ai/ ISO 13485:2016 https://www.iso.org/standard/59752.html ISO/TR 80002-2:2017: https://www.iso.org/standard/60044.html Social Media to follow Monir El Azzouzi Linkedin: https://linkedin.com/in/melazzouzi Twitter: https://twitter.com/elazzouzim Pinterest: https://www.pinterest.com/easymedicaldevice Instagram: https://www.instagram.com/easymedicaldevice
What if the key to mortgage success was simply having the right conversations with the right people? In this episode of the Local Marketing Lab, Steve Kyles, partner at Mortgage Marketing Animals and host of the Loan Officer Leadership Podcast, reveals the data-driven approach that's transforming mortgage professionals' results. Steve breaks down exactly why most loan officers struggle despite working hard and shares the precise formula for consistent mortgage success in today's market.Topics discussed in this episode: 1️⃣ How to generate 8+ deals monthly through consistent outbound contact2️⃣ Identify truly qualified real estate agent partners3️⃣ Powerful scripts for asking for business that increase referral rates4️⃣ Three-step priority system for creating a sustainable work routine5️⃣ Using modern technology to boost conversion rates by 15%ResourcesConnect with Steve Kyles on LinkedIn.Learn more about Success Mortgage Partners.Listen to an episode of the Loan Officer Leadership Podcast.Check out Mortgage Marketing Animals.Other shout-outsCarl White — Partner at Mortgage Marketing AnimalsCovve — Export phone numbers from your phone as a CSV file
✏️ Subscribe https://youtu.be/A7foWqzrf8Q Bricks 2.0 Alpha: Automation, FourSquare and Obsidian, TablePress, and News from the WordPress World. Welcome to a new episode of Negocios y WordPress, where today we look at what's new in Bricks 2.0 Alpha, automation with tools like FourSquare and Obsidian, and improvements to TablePress. This article is aimed at entrepreneurs and professionals who want to optimize their digital projects and understand the technologies that can help maximize efficiency. If you want to discover the latest trends in automation and content management for WordPress, you're in the right place! 1. Bricks 2.0 Alpha: New Features and Improvements 1.1 Element Manager and Optimization The Bricks 2.0 Alpha update introduces an element manager that lets developers disable features they don't use. This not only improves site performance but also streamlines the workflow. Being able to choose which elements load in each project is a significant step forward for anyone doing WordPress web development. 1.2 Visual CSS Grid Builder Another innovation is the Visual CSS Grid Builder, a tool that makes it easier to build complex layouts without advanced programming. It lets designers and developers visualize the grid structure, making responsive design more accessible. 1.3 Webhooks for Forms The addition of webhooks for forms is a big step toward automation in WordPress. You can now send form data to external platforms such as Zapier, connecting different applications and streamlining your workflows. 2. Automation with FourSquare and Obsidian 2.1 Integrating FourSquare with Obsidian Integrating FourSquare with Obsidian lets users automatically log check-ins in their journals. Thanks to this automation, every time you check in, your data is saved automatically in Obsidian, making it easy to track experiences and activities. This kind of automation not only saves time but also builds an organized archive of your movements and the places you've visited, which can be valuable for later analysis or simply for remembering important moments. 2.2 A Personal Perspective Automating these processes shows how technology can be used to personalize our interactions. It's not just about storing data; it's about creating a journal that reflects our daily preferences and experiences. 3. Improvements with TablePress 3.1 Efficient Table Management TablePress is one of the most widely used plugins for managing tables in WordPress. Recent optimizations simplify creating and editing tables within your projects. Using shortcodes, you can insert tables into any post or page without hassle. The ability to import and export tables in different formats (CSV, HTML, etc.) also makes data migration easier. 3.2 Integration with Custom Post Types You can also combine TablePress with ACF (Advanced Custom Fields) to create dynamic tables that fit your needs. By associating a custom post type with TablePress, data can be presented in a more integrated, cohesive way in your projects. Conclusion We've explored some of the new features in Bricks 2.0 Alpha, how automation with FourSquare and Obsidian can improve our workflow, and the advantages of using TablePress to manage tables in WordPress. Combining these tools not only makes project management easier but also streamlines the daily work of developers and entrepreneurs.
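To illustrate the check-in journaling idea, here is a minimal Python sketch. It is not the actual FourSquare integration: since Obsidian notes are plain Markdown files on disk, the sketch simply appends a check-in line to a daily note. The vault layout, function name, and check-in data are all invented for illustration.

```python
from datetime import date
from pathlib import Path
import tempfile

def log_checkin(vault: Path, venue: str, city: str) -> Path:
    """Append a check-in bullet to today's daily note in a Markdown vault."""
    note = vault / f"{date.today().isoformat()}.md"
    with note.open("a", encoding="utf-8") as f:
        # "a" mode creates the file if it doesn't exist; add a heading first.
        if note.stat().st_size == 0:
            f.write(f"# {date.today().isoformat()}\n\n")
        f.write(f"- 📍 Checked in at **{venue}**, {city}\n")
    return note

# Usage with a throwaway vault directory:
vault = Path(tempfile.mkdtemp())
p = log_checkin(vault, "Café Central", "Madrid")
print(p.read_text(encoding="utf-8"))
```

A real automation would call this from a webhook handler (e.g. triggered by Zapier) with the venue data FourSquare provides.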
We'd love to hear from you. If you'd like to share your experience with any of these tools, or you have questions about automation in WordPress, leave us a comment! Don't forget to subscribe to our newsletter for more content on WordPress and digital business. FAQs What is Bricks 2.0 Alpha? A new version of a WordPress site builder that includes innovative tools for optimizing web design and development. How can I automate my check-ins into Obsidian? By using FourSquare and setting up an automation with integration tools such as Zapier. Is TablePress a good plugin for tables in WordPress? Yes, it is highly rated for its ease of use and its advanced features for managing and displaying tables. Links Elías's hour packs Elías's "Now" page Elías's YouTube channel
In today's newspapers, the speeches from the weekend's two party congresses, that of the CSV and that of déi gréng, are analyzed, and it is noted that the rearmament hysteria has arrived in Luxembourg as well.
In this episode I share my exact framework for identifying profitable AI SaaS business opportunities by focusing on manual workflows that could be automated. We focus on export buttons and other manual processes in enterprise software as indicators of workflow breakdowns that AI could solve. The framework breaks down to solving niche problems, charging immediately for solutions, and focusing on quantifiable ROI.Timestamps:00:00 - Intro02:50 - The Export Button Theory of AI Opportunity04:03 - Step 1: Identifying Repetitive Pain Points08:31 - Step 2: Adding Intelligence to Manual Processes10:53 - Step 3: Identifying Data Silos that Need Bridging12:47 - Step 4: Finding Missing Connections Between Tools14:12 - Step 5: Start Small, Grow Naturally16:55 - Exploring Additional Manual Buttons for Startup Ideas19:03 - The QuickBooks Export Gold Mine20:43 - Your First 30 Days: Getting Started with Your AI SaaS Startup24:02 - Final Thoughts on AI Startup OpportunitiesKey Points:• The "Export Button Theory" - Every export button in software represents a business opportunity worth $10,000-30,000/month• Five-step framework for finding AI SaaS opportunities• Manual buttons in software (like "generate report," "schedule meeting," "upload CSV") represent AI automation opportunities1) The Export Button Theory of AI Opportunity Every time a user clicks "export" in software, they're signaling:• A workflow breakdown• Manual labor that could be automated• A potential $10-30K/month feature2) The 5-step framework for finding these opportunities:Step 1: Identify repetitive pain points Watch how people use enterprise software daily:• Exporting data to reformat it (Salesforce → Excel → PowerPoint)• Copying between tools (Jira → Slack)• Building the same reports weekly• Maintaining spreadsheets manually3) Step 2: Add intelligence to manual processes Every manual task is an LLM opportunity:• Turn Stripe exports into AI-powered revenue analysis ($50-100K MRR)• Convert CRM data into AI-formatted 
presentations ($80-120K MRR)• Generate sentiment trends from support tickets ($30-70K MRR)4) Step 3: Bridge data silos Look for phrases like:"I need to pull this data every week""I wish I could see this alongside that""We keep this in a separate spreadsheet"5) Step 4: Find missing connections between tools Watch for "I wish these two things worked together":• HR system + Payroll → AI opportunity: automatic sync with anomaly detection• CRM + Marketing automation → AI opportunity: bi-directional sync with AI prioritization6) Step 5: Start small, grow naturally The MOST successful AI SaaS businesses:• Pick a specific niche big players ignore• Focus on ONE painful workflow• Make it 10x better with AI• Let AI suggest next actions• Charge immediately (if solving real pain, people will pay day one) 7) Beyond the export button, look for these manual buttons in software:• "Generate Report" → AI opportunity: automatic insight generation ($2.5B market)• "Schedule Meeting" → AI opportunity: context-aware scheduling ($1.8B market)• "Upload CSV" → AI opportunity: intelligent data processing ($3.2B market)8) The QuickBooks goldmine • 250M financial reports exported annually• Each export = 45-90 mins of manual work• Value of time: $75-150 per export• Total addressable market: $12-18B annuallyThis is just ONE platform with massive opportunity!9) Your first 30 days roadmap:Days 1-5: Select software with high export volume, research communitiesDays 6-10: Interview power users about export habitsDays 11-20: Build minimal prototype (using V0, Lovable, Bolt, etc.)Days 21-30: Get 3-5 PAYING beta usersNotable Quotes:"Every export button in software represents a business opportunity. When a user clicks export, what are they saying to us? They're basically saying this software doesn't do what I need to, so I'm taking my data elsewhere to do manual work.""The best AI opportunities aren't where everyone is looking. 
They're hiding in these mundane, repetitive tasks that knowledge workers are doing every single day."Want more free ideas? I collect the best ideas from the pod and give them to you for free in a database. Most of them cost $0 to start (my fav)Get access: https://www.gregisenberg.com/30startupideasLCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. We turn 'what if' into reality with AI, apps, and next-gen products https://latecheckout.agency/BoringAds — ads agency that will build you profitable ad campaigns http://boringads.com/BoringMarketing — SEO agency and tools to get your organic customers http://boringmarketing.com/Startup Empire - a membership for builders who want to build cash-flowing businesses https://www.startupempire.coFIND ME ON SOCIALX/Twitter: https://twitter.com/gregisenbergInstagram: https://instagram.com/gregisenberg/LinkedIn: https://www.linkedin.com/in/gisenberg/
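To make the "export button" idea from the framework above concrete, here is a minimal Python sketch. The CSV columns and figures are invented stand-ins for the kind of file a user would export from a payments dashboard, and the AI layer the episode describes is omitted; this is only the deterministic roll-up that users otherwise rebuild by hand in a spreadsheet after every export.

```python
import csv
import io
from collections import defaultdict

# Invented sample of a payments-dashboard export.
raw = """\
date,customer,plan,amount
2025-01-03,acme,pro,99
2025-01-04,globex,basic,29
2025-02-01,acme,pro,99
"""

# Total revenue per month: the summary a user would build manually.
revenue_by_month = defaultdict(int)
for row in csv.DictReader(io.StringIO(raw)):
    month = row["date"][:7]  # YYYY-MM
    revenue_by_month[month] += int(row["amount"])

for month, total in sorted(revenue_by_month.items()):
    print(f"{month}: ${total}")
```

A product built on this pattern would run the aggregation automatically on each new export and feed the result to an LLM for narrative insights, removing the manual step entirely.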
-- How can Heat Map Analytics reveal the most engaging areas within a Matterport tour? -- What insights can Tag Analytics provide about how visitors interact with Tags? -- How can these new analytics tools help Matterport Service Providers add more value for their clients? Stay tuned ... Join us on WGAN-TV Live at 5 at 5 pm ET on Thursday, 27 February 2025 for: ✔ WGAN-TV: Introduction to Matterport + CAPTUR3D Heat Map Analytics and Tag Analytics Our guest: ✔ CAPTUR3D Product Owner Alex Hitchcock (for PHORIA) | @AlexHitchcock In this WGAN-TV Live at 5 show, we'll explore CAPTUR3D's latest analytics updates, including Heat Map Analytics and Tag Analytics, which provide powerful insights into user interactions within Matterport digital twins. What You'll Learn NEW: Heat Map Analytics – Identify how visitors engage with each scan point and room in Matterport tours: ✔ Most-viewed rooms and scan points ✔ Time spent per room and scan ✔ Average scans/visits per session ✔ Percentage-based comparisons across rooms NEW: Tag Analytics – Gain insights into how users interact with Tags: ✔ Clicks and hovers on tags ✔ Content engagement tracking ✔ CSV data export for analysis & reporting Bonus: CSV Exports ✔ Easily download a detailed report of engagement metrics for in-depth analysis and team collaboration. Why This Matters For Matterport Service Providers, real estate professionals, and digital twin creators, the CAPTUR3D analytics tools provide data-driven insights to: ✔ Optimize tour layouts by understanding visitor engagement patterns ✔ Improve tag effectiveness by tracking content clicks and hovers ✔ Deliver more value to clients with quantifiable engagement reports Questions I'll Ask Alex ✔ How do Heat Map Analytics enhance the way we analyze Matterport tour engagement? ✔ What trends have emerged from early CAPTUR3D Analytics data? ✔ How do Tag Analytics help businesses optimize their Mattertags & interactions?
✔ What industries benefit most from these advanced analytics tools? ✔ What's next for CAPTUR3D AI-driven insights and Matterport integrations? Exclusive Offer for WGAN Members Get $90 in free CAPTUR3D credits for high-quality floor plans & photo retouching via this special WGAN link! Must be used within first 30 days. Other questions that I should ask Alex during WGAN-TV Live at 5? Best, Dan About CAPTUR3D is an all-in-one 3D Virtual Tour Software & Management Platform. This powerful CMS enhances your Matterport Virtual Tours and streamlines your workflow. Get everything from Floor Plans to Virtual Staging and easily create eye-catching content for residential, commercial and cultural spaces. ✔ CAPTUR3D Website ✔ Matterport website
In this episode, Lois Houston and Nikita Abraham chat with MySQL expert Perside Foster on the importance of keeping MySQL performing at its best. They discuss the essential tools for monitoring MySQL, tackling slow queries, and boosting overall performance. They also explore HeatWave, the powerful real-time analytics engine that brings machine learning and cross-cloud flexibility into MySQL. MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last two episodes, we spoke about MySQL backups, exploring their critical role in data recovery, error correction, data migration, and more. Lois: Today, we're switching gears to talk about monitoring MySQL instances. We'll also explore the features and benefits of HeatWave with Perside Foster, a MySQL Principal Solution Engineer at Oracle. 01:02 Nikita: Hi, Perside! We're thrilled to have you here for one last time this season. So, let's start by discussing the importance of monitoring systems in general, especially when it comes to MySQL. 
Perside: Database administrators face a lot of challenges, and these sometimes appear in the form of questions that a DBA must answer. One of the most basic questions is, why is the database slow? To address this, the next step is to determine which queries are taking the longest. Queries that take a long time might not be correctly indexed. Then we get to some environmental questions. How can we find out if our replicas are out of date? Is lag too much of a problem? Can I restore my last backup? Is the database storage likely to fill up any time soon? Can and should we consider adding more servers and scaling out the system? And when it comes to users and making sure they're behaving correctly, has the database structure changed? And if so, who did it and what did they do? And more generally, what security issues have arisen? How can I see what has happened and how can I fix it? Performance is always at the top of the list of things a DBA worries about. The underlying hardware will always be a factor, but it is one of the things a DBA has the least flexibility to change over the short term. The database structure, choice of data types, and the overall size of the retained data in the active data set can be a problem. 03:01 Nikita: What are some common performance issues that database administrators encounter? Perside: The sort of SQL queries that the application runs can be an issue. 90% of performance problems come from the SQL, index, and schema group. 03:18 Lois: Perside, can you give us a checklist of the things we should monitor? Perside: Make sure your system is working. Monitor performance continually. Make sure replication is working. Check your backups. Keep an eye on disk space and how it grows over time. Check when long-running queries block your application and identify those queries. Protect your database structure from unauthorized changes. 
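As a sketch of the "which queries are taking the longest" question, on a MySQL 8.x server one common starting point is the statement digest summary in the Performance Schema (timer columns are in picoseconds, hence the division below):

```sql
-- Top 10 statement patterns by total latency.
-- SUM_TIMER_WAIT is recorded in picoseconds; divide by 1e12 for seconds.
SELECT DIGEST_TEXT,
       COUNT_STAR                      AS executions,
       ROUND(SUM_TIMER_WAIT / 1e12, 3) AS total_latency_s,
       ROUND(AVG_TIMER_WAIT / 1e12, 6) AS avg_latency_s
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;
```

Sorting by AVG_TIMER_WAIT instead surfaces individually slow statements rather than frequently run ones; both views of the data are useful when triaging a slow database.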
Make sure the operating system itself is working fine and check that nothing unusual happened at that level. Stay aware of security vulnerabilities in your software and operating system and ensure that they are kept updated. Verify that your database memory usage is under control. 04:14 Lois: That's a great list, Perside. Thanks for that. Now, what tools can we use to effectively monitor MySQL? Perside: The slow query log is a simple way to monitor long-running queries. Two variables control which queries get logged. long_query_time: if a query takes longer than this many seconds, it gets logged. And then there's min_examined_row_limit: if a query examines more than this many rows, it gets logged. The slow query log doesn't ordinarily record administrative statements or queries that don't use indexes. Two variables control this, log_slow_admin_statements and log_queries_not_using_indexes. Once you have found a query that takes a long time to run, you can focus on optimizing the application, either by limiting this type of query or by optimizing it in some way. 05:23 Nikita: Perside, what tools can help us optimize slow queries and manage data more efficiently? Perside: To help you with processing the slow query log file, you can use the mysqldumpslow command to summarize slow queries. Another important monitoring feature of MySQL is the Performance Schema. It's a system database that provides statistics on how MySQL executes at a low level. Unlike user databases, the Performance Schema does not persist data to disk. It uses its own storage engine that is flushed every time we start MySQL. And it has almost no interaction with the storage media, making it very fast. This performance information belongs only to the specific instance, so it's not replicated to other systems. Also, the Performance Schema does not grow infinitely large. Instead, each row is recorded in a fixed-size ring buffer. This means that when it's full, it starts again at the beginning. 
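The slow query log variables just described can be set at runtime on a MySQL 8.x server; the thresholds below are illustrative examples, not recommendations:

```sql
-- Enable the slow query log and tune what gets captured.
-- SET PERSIST keeps the settings across server restarts (MySQL 8+).
SET PERSIST slow_query_log = ON;
SET PERSIST long_query_time = 2;            -- log queries slower than 2 seconds
SET PERSIST min_examined_row_limit = 1000;  -- skip queries examining fewer rows
SET PERSIST log_slow_admin_statements = ON; -- include administrative statements
SET PERSIST log_queries_not_using_indexes = ON;
```

The resulting log file (its location is given by the slow_query_log_file variable) can then be summarized with mysqldumpslow to group similar slow queries together.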
The SYS schema is another system database that's strongly related to the Performance Schema. 06:49 Nikita: And how can the SYS schema enhance our monitoring efforts in MySQL? Perside: It contains helper objects like views and stored procedures. They help simplify common monitoring tasks and can help monitor server health and diagnose performance issues. Some of the views provide insights into I/O hotspots, blocking and locking issues, statements that use a lot of resources, and various statistics on your busiest tables and indexes. 07:26 Lois: Ok… can you tell us about some of the features within the broader Oracle ecosystem that enhance our ability to monitor MySQL? Perside: As an Oracle customer, you also have access to Oracle Enterprise Manager. This tool supports a huge range of Oracle products. And for MySQL, it's used to monitor performance, system availability, your replication topology, InnoDB performance characteristics and locking, bad queries caught by the MySQL Enterprise firewall, and events that are raised by the MySQL Enterprise audit. 08:08 Nikita: What would you say are some of the standout features of Oracle Enterprise Manager? Perside: When you use MySQL in OCI, you have access to some really powerful features. HeatWave MySQL enables continuous monitoring of query statistics and performance. The health monitor is part of the MySQL server and gathers raw data about the performance of queries. You can see summaries of this information in the Performance Hub in the OCI Console. For example, you can see average statement latency or the top 100 statements executed. MySQL metrics lets you drill in with your own custom monitoring queries. This works well with existing OCI features that you might already know. The observability and management framework lets you filter by resource type and across several dimensions. And you can configure OCI alarms to be notified when some condition is reached. 09:20 Lois: Perside, could you tell us more about MySQL metrics? 
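For illustration, a few of the sys schema views mentioned here can be queried directly. This is a sketch against a MySQL 8.x server; the exact set of views and columns can vary slightly by version:

```sql
-- Statements that scan entire tables: candidates for new indexes.
SELECT query, exec_count, no_index_used_count
FROM sys.statements_with_full_table_scans
LIMIT 10;

-- Current InnoDB lock waits: who is blocking whom.
SELECT waiting_query, blocking_query, wait_age
FROM sys.innodb_lock_waits;

-- I/O hotspots, ranked by total file latency.
SELECT file, total_latency
FROM sys.io_global_by_file_by_latency
LIMIT 10;
```

Because these are ordinary views, they can be joined, filtered, and fed into whatever monitoring dashboard you already use.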
Perside: MySQL metrics uses the raw performance data gathered by the health monitor to measure the important characteristics of your servers. This includes CPU and storage usage and information relevant to your database connections and queries executed. With MySQL metrics, you can create your own custom monitoring queries that you can use to feed graphs. This gives you an up-to-the-minute representation of all the performance characteristics that you're interested in. You can also create alarms that trigger on some performance condition. And you can be notified through the OCI alarms framework so that you can be aware instantly when you need to deal with some issue. 10:22 Are you keen to stay ahead in today's fast-paced world? We've got your back! Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications. And to make sure you're always in the know, we offer New Features courses that give you an insider's look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started. 10:47 Nikita: Welcome back! Now, let's dive into the key features of HeatWave, the cloud service that integrates with MySQL. Can you tell us what HeatWave is all about? Perside: HeatWave is the cloud service for MySQL. MySQL is the world's leading database for web applications. And with HeatWave, you can run your online transaction processing, or OLTP, apps in the cloud. This gives you all the benefits of cloud deployments while keeping your MySQL-based web applications running just like they would on your own premises. As well as OLTP applications, you need to run reports with business intelligence and analytics dashboards, or online analytical processing (OLAP) reports. The HeatWave cluster provides accelerated analytics queries without requiring extraction or transformation to a separate reporting system. This is achieved with an in-memory analytics accelerator, which is part of the HeatWave service. 
In addition, HeatWave enables you to create machine learning models to embed artificial intelligence right there in the database. The ML accelerator performs classification, regression, time-series forecasting, anomaly detection, and other functions provided by the various models that you can embed in your architecture. HeatWave can also work directly with storage outside the database. With HeatWave Lakehouse, you can run queries directly on data stored in object storage in a variety of formats without needing to import that data into your MySQL database. 12:50 Lois: With all of these exciting features in HeatWave, Perside, what core MySQL benefits can users continue to enjoy? Perside: The reason why you chose MySQL in the first place still holds: it's still a relational database with full transactional support, low latency, and high throughput for your online transaction processing apps. It has encryption, compression, and high availability clustering. It also has the same large-database support, handling databases of up to 256 terabytes. It has advanced security features, including authentication, data masking, and database firewall. But because it's part of the cloud service, it comes with automated patching, upgrades, and backup. And it is fully supported by the MySQL team. 13:50 Nikita: Ok… let's get back to what the HeatWave service entails. Perside: The HeatWave service is a fully managed MySQL. Through the web-based console, you can deploy your instances and manage backups, enable high availability, resize your instances, create read replicas, and perform many common administration tasks without writing a single line of SQL. It brings with it the power of OCI and MySQL Enterprise Edition. As a managed service, many routine DBA tasks are automated. This includes keeping the instances up to date with the latest version and patches. 
You can run analytics queries right there in the database without needing to extract and transform your databases, or load them into another dedicated analytics system. 14:52 Nikita: Can you share some common use cases for HeatWave? Perside: You have your typical OLTP workloads, just like you'd run on prem, but with the benefit of being managed in the cloud. Analytics queries are accelerated by HeatWave, so your reporting applications and dashboards are way faster. You can run both OLTP and analytics workloads from the same database, keeping your reports up to date without needing a separate reporting infrastructure. 15:25 Lois: I've heard a lot about HeatWave AutoML. Can you explain what that is? Perside: HeatWave AutoML enables in-database artificial intelligence and machine learning. Externally sourced data stores, such as sensor data exported to CSV, can be read directly from object store. And HeatWave generative AI enables chatbots and LLM content creation. 15:57 Lois: Perside, tell us about some of the key features and benefits of HeatWave. Perside: Autopilot is a suite of AI-powered tools to improve the performance and applicability of your HeatWave queries. Autopilot includes two features that help cut costs when you provision your service. There's auto provisioning and auto shape prediction. They analyze your existing use case and tell you exactly which shape to provision for your nodes and how many nodes you need. Auto parallel loading is used when you import data into HeatWave. It splits the import automatically into an optimum number of parallel streams to speed up your import. And then there's auto data placement. It distributes your data across the HeatWave cluster nodes to improve your query retrieval performance. Auto encoding chooses the correct data storage type for your string data, cutting down storage and retrieval time. Auto error recovery automatically recovers a failed node and reloads data if that node becomes unresponsive. 
Auto scheduling prioritizes incoming queries intelligently. And auto change propagation brings data optimally from your DB system to the acceleration cluster. And then there's auto query time estimation and auto query plan improvement. They learn from your workload. They use those statistics to perform on-node adaptive optimization. This optimization allows each query portion to be executed on each local node based on that node's actual data distribution at runtime. Finally, there's auto thread pooling. It adjusts the enterprise thread pool configuration to maximize concurrent throughput. It is workload-aware, and minimizes resource contention, which can be caused by too many waiting transactions. 18:24 Lois: How does HeatWave simplify analytics within MySQL and with external data sources? Perside: HeatWave in Oracle Cloud Infrastructure provides all the features you need for analytics, all in one system. Your classic OLTP applications run on the MySQL database that you know and love, provisioned in a DB system. Online analytical processing is done right there in the database without needing to extract and load the data into another analytics system. With HeatWave Lakehouse, you can even run your analytics queries against external data stores without loading them into your DB system. And you can run your machine learning models and LLMs in the same HeatWave service using HeatWave AutoML and generative AI. HeatWave is not just available in Oracle Cloud Infrastructure. If you're tied to another cloud vendor, such as AWS or Azure, you can use HeatWave from your applications in those clouds too, and at a great price. 19:43 Nikita: That's awesome! Thank you, Perside, for joining us throughout this season on MySQL. These conversations have been so insightful. If you're interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the MySQL 8.4: Essentials course. Lois: This wraps up our season on the essentials of MySQL. 
But before we go, we just want to remind you to write to us if you have any feedback, questions, or ideas for future episodes. Drop us an email at ou-podcast_ww@oracle.com. That's ou-podcast_ww@oracle.com. Nikita: Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 20:33 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Neetu Rajpal, the CEO, and Co-Founder of Lilac Software, brings data and analytics tools to healthcare payers with a specific focus on addressing health disparities and improving benefits for Medicare Advantage members. The Medicare Advantage Star Rating program is a key incentive for health plans to be more innovative about data analysis to improve quality measures and receive financial incentives. Timely, reliable information in an accessible format and better data analytics improve decision-making, patient outcomes, and engagement with providers and patients. Neetu explains, "Health plans tend to have lots of data. They have lots of valuable tools, but those tools have data that is locked inside those tools themselves. So if you're, for example, part of an actuarial team, you may actually be getting PDFs, you may be getting spreadsheets, you may be getting access directly to a CSV file. You may get all these things on a one-off basis, and you have to make sense of all this data. The burden of cleaning and making use of this data falls on you." "This is exactly where lots of energy is lost, lots of labor costs are lost, and lots of efficiency is lost. So with Lilac, we're trying to make sure that all of this stuff of banal value is just available behind the scenes. This is exactly what tech is supposed to be doing for you. Help you operate at the top of your license. If you're an actuary, do actuarial things and let tech make sure that the data you need to do your work is available to you when you need it, and you can just rely on and trust it." "Yes, unfortunately, there is still a lot of paper in the process. Some of the paper is regulatorily required. So if you're a payer, you're required to send any of your members a directory of providers that are available. If they ask for it in paper, you are required to send them ID cards that can be paper. We all know about the fax machines and all of those things on paper." 
#LilacSoftware #DataAnalytics #HealthAI #AI #MedicareAdvantage #MedicareStarRating #HealthPlan #HealthcarePayers #HealthcareInsurance lilacsoftware.com Download the transcript here
Thanks to the 5,975 people who took the 2025 Astral Codex Ten survey. See the questions for the ACX survey See the results from the ACX Survey (click “see previous responses” on that page) I'll be publishing more complicated analyses over the course of the next year, hopefully starting later this month. If you want to scoop me, or investigate the data yourself, you can download the answers of the 5,500 people who agreed to have their responses shared publicly. Out of concern for anonymity, the public dataset will exclude or bin certain questions. If you want more complete information, email me and explain why, and I'll probably send it to you. You can download the public data here as an Excel or CSV file: http://slatestarcodex.com/Stuff/ACXPublic2025.xlsx http://slatestarcodex.com/Stuff/ACXPublic2025.csv Here are some of the answers I found most interesting: https://www.astralcodexten.com/p/acx-survey-results-2025
Tabular data is a broad term that encompasses structured data that generally fits into a specific row and column. It can be a SQL database, a spreadsheet, a .CSV file, etc. While there has been tremendous progress on artificial intelligence applied to unstructured and sequential data, these large language models are fuzzy by design. Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this episode of Research Like a Pro, Nicole and Diana discuss using AI to analyze tax records. Tax research involves a lot of data, and once you've extracted the data, analyzing it can be a challenge. Diana explains how she exported data from Airtable into a CSV file, and Nicole explains how she used Claude AI to create a table from the data. Diana provides an example of how she used the AI analysis to gain new insights into Jefferson Weatherford, an early settler in Dallas County. Diana shares a case study of Henderson Weatherford, demonstrating how the tax records revealed his connection to Samuel H. Beeman and Henderson's death or move by 1865. Nicole shares technical tips for refining a narrative using AI and incorporating the data into a research report. Diana discusses the benefits and limitations of using AI for tax record analysis, emphasizing how it enhanced her analysis and saved writing time while also providing new insights. This summary was generated by Google Gemini. Links Using AI to Analyze Tax Data in a Research Log - https://familylocket.com/using-ai-to-analyze-tax-data-in-a-research-log/ Claude AI - https://claude.ai/new Sponsor – Newspapers.com For listeners of this podcast, Newspapers.com is offering new subscribers 20% off a Publisher Extra subscription so you can start exploring today. Just use the code “FamilyLocket” at checkout. 
Research Like a Pro Resources Airtable Universe - Nicole's Airtable Templates - https://www.airtable.com/universe/creator/usrsBSDhwHyLNnP4O/nicole-dyer Airtable Research Logs Quick Reference - by Nicole Dyer - https://familylocket.com/product-tag/airtable/ Research Like a Pro: A Genealogist's Guide book by Diana Elder with Nicole Dyer on Amazon.com - https://amzn.to/2x0ku3d 14-Day Research Like a Pro Challenge Workbook - digital - https://familylocket.com/product/14-day-research-like-a-pro-challenge-workbook-digital-only/ and spiral bound - https://familylocket.com/product/14-day-research-like-a-pro-challenge-workbook-spiral-bound/ Research Like a Pro Webinar Series 2024 - monthly case study webinars including documentary evidence and many with DNA evidence - https://familylocket.com/product/research-like-a-pro-webinar-series-2024/ Research Like a Pro eCourse - independent study course - https://familylocket.com/product/research-like-a-pro-e-course/ RLP Study Group - upcoming group and email notification list - https://familylocket.com/services/research-like-a-pro-study-group/ Research Like a Pro with DNA Resources Research Like a Pro with DNA: A Genealogist's Guide to Finding and Confirming Ancestors with DNA Evidence book by Diana Elder, Nicole Dyer, and Robin Wirthlin - https://amzn.to/3gn0hKx Research Like a Pro with DNA eCourse - independent study course - https://familylocket.com/product/research-like-a-pro-with-dna-ecourse/ RLP with DNA Study Group - upcoming group and email notification list - https://familylocket.com/services/research-like-a-pro-with-dna-study-group/ Thank you Thanks for listening! We hope that you will share your thoughts about our podcast and help us out by doing the following: Write a review on iTunes or Apple Podcasts. If you leave a review, we will read it on the podcast and answer any questions that you bring up in your review. Thank you! Leave a comment or question in the comment section below. 
Share the episode on Twitter, Facebook, or Pinterest. Subscribe on iTunes or your favorite podcast app. Sign up for our newsletter to receive notifications of new episodes - https://familylocket.com/sign-up/ Check out this list of genealogy podcasts from Feedspot: Best Genealogy Podcasts - https://blog.feedspot.com/genealogy_podcasts/
Measuring the ‘S' in Environmental, Social and Governance (ESG) is becoming more of a priority when it comes to company reporting. In this episode, learn more about the methodology, benefits and challenges involved in social impact reporting, and what you need to know. Tune in to elevate your understanding with a leader in the field. Host: Aidan Ormond, digital content editor, CPA Australia Guest: Adam Vise, Group Treasurer, Strategy and Social Value at Australian Unity, and Chair of Birchal Equity, a crowdfunding platform for entrepreneurs You can learn more about Australian Unity's Impact 2024 and the organisation's community and social value (CSV) framework at their website. Further information on social impact reporting is available on the INTHEBLACK website. Would you like to listen to more INTHEBLACK episodes? Head to CPA Australia's YouTube channel. And you can find a CPA at our custom portal on the CPA Australia website. CPA Australia publishes four podcasts, providing commentary and thought leadership across business, finance and accounting: With Interest INTHEBLACK INTHEBLACK Out Loud Excel Tips Search for them in your podcast platform. Email the podcast team at podcasts@cpaaustralia.com.au
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all to all our LS supporters who helped fund the gorgeous venue and A/V production! For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (which we have now also done for ICLR and ICML), but we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver. Our next keynote covers The State of LLM Agents, with the triumphant return of Professor Graham Neubig to the pod (his ICLR episode here!). OpenDevin is now a startup known as AllHands! The renamed OpenHands has done extremely well this year, as they end the year sitting comfortably at number 1 on the hardest SWE-Bench Full leaderboard at 29%, though on the smaller SWE-Bench Verified, they are at 53%, behind Amazon Q, devlo, and OpenAI's self-reported o3 results at 71.7%. Many are saying that 2025 is going to be the year of agents, with OpenAI, DeepMind and Anthropic setting their sights on consumer and coding agents, vision-based computer-using agents, and multi-agent systems. There has been so much progress on the practical reliability and applications of agents in all domains, from the huge launch of Cognition AI's Devin this year, to the sleeper hit of Cursor Composer and Codeium's Windsurf Cascade in the IDE arena, to the explosive revenue growth of Stackblitz's Bolt, Lovable, and Vercel's v0, and the unicorn rounds and high-profile movements of customer support agents like Sierra (now worth $4 billion) and search agents like Perplexity (now worth $9 billion). 
We wanted to take a little step back to understand the most notable papers of the year in Agents, and Graham indulged with his list of 8 perennial problems in building agents in 2024. Must-Read Papers for the 8 Problems of Agents* The agent-computer interface: CodeAct: Executable Code Actions Elicit Better LLM Agents. Minimal viable tools: Execution Sandbox, File Editor, Web Browsing* The human-agent interface: Chat UI, GitHub Plugin, Remote runtime, …?* Choosing an LLM: See Evaluation of LLMs as Coding Agents on SWE-Bench at 30x - must understand instructions, tools, code, environment, error recovery* Planning: Single Agent Systems vs Multi Agent (CoAct: A Global-Local Hierarchy for Autonomous Agent Collaboration) - Explicit vs Implicit, Curated vs Generated* Reusable common workflows: SteP: Stacked LLM Policies for Web Actions and Agent Workflow Memory - Manual prompting vs Learning from Experience* Exploration: Agentless: Demystifying LLM-based Software Engineering Agents and BAGEL: Bootstrapping Agents by Guiding Exploration with Language* Search: Tree Search for Language Model Agents - explore paths and rewind* Evaluation: Fast Sanity Checks (miniWoB and Aider) and Highly Realistic (WebArena, SWE-Bench) and SWE-Gym: An Open Environment for Training Software Engineering Agents & Verifiers Full Talk on YouTube Please like and subscribe! Timestamps* 00:00 Welcome to Latent Space Live at NeurIPS 2024* 00:29 State of LLM Agents in 2024* 02:20 Professor Graham Neubig's Insights on Agents* 03:57 Live Demo: Coding Agents in Action* 08:20 Designing Effective Agents* 14:13 Choosing the Right Language Model for Agents* 16:24 Planning and Workflow for Agents* 22:21 Evaluation and Future Predictions for Agents* 25:31 Future of Agent Development* 25:56 Human-Agent Interaction Challenges* 26:48 Expanding Agent Use Beyond Programming* 27:25 Redesigning Systems for Agent Efficiency* 28:03 Accelerating Progress with Agent Technology* 28:28 Call to Action for Open Source 
Contributions* 30:36 Q&A: Agent Performance and Benchmarks* 33:23 Q&A: Web Agents and Interaction Methods* 37:16 Q&A: Agent Architectures and Improvements* 43:09 Q&A: Self-Improving Agents and Authentication* 47:31 Live Demonstration and Closing Remarks Transcript[00:00:29] State of LLM Agents in 2024[00:00:29] Speaker 9: Our next keynote covers the state of LLM agents. With the triumphant return of Professor Graham Neubig of CMU and OpenDevin, now a startup known as AllHands. The renamed OpenHands has done extremely well this year, as they end the year sitting comfortably at number one on the hardest SWE-Bench Full leaderboard at 29%.[00:00:53] Speaker 9: Though, on the smaller SWE-Bench Verified, they are at 53 percent, behind Amazon Q [00:01:00] devlo and OpenAI's self-reported o3 results at 71.7%. Many are saying that 2025 is going to be the year of agents, with OpenAI, DeepMind, and Anthropic setting their sights on consumer and coding agents, vision-based computer-using agents, and multi-agent systems.[00:01:22] Speaker 9: There has been so much progress on the practical reliability and applications of agents in all domains, from the huge launch of Cognition AI's Devin this year, to the sleeper hit of Cursor Composer and recent guest Codeium's Windsurf Cascade in the IDE arena. To the explosive revenue growth of recent guests StackBlitz's Bolt, Lovable, and Vercel's v0.[00:01:44] Speaker 9: And the unicorn rounds and high-profile movements of customer support agents like Sierra, now worth 4 billion, and search agents like Perplexity, now worth 9 billion. We wanted to take a little step back to understand the most notable papers of the year in [00:02:00] agents, and Graham indulged with his list of eight perennial problems in building agents.[00:02:06] Speaker 9: As always, don't forget to check our show notes for all the selected best papers of 2024, and for the YouTube link to their talk. 
Graham's slides were especially popular online, and we are honoured to have him. Watch out and take care![00:02:20] Professor Graham Neubig's Insights on Agents[00:02:20] Speaker: Okay hi everyone. So I was given the task of talking about agents in 2024, and this is an impossible task because there are so many agents, so many agents in 2024. So this is going to be strongly covered by like my personal experience and what I think is interesting and important, but I think it's an important topic.[00:02:41] Speaker: So let's go ahead. So the first thing I'd like to think about is let's say I gave you, you know, a highly competent human, some tools. Let's say I gave you a web browser and a terminal or a file system. And the ability to [00:03:00] edit text or code. What could you do with that? Everything. Yeah.[00:03:07] Speaker: Probably a lot of things. This is like 99 percent of my, you know, daily daily life, I guess. When I'm, when I'm working. So, I think this is a pretty powerful tool set, and I am trying to do, and what I think some other people are trying to do, is come up with agents that are able to, you know, manipulate these things.[00:03:26] Speaker: Web browsing, coding, running code in successful ways. So there was a little bit about my profile. I'm a professor at CMU, chief scientist at All Hands AI, building open source coding agents. I'm maintainer of OpenHands, which is an open source coding agent framework. And I'm also a software developer and I, I like doing lots of coding and, and, you know, shipping new features and stuff like this.[00:03:51] Speaker: So building agents that help me to do this, you know, is kind of an interesting thing, very close to me.[00:03:57] Live Demo: Coding Agents in Action[00:03:57] Speaker: So the first thing I'd like to do is I'd like to try [00:04:00] some things that I haven't actually tried before. 
If anybody has, you know, tried to give a live demo, you know, this is, you know, very, very scary whenever you do it and it might not work.[00:04:09] Speaker: So it might not work this time either. But I want to show you like three things that I typically do with coding agents in my everyday work. I use coding agents maybe five to 10 times a day to help me solve my own problems. And so this is a first one. This is a data science task, which says I want to create scatter plots that show the increase of the SWE-Bench score over time.[00:04:34] Speaker: And so I, I wrote a kind of concrete prompt about this. Agents work better with like somewhat concrete prompts. And I'm gonna throw this into OpenHands and let it work. And I'll, I'll go back to that in a second. Another thing that I do is I create new software. And I, I've been using a [00:05:00] service, a particular service,[00:05:01] Speaker: I won't name it, for sending emails, and I'm not very happy with it. So I want to switch over to this new service called resend.com, which makes it easier to send emails. And so I'm going to ask it to read the docs for the resend.com API and come up with a script that allows me to send emails. The input to the script should be a CSV file and the subject and body should be provided in Jinja2 templates.[00:05:24] Speaker: So I'll start another agent and try to get it to do that for me.[00:05:35] Speaker: And let's go with the last one. The last one I do is improving existing software, and, you know, once you write software, you usually don't throw it away. You go in and, like, actually improve it iteratively. This software that I have is something I created without writing any code.[00:05:52] Speaker: It's basically software to monitor how much our agents are contributing to the OpenHands repository. [00:06:00] And, let me make that a little bit bigger, on the left side,
I have the number of issues where it like sent a pull request, whether it was merged in purple, closed in red, or is still open in green. And so these are like, you know, it's helping us monitor, but one thing it doesn't tell me is the total number. And I kind of want that feature added to this software.[00:06:33] Speaker: So I'm going to try to add that too. So I'll take this prompt,[00:06:46] Speaker: and here I want to open up specifically that GitHub repo. So I'll open up that repo and paste in the prompt asking it. I asked it to make a pie chart for each of these and give me the total over the entire time period that I'm [00:07:00] monitoring. So we'll do that. And so now I have, let's see, I have some agents.[00:07:05] Speaker: Oh, this one already finished. Let's see. So this one already finished. You can see it finished analyzing the SWE-Bench repository. It wrote a demonstration of, yeah, I'm trying to do that now, actually.[00:07:30] Speaker: It wrote a demonstration of how much each of the systems have improved over time. And I asked it to label the top three for each of the data sets. And so it labeled OpenHands as being the best one for SWE-Bench normal. For SWE-Bench Verified, it has like the Amazon Q agent and OpenHands. For SWE-Bench Lite, it has three over here.[00:07:53] Speaker: So you can see, like, that's pretty useful, right? If you're a researcher, you do data analysis all the time. I did it while I was talking to all [00:08:00] of you and making a presentation. So that's, that's pretty nice. I, I doubt the other two are finished yet. That would be impressive if the, yeah. So I think they're still working.[00:08:09] Speaker: So maybe we'll get back to them at the end of the presentation. But so these are the kinds of things that I do every day with coding agents, or software development agents, now.
It's pretty impressive.[00:08:20] Designing Effective Agents[00:08:20] Speaker: The next thing I'd like to talk about a little bit is things I worry about when designing agents.[00:08:24] Speaker: So we're designing agents to, you know, do a very difficult task of like navigating websites, writing code, other things like this. And within 2024, there's been like a huge improvement in the methodology that we use to do this. But there's a bunch of things we think about. There's a bunch of interesting papers, and I'd like to introduce a few of them.[00:08:46] Speaker: So the first thing I worry about is the agent-computer interface. Like, how do we get an agent to interact with computers? And how do we provide agents with the tools to do the job? And [00:09:00] within OpenHands we are doing the thing on the right, but there's also a lot of agents that do the thing on the left.[00:09:05] Speaker: So the thing on the left is you give agents kind of granular tools. You give them tools like, or let's say your instruction is, I want to determine the most cost-effective country to purchase the smartphone model, Kodak One. The countries to consider are the USA, Japan, Germany, and India. And you have a bunch of available APIs.[00:09:26] Speaker: And so what you do for some agents is you provide them all of these APIs as tools that they can call. And so in this particular case, in order to solve this problem, you'd have to make about like 30 tool calls, right? You'd have to call lookup rates for Germany, you'd have to look it up for the US, Japan, and India.[00:09:44] Speaker: That's four tool calls. And then you go through and do all of these things separately. And the method that we adopt in OpenHands instead is we provide these tools, but we provide them by just giving a coding agent the ability to call [00:10:00] arbitrary Python code.
In the arbitrary Python code, it can call these tools.[00:10:05] Speaker: We expose these tools as APIs that the model can call. And what that allows us to do is, instead of writing 20 tool calls, making 20 LLM calls, you write a program that runs all of these all at once, and it gets the result. And of course it can execute that program. It can, you know, make a mistake. It can get errors back and fix things.[00:10:23] Speaker: But that makes our job a lot easier. And this has been really like instrumental to our success, I think. Another part of this is what tools does the agent need? And I, I think this depends on your use case; we're kind of extreme and we're only giving the agent five tools, or maybe six tools.[00:10:40] Speaker: And what, what are they? The first one is program execution. So it can execute bash programs, and it can execute Jupyter notebooks. It can execute cells in Jupyter notebooks. So that, those are two tools. Another one is a file editing tool. And the file editing tool allows you to browse parts of files.[00:11:00][00:11:00] Speaker: And kind of read them, overwrite them, other stuff like this. And then we have another global search and replace tool. So it's actually two tools for file editing. And then a final one is web browsing. Web browsing, I'm kind of cheating when I call it only one tool. You actually have like scroll and text input and click and other stuff like that.[00:11:18] Speaker: But these are basically the only things we allow the agent to do. Then the question is, like, what if we wanted to allow it to do something else? And the answer is, well, you know, human programmers already have a bunch of things that they use.
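The cost contrast between granular tool calls and the code-execution approach described above can be sketched roughly like this. Everything here is illustrative: `lookup_rates` and `lookup_phone_price` are hypothetical stand-ins for the pricing APIs in the smartphone example, and the numbers are made up; this is not OpenHands code.

```python
# Hypothetical stand-ins for the pricing APIs in the smartphone example.
# The rates and prices below are invented for illustration.
def lookup_rates(country):
    return {"USA": 0.08, "Japan": 0.10, "Germany": 0.19, "India": 0.18}[country]

def lookup_phone_price(country):
    return {"USA": 999, "Japan": 950, "Germany": 1020, "India": 900}[country]

# Granular-tool style: the LLM would emit one tool call per lookup,
# so pricing four countries costs many separate LLM round trips.
# Code-execution style: the agent writes ONE program that performs all
# the lookups in a single step and returns the answer directly.
def cheapest_country(countries):
    totals = {c: lookup_phone_price(c) * (1 + lookup_rates(c)) for c in countries}
    return min(totals, key=totals.get)

print(cheapest_country(["USA", "Japan", "Germany", "India"]))
```

The point is not the arithmetic but the call count: the loop replaces a long chain of individual tool invocations with one executable program, and any failures come back as ordinary Python exceptions the agent can read and repair.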
They have the requests PyPI library, they have the PDF-to-text PyPI library, they have, like, all these other libraries in the Python ecosystem that they could use.[00:11:41] Speaker: And so if we provide a coding agent with all these libraries, it can do things like data visualization and other stuff that I just showed you. So it can also git clone repositories and other things like this. The agents are super good at using the GitHub API also. So they can do, you know, things on GitHub, like finding all of the, you know, [00:12:00] comments on your issues or checking GitHub Actions and stuff.[00:12:02] Speaker: The second thing I think about is the human-agent interface. So this is like, how do we get humans to interact with agents? I already showed you one variety of our human-agent interface. It's basically a chat window where you can browse through the agent's results and things like this. This is very, very difficult.[00:12:18] Speaker: I, I don't think anybody has a good answer to this, and I don't think we have a good answer to this, but the, the guiding principles that I'm trying to follow are we want to present enough info to the user. So we want to present them with, you know, what the agent is doing in the form of a kind of [00:12:36] Speaker: English description. So you can see here, every time it takes an action, it says like, I will help you create a script for sending emails. When it runs a bash command. Sorry, that's a little small. When it runs a bash command, it will say "ran a bash command". It won't actually show you the whole bash command or the whole Jupyter notebook because it can be really large, but you can open it up and see if you [00:13:00] want to, by clicking on this.[00:13:01] Speaker: So like, if you want to explore more, you can click over to the Jupyter notebook and see what's displayed in the Jupyter notebook. And you get like lots and lots of information.
So that's one thing.[00:13:16] Speaker: Another thing is go where the user is. So like, if the user's already interacting in a particular setting, then I'd like to, you know, integrate into that setting, but only to a point. So at OpenHands, we have a chat UI for interaction. We have a GitHub plugin for tagging and resolving issues. So basically what you do is you tag @OpenHands agent, and the OpenHands agent will like see that comment and be able to go in and fix things.[00:13:42] Speaker: So if you say, "@OpenHands agent, tests are failing on this PR, please fix the tests," it will go in and fix the tests for you, and stuff like this. Another thing we have is a remote runtime for launching headless jobs. So if you want to launch like a fleet of agents to solve, you know, five different problems at once, you can also do [00:14:00] that through an API.[00:14:00] Speaker: So we have these interfaces, and this probably depends on the use case. So like, depending, if you're a coding agent, you want to do things one way. If you're, like, an insurance auditing agent, you'll want to do things other ways, obviously.[00:14:13] Choosing the Right Language Model for Agents[00:14:13] Speaker: Another thing I think about a lot is choosing a language model.[00:14:16] Speaker: And for agentic LMs we have to have a bunch of things work really well. The first thing is really, really good instruction following ability. And if you have really good instruction following ability, it opens up like a ton of possible applications for you. Tool use and coding ability. So if you provide tools, it needs to be able to use them well.[00:14:38] Speaker: Environment understanding. So it needs, like, if you're building a web agent, it needs to be able to understand web pages either through vision or through text. And error awareness and recovery ability.
So, if it makes a mistake, it needs to be able to, you know, figure out why it made a mistake, come up with alternative strategies, and other things like this.[00:14:58] Speaker: [00:15:00] Under the hood, in all of the demos that I did just now, we're using Claude. Claude has all of these abilities: very good, not perfect, but very good. Most others don't have these abilities quite as much. So like, GPT-4o doesn't have very good error recovery ability. And so because of this, it will go into loops and do the same thing over and over and over again.[00:15:22] Speaker: Whereas Claude does not do this. Claude, if you, if you use the agents enough, you get used to their kind of like personality. And Claude says, "Hmm, let me try a different approach" a lot. So, you know, obviously it's been trained in some way to, you know, elicit this ability. We did an evaluation. This is old.[00:15:40] Speaker: And we need to update this basically, but we evaluated Claude, Gemini, Llama 405B, and DeepSeek 2.5 on being a good code agent within our framework. And Claude was kind of head and shoulders above the rest. GPT-4o was kind of okay. The best open source model was Llama [00:16:00] 3.1 405B. This needs to be updated because this is like a few months old by now and, you know, things are moving really, really fast.[00:16:05] Speaker: But I still am under the impression that Claude is the best. The other closed models are, you know, not quite as good. And then the open models are a little bit behind that. Grok, I, we haven't tried Grok at all, actually. So, it's a good question. If you want to try it, I'd be happy to help.[00:16:24] Speaker: Cool.[00:16:24] Planning and Workflow for Agents[00:16:24] Speaker: Another thing is planning. And so there's a few considerations for planning. The first one is whether you have a curated plan or you have it generated on the fly. And so for solving GitHub issues, you can kind of have an overall plan. Like, the plan is: first, reproduce.
If there's an issue, first write tests to reproduce the issue or to demonstrate the issue.[00:16:50] Speaker: After that, run the tests and make sure they fail. Then go in and fix the code. Run the tests again to make sure they pass, and then you're done. So that's like a pretty good workflow [00:17:00] for like solving coding issues. And you could curate that ahead of time. Another option is to let the language model basically generate its own plan.[00:17:10] Speaker: And both of these are perfectly valid. Another one is explicit structure versus implicit structure. So let's say you generate a plan. If you have explicit structure, you could like write a multi-agent system, and the multi-agent system would have your reproducer agent, and then it would have your test writer agent, and your bug fixer agent, and lots of different agents, and you would explicitly write this all out in code, and then use it that way.[00:17:38] Speaker: On the other hand, you could just provide a prompt that says, please do all of these things in order. So in OpenHands, we do very light planning. We have a single prompt. We don't have any multi-agent systems. But we do provide, like, instructions about, like, what to do first, what to do next, and other things like this.[00:17:56] Speaker: I'm not against doing it the other way. But I laid [00:18:00] out some kind of justification for this in this blog called Don't Sleep on Single Agent Systems. And the basic idea behind this is, if you have a really, really good instruction-following agent, it will follow the instructions as long as things are working according to your plan.[00:18:14] Speaker: But let's say you need to deviate from your plan, you still have the flexibility to do this. And if you do explicit structure through a multi-agent system, it becomes a lot harder to do that. Like, you get stuck when things deviate from your plan.
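The "light planning" setup being described can be sketched as a curated workflow folded into a single prompt rather than hard-coded into multi-agent hand-offs. The step list and prompt wording below are illustrative assumptions, not OpenHands' actual prompt:

```python
# A curated workflow for GitHub-issue fixing, expressed as data and
# folded into one prompt. The steps mirror the talk's example workflow.
ISSUE_WORKFLOW = [
    "Write a test that reproduces the reported issue.",
    "Run the test and confirm it fails.",
    "Edit the code to fix the underlying bug.",
    "Run the test again and confirm it passes.",
]

def build_prompt(task, steps):
    # Number the steps and attach them to the task as plain instructions.
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"Task: {task}\n"
        "Follow these steps in order, but deviate if they stop working:\n"
        + numbered
    )

print(build_prompt("Fix issue #123", ISSUE_WORKFLOW))
```

Because the plan lives in the prompt rather than in explicit agent-to-agent structure, a strong instruction-following model can abandon it mid-task when reality deviates, which is exactly the flexibility argument made above.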
There's also some other examples, and I wanted to introduce a few papers.[00:18:30] Speaker: So one paper I liked recently is this paper called CoAct, where you generate plans and then go in and fix them. And so the basic idea is like, if you need to deviate from your plan, you can, you know, figure out that your plan was not working, and go back and fix it.[00:18:49] Speaker: Another thing I think about a lot is specifying common workflows. So we're trying to tackle software development, and I already showed like three use cases where we do [00:19:00] software development, and when we do software development, we do a ton of different things, but we do them over and over and over again.[00:19:08] Speaker: So just to give an example, we fix GitHub Actions when GitHub Actions are failing. And we do that over and over and over again. That's not the number one thing that software engineers do, but it's, you know, high up on the list. So how can we get a list of all of, like, the workflows that people are working on?[00:19:26] Speaker: And there's a few research works that people have done in this direction. One example is manual prompting. So there's this nice paper called SteP that got state of the art on the WebArena web navigation benchmark, where they came up with a bunch of manual workflows for solving different web navigation tasks.[00:19:43] Speaker: And we also have a paper recently called Agent Workflow Memory, where the basic idea behind this is we want to create self-improving agents that learn from their past successes. And the way it works is we have a memory that has an example of lots of the previous [00:20:00] workflows that people have used.
And every time the agent finishes a task and it self-judges that it did a good job at that task, you take that task, you break it down into individual workflows included in that, and then you put it back in the prompt for the agent to work next time.[00:20:16] Speaker: And we demonstrated that this leads to a 22.5 percent increase on WebArena after 40 examples. So that's a pretty, you know, huge increase by kind of self-learning and self-improvement.[00:20:31] Speaker: Another thing is exploration. Oops. And one thing I think about is like, how can agents learn more about their environment before acting? And I work on coding and web agents, and there's, you know, a few good examples of this in, in both areas. Within coding, I view this as like repository understanding, understanding the code base that you're dealing with.[00:20:55] Speaker: And there's an example of this, or a couple examples of this, one example being Agentless, [00:21:00] where they basically create a map of the repo, and based on the map of the repo, they feed that into the agent so the agent can then navigate the repo and better know where things are. And for web agents there's an example of a paper called BAGEL, and basically what they do is they have the agent just do random tasks on a website, explore the website, better understand the structure of the website, and then after that they feed that in as part of the prompt.[00:21:27] Speaker: Part seven is search. Right now in OpenHands, we just let the agent go on a linear search path. So it's just solving the problem once. We're using a good agent that can kind of like recover from errors and try alternative things when things are not working properly, but still we only have a linear search path.[00:21:45] Speaker: But there's also some nice work in 2024 that is about exploring multiple paths. So one example of this is there's a paper called Tree Search for Language Model Agents.
And they basically expand multiple paths, check whether the paths are going well, [00:22:00] and if they aren't going well, you rewind back. And on the web, this is kind of tricky, because, like, how do you rewind when you accidentally ordered something you don't want on Amazon?[00:22:09] Speaker: It's kind of, you know, not, not the easiest thing to do. For code, it's a little bit easier, because you can just revert any changes that you made. But I, I think that's an interesting topic, too.[00:22:21] Evaluation and Future Predictions for Agents[00:22:21] Speaker: And then finally, evaluation. So within our development, for evaluation, we want to do a number of things. The first one is fast sanity checks.[00:22:30] Speaker: And in order to do this, we want things we can run really fast, really cheaply. So for web, we have something called Mini World of Bits, which is basically these trivial kind of web navigation things. We have something called the Aider code editing benchmark, where it's just about editing individual files, that we use.[00:22:48] Speaker: But we also want highly realistic evaluation. So for the web, we have something called WebArena that we created at CMU. This is web navigation on real open source websites. So it's open source [00:23:00] websites that are actually used to serve shops or like bulletin boards or other things like this.[00:23:07] Speaker: And for code, we use SWE-Bench, which I think a lot of people may have heard of. It's basically a coding benchmark that comes from real-world pull requests on GitHub. So if you can solve those, you can also probably solve other real-world pull requests. I would say we still don't have benchmarks for the full versatility of agents.[00:23:25] Speaker: So, for example, we don't have benchmarks that test whether agents can code and do web navigation. But we're working on that and hoping to release something in the next week or two.
So if that sounds interesting to you, come talk to me and I, I will tell you more about it.[00:23:42] Speaker: Cool. So I don't like making predictions, but I was told that I should be somewhat controversial, I guess, so I will, I will try to do it anyway, although maybe none of these will be very controversial. Um, the first thing is agent-oriented LLMs, like large language models for [00:24:00] agents.[00:24:00] Speaker: My, my prediction is every large LLM trainer will be focusing on training models as agents. So every large language model will be a better agent model by mid-2025. Competition will increase, prices will go down, smaller models will become competitive as agents. So right now, actually, agents are somewhat expensive to run in some cases, but I expect that that won't last six months.[00:24:23] Speaker: I, I bet we'll have much better agent models in six months. Another thing is instruction-following ability, specifically in agentic contexts, will increase. And what that means is we'll have to do less manual engineering of agentic workflows and be able to do more by just prompting agents in more complex ways.[00:24:44] Speaker: Claude is already really good at this. It's not perfect, but it's already really, really good. And I expect the other models will catch up to Claude pretty soon. Error correction ability will increase, less getting stuck in loops. Again, this is something that Claude's already pretty good at and I expect the others will, will follow.[00:25:00][00:25:01] Speaker: Agent benchmarks. Agent benchmarks will start saturating.[00:25:05] Speaker: And SWE-Bench, and I think WebArena, are already too easy. It, it is, it's not super easy, but it's already a bit too easy, because the tasks we do in there are ones that take like two minutes for a human. So not, not too hard. And kind of historically, in 2023 our benchmarks were too easy.
So we built harder benchmarks; WebArena and SWE-Bench were both built in 2023.[00:25:31] Future of Agent Development[00:25:31] Speaker: In 2024, our agents were too bad, so we built agents, and now we're building better agents. In 2025, our benchmarks will be too easy, so we'll build better benchmarks, I'm, I'm guessing. So, I would expect to see much more challenging agent benchmarks come out, and we're already seeing some of them.[00:25:49] Speaker: In 2026, I don't know. I didn't write AGI, but we'll, we'll, we'll see.[00:25:56] Human-Agent Interaction Challenges[00:25:56] Speaker: Then the human-agent-computer interface. I think one thing that [00:26:00] we'll want to think about is what do we do at 75 percent success rate at things that we like actually care about? Right now we have 53 percent or 55 percent on SWE-Bench Verified, which is real-world GitHub PRs.[00:26:16] Speaker: My impression is that the actual ability of models is maybe closer to 30 to 40%. So 30 to 40 percent of the things that I want an agent to solve on my own repos, it just solves without any human intervention. 80 to 90 percent it can solve without me opening an IDE, but I need to give it feedback.[00:26:36] Speaker: So how do we, how do we make that interaction smooth so that humans can audit? The work of agents that are really, really good, but not perfect, is going to be a big challenge.[00:26:48] Expanding Agent Use Beyond Programming[00:26:48] Speaker: How can we expose the power of programming agents to other industries? So like, as programmers, I think not all of us are using agents every day in our programming, although we probably will be [00:27:00] in months or maybe a year.[00:27:02] Speaker: But I, I think it will come very naturally to us as programmers because we know code. We know, you know, like how to architect software and stuff like that.
So I think the question is how do we put this in the hands of, like, a lawyer or a chemist or somebody else and have them also be able to, you know, interact with it as naturally as we can.[00:27:25] Redesigning Systems for Agent Efficiency[00:27:25] Speaker: Another interesting thing is how can we redesign our existing systems for agents? So we had a paper on API-based web agents, and basically what we showed is, if you take a web agent and the agent interacts not with a website, but with APIs, the accuracy goes way up, just because APIs are way easier to interact with.[00:27:42] Speaker: And in fact, like when I ask the, well, our agent, our agent is able to browse websites, but whenever I want it to interact with GitHub, I tell it, do not browse the GitHub website. Use the GitHub API, because it's way more successful at doing that. So maybe, you know, every website is going to need to have [00:28:00] an API because we're going to be having agents interact with them.[00:28:03] Accelerating Progress with Agent Technology[00:28:03] Speaker: About progress, I think progress will get faster. It's already fast. A lot of people are already overwhelmed, but I think it will continue. The reason why is agents are building agents. And better agents will build better agents faster. So I expect that, you know, if you haven't interacted with a coding agent yet, it's pretty magical, like the stuff that it can do.[00:28:24] Speaker: So yeah.[00:28:28] Call to Action for Open Source Contributions[00:28:28] Speaker: And I have a call to action. Honestly, like, I've been working on, you know, natural language processing and language models for, what, 15 years now. And even for me, it's pretty impressive what, like, AI agents powered by strong language models can do.
On the other hand, I believe that we should really make these powerful tools accessible.[00:28:49] Speaker: And what I mean by this is I don't think, like, you know, we, we should have these be opaque or limited to only a certain set of people. I feel like they should be [00:29:00] affordable. They shouldn't be increasing the, you know, difference in the amount of power that people have. If anything, I'd really like them to kind of make it possible for people who weren't able to do things before to be able to do them well.[00:29:13] Speaker: Open source is one way to do that. That's why I'm working on open source. There are other ways to do that. You know, make things cheap, make things, you know, so you can serve them to people who aren't able to afford them easily. Like, Duolingo is one example, where they get all the people in the US to pay them $20 a month so that they can give all the people in South America free, you know, language education, so they can learn English and become, you know, more attractive on the job market, for instance.[00:29:41] Speaker: And so I think we can all think of ways that we can do that sort of thing. And if that resonates with you, please contribute. Of course, I'd be happy if you contribute to OpenHands and use it. But another way you can do that is just use open source solutions, contribute to them, research with them, and train strong open source [00:30:00] models.[00:30:00] Speaker: So I see, you know, some people in the room who are already training models. It'd be great if you could train models for coding agents and make them cheap. And yeah, yeah, please. I, I was thinking about you, among others. So yeah, that's all I have. Thanks.[00:30:20] Speaker 2: Slight, slightly controversial. Takes is probably the nicest way to say hot takes.
Any hot takes questions, actual hot takes?[00:30:31] Speaker: Oh, I can also show the other agents that were working, if anybody's interested, but yeah, sorry, go ahead.[00:30:36] Q&A: Agent Performance and Benchmarks[00:30:36] Speaker 3: Yeah, I have a couple of questions. So they're kind of paired, maybe. The first thing is that you said that you're estimating that your agent is successfully resolving like something like 30 to 40 percent of your issues, but that's like below what you saw on SWE-Bench.[00:30:52] Speaker 3: So I guess I'm wondering where that discrepancy is coming from. And then I guess my other second question, which is maybe broader in scope, is that [00:31:00] like, if, if you think of an agent as like a junior developer, and I say, go do something, then I expect maybe tomorrow to get a Slack message being like, Hey, I ran into this issue.[00:31:10] Speaker 3: How can I resolve it? And, and, like you said, your agent is, like, successfully solving, like, 90 percent of issues where you give it direct feedback. So, are you thinking about how to get the agent to reach out to, like, for, for planning when it's, when it's stuck or something like that? Or, like, identify when it runs into a hole like that?[00:31:30] Speaker: Yeah, so great. These are great questions. Oh,[00:31:32] Speaker 3: sorry. The third question, which is a good, so this is the first two. And if so, are you going to add a benchmark for that second question?[00:31:40] Speaker: Okay. Great. Yeah. Great questions. Okay. So the first question was why do I think it's resolving less than 50 percent of the issues on SWE-Bench?[00:31:48] Speaker: So first, SWE-Bench is on popular open source repos, and all of these popular open source repos were included in the training data for all of the language models. And so the language [00:32:00] models already know these repos.
In some cases, the language models already know the individual issues in SWE-Bench.[00:32:06] Speaker: So basically, like, some of the training data has leaked. And so it, it definitely will overestimate with respect to that. I don't think it's like, you know, horribly off, but I think, you know, it's boosting the accuracy by a little bit. So, maybe that's the biggest reason why. In terms of asking for help, and whether we're benchmarking asking for help, yes we are.[00:32:29] Speaker: So one thing we're working on now, which we're hoping to put out soon, is we basically made super-vague SWE-Bench issues. Like, "I'm having a problem with the matrix multiply. Please help." Because these are like, if anybody's run a popular open source, like, framework, these are what half your issues are.[00:32:49] Speaker: You're like, users show up and say, like, my screen doesn't work. What, what's wrong, or something. And so then you need to ask them questions and how to reproduce. So yeah, we're, we're, we're working on [00:33:00] that. My impression is that agents are not very good at asking for help, even Claude. So like, when, when they ask for help, they'll ask for help when they don't need it.[00:33:11] Speaker: And then won't ask for help when they do need it. So this is definitely like an issue, I think.[00:33:20] Speaker 4: Thanks for the great talk. I also have two questions.[00:33:23] Q&A: Web Agents and Interaction Methods[00:33:23] Speaker 4: The first one: can you talk a bit more about how the web agent interacts? So is there a VLM that looks at the web page layout and then you parse the HTML and select which buttons to click on? And if so, do you think there's a future where there's, like, so I work at Bing, Microsoft AI.[00:33:41] Speaker 4: Do you think there's a future where the same web index, but there's an agent-friendly web index, where all the processing is done offline, so that you don't need to spend time.
cleaning up the HTML and figuring out what to click online? Any thoughts on that?[00:33:57] Speaker: Yeah, so great question. There's a lot of work on web [00:34:00] agents. I didn't go into all of the details, but I think there are three main ways that agents interact with websites. The first way is the simplest way and the newest way, but it doesn't work very well: you take a screenshot of the website and then you click on a particular pixel value on the website.[00:34:23] Speaker: And models are not very good at that at the moment. They'll misclick. There was this thing about how Claude computer use started looking at pictures of Yellowstone National Park or something like this. I don't know if you heard about this anecdote, but people were like, oh, it's so human, it's looking for a vacation.[00:34:40] Speaker: And it was like, no, it probably just misclicked on the wrong pixels and accidentally clicked on an ad. So that's the simplest way. The second way: you take the HTML and you basically identify elements in the HTML. You don't use any vision whatsoever. And then you say, okay, I want to click on this element,[00:34:59] Speaker: I want to enter text [00:35:00] in this element, or something like that. But HTML is too huge, so it usually gets condensed down into something called an accessibility tree, which was made for screen readers for visually impaired people. So that's another way. And then the third way is kind of a hybrid, where you present the screenshot, but you also present a textual summary of the output.[00:35:18] Speaker: And that's the one that I think will probably work best. What we're using is just text at the moment. That's just an implementation issue: we haven't implemented the visual stuff yet, but we're working on it now.
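To make the second, text-only mode concrete, here is a toy sketch using only the Python standard library. The tag set, labels, and numbering scheme are illustrative assumptions, not any particular agent's real observation format; real systems condense much more aggressively via the browser's accessibility tree.

```python
from html.parser import HTMLParser

class ElementIndexer(HTMLParser):
    """Collect interactive elements from raw HTML into a numbered text list."""
    INTERACTIVE = {"a", "button", "input", "select", "textarea"}

    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        if tag in self.INTERACTIVE:
            a = dict(attrs)
            # Prefer an accessible label, falling back to value or link target.
            label = a.get("aria-label") or a.get("value") or a.get("href") or ""
            self.elements.append(f"[{len(self.elements)}] <{tag}> {label}".rstrip())

def observe(html: str) -> str:
    """Return a condensed, text-only observation of the page."""
    parser = ElementIndexer()
    parser.feed(html)
    return "\n".join(parser.elements)
```

An agent can then act by index ("click [1]") rather than by pixel coordinates, which sidesteps the misclick problem described above.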
Another thing that I should point out is we actually have two modalities for web browsing.[00:35:35] Speaker: Very recently we implemented this. And the reason why is because if you want to interact with full websites, you need the ability to click on all of the elements. But most of our work that we need websites for is just web browsing and gathering information.[00:35:50] Speaker: So we have another modality where we convert all of it to markdown, because that's way more concise and easier for the agent to deal with. And then [00:36:00] could we create an index specifically for agents? Maybe a markdown index or something like that would make sense. Oh, how would I make a successor to SWE-bench?[00:36:10] Speaker: So the first thing is there's LiveCodeBench, which is basically continuously updating to make sure it doesn't leak into language model training data. That's easy to do for SWE-bench because it comes from real websites, and those real websites are getting new issues all the time.[00:36:27] Speaker: So you could just do it on the same benchmarks that they have there. There's also a pretty large number of things covering various coding tasks. So, for example, SWE-bench is mainly fixing issues, but there's also documentation, there's generating tests that actually test the functionality that you want.[00:36:47] Speaker: And there was a paper by a student at CMU on generating tests and stuff like that. So I feel like SWE-bench is one piece of the puzzle, but you could also have 10 different other tasks, and then you could have a composite [00:37:00] benchmark where you test all of these abilities, not just that particular one.[00:37:04] Speaker: Well, lots of other things too, but[00:37:11] Speaker 2: Question from across. Use your mic, it will help. Um,[00:37:15] Speaker 5: Great talk.
Thank you.[00:37:16] Q&A: Agent Architectures and Improvements[00:37:16] Speaker 5: My question is about your experience designing agent architectures. Specifically, how much do you have to separate concerns in terms of task-specific agents, versus having one agent do three or five things with a gigantic prompt with conditional paths and so on?[00:37:35] Speaker: Yeah, so that's a great question. So we have a basic coding and browsing agent. And I won't say basic; it's a good agent, but it does coding and browsing. And it has instructions about how to do coding and browsing. That is enough for most things, especially given a strong language model that has a lot of background knowledge about how to solve different types of tasks and how to use different APIs and stuff like that.[00:37:58] Speaker: We do have [00:38:00] a mechanism for something called micro agents. And micro agents are basically something that gets added to the prompt when a trigger is triggered. Right now it's very, very rudimentary. It's like: if you detect the word GitHub anywhere, you get instructions about how to interact with GitHub, like use the API and don't browse.[00:38:17] Speaker: Another one that I just added is for npm, the JavaScript package manager. When npm runs and hits a failure, it hits an interactive terminal prompt that says, would you like to quit? Enter yes. And if nothing answers it, it stalls our agent until the timeout, like two minutes.[00:38:36] Speaker: So I added a new micro agent: whenever it starts using npm, it gets instructions about how to avoid the interactive terminal, and stuff like that. So that's our current solution. Honestly, I like it a lot. It's simple, it's easy to maintain, and it works really well. But I think there is a world where you would want something more complex than that.[00:38:55] Speaker 5: Got it.
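The micro-agent mechanism described above can be sketched in a few lines. The trigger words and instruction snippets below are illustrative stand-ins, not the project's actual prompts:

```python
# Map trigger words to extra instructions that get appended to the prompt
# whenever the trigger appears in the conversation. (Illustrative content.)
MICRO_AGENTS = {
    "github": "Interact with GitHub through the API; do not browse the website.",
    "npm": "Avoid interactive prompts: run npm non-interactively (e.g. --yes).",
}

def augment_prompt(base_prompt: str, message: str) -> str:
    """Append every micro-agent whose trigger word appears in the message."""
    lower = message.lower()
    extras = [text for trigger, text in MICRO_AGENTS.items() if trigger in lower]
    return base_prompt + "".join(f"\n[micro-agent] {e}" for e in extras)
```

The appeal is exactly what the speaker notes: the base agent stays small, and domain-specific guidance is only injected when it is likely to be relevant.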
Thank you.[00:38:59] Speaker 6: I got a [00:39:00] question about MCP, the Anthropic Model Context Protocol. It seems like the most successful type of this standardization of interactions between computers and agents. Are you guys adopting it? Is there any other competing standard?[00:39:16] Speaker 6: Any thoughts about it?[00:39:17] Speaker: Yeah, so the Anthropic MCP is essentially a collection of APIs that you can use to interact with different things on the internet. I think it's not a bad idea, but there are a few things that bug me a little bit about it.[00:39:40] Speaker: We already have an API for GitHub, so why do we need an MCP for GitHub? Right? GitHub has an API, the GitHub API is evolving, and we can look up the GitHub API documentation. So it seems kind of duplicated a little bit. And also they have a setting where [00:40:00] you have to spin up a server to serve your GitHub stuff,[00:40:04] Speaker: and you have to spin up a server to serve your other stuff. And so I think it makes sense if you really care about separation of concerns and security and other things like this, but right now we haven't seen that to have a lot more value than interacting directly with the tools that are already provided.[00:40:26] Speaker: And that kind of goes into my general philosophy, which is: we're already developing things for programmers. You know,[00:40:36] Speaker: how is an agent different from a programmer? And it is different, obviously; agents are different from programmers, but they're not that different at this point. So we can kind of interact with the interfaces we create for programmers. Yeah. I might change my mind later, though.[00:40:51] Speaker: So we'll see.[00:40:54] Speaker 7: Yeah. Hi. Thanks. Very interesting talk.
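The point that existing APIs already serve as a tool interface can be illustrated with plain standard-library code. The endpoint path and token below are placeholders, and the request is built here but never sent:

```python
import urllib.request

def github_request(path: str, token: str) -> urllib.request.Request:
    """Build an authenticated request against the existing GitHub REST API.

    No extra protocol layer or local server is needed: the same interface
    built for human programmers works as an agent tool.
    """
    req = urllib.request.Request(f"https://api.github.com{path}")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/vnd.github+json")
    return req
```

Sending it would be one more call (`urllib.request.urlopen(req)`), which is the speaker's argument: the duplication cost of wrapping such APIs in another standard has to buy something, like security isolation, to be worth it.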
You were saying that the agents you have right now [00:41:00] solve maybe 30 percent of your issues out of the gate. I'm curious about the things it doesn't do. Is there a pattern that you observe, like, oh, these are the sorts of things that it just seems to really struggle with? Or is it just seemingly random?[00:41:15] Speaker: It's definitely not random. If a task is intuitively more complex, it's more likely to fail. I've gotten a bit better at prompting also. Just to give an example: it will sometimes fail to fix a GitHub workflow, because it will not look at the GitHub workflow and understand what the workflow is doing before it solves the problem.[00:41:43] Speaker: So I think probably the biggest thing that it fails at, that our agent plus Claude fails at, is insufficient information gathering before trying to solve the task. And so if you provide instructions that it should do information [00:42:00] gathering beforehand, it tends to do well.[00:42:01] Speaker: If you don't provide sufficient instructions, it will try to solve the task without fully understanding the task first, and then fail, and then you need to go back and give additional feedback. Another example, and I love this example: while I was developing the monitor website that I showed here, we hit a really tricky bug where it was writing out a cache file to a different directory than it was reading the cache file from.[00:42:26] Speaker: And I had no idea what to do. I had no idea what was going on. I thought the bug was in a different part of the code. But what I asked it to do was come up with five possible reasons why this could be failing, in decreasing order of likelihood, and examine all of them.
And that worked, and it could just go in and do that.[00:42:44] Speaker: So I think a certain level of scaffolding about how it should sufficiently gather all the information necessary to solve a task is key. If that's missing, then that's probably the biggest failure point at the moment. [00:43:00][00:43:01] Speaker 7: Thanks.[00:43:01] Speaker 6: Yeah.[00:43:06] Speaker 6: I'm just using this as a chance to ask you all my questions.[00:43:09] Q&A: Self-Improving Agents and Authentication[00:43:09] Speaker 6: You had a slide on here about self-improving agents or something like that, with memory. It's a really throwaway slide for a super powerful idea. It got me thinking about how I would do it. I have no idea how.[00:43:21] Speaker 6: So I just wanted you to expand a bit more on this.[00:43:25] Speaker: Yeah, self-improving. So I think the simplest possible way to create a self-improving agent is to have a really, really strong language model with infinite context, so it can just go back and look at all of its past experiences and, you know, learn from them.[00:43:46] Speaker: You might also want to remove the bad stuff, just so it doesn't over-index on its failed past experiences. But the problem is that a really powerful language model is large, and infinite context is expensive. We don't have a good way to [00:44:00] index into it, because RAG, at least in my experience, RAG from language to code doesn't work super well.[00:44:08] Speaker: So I think in the end, that's the way I would like to solve this problem. I'd like to have infinite context and somehow be able to index into it appropriately. And I think that would mostly solve it. Another thing you can do is fine-tuning.
So I think RAG is one way to get information into your model.[00:44:23] Speaker: Fine-tuning is another way to get information into your model. So that might be another way of continuously improving: you identify when you did a good job, and then just add all of the good examples into your model.[00:44:34] Speaker 6: Yeah. So, you know how Voyager tries to write code into a skill library, and then reuses the skill library, right?[00:44:40] Speaker 6: So it improves in the sense that it just builds up the skill library over time.[00:44:44] Speaker: Yep.[00:44:44] Speaker 6: One thing I was thinking about: there's this idea from Devin, your arch nemesis, of playbooks. I don't know if you've seen them.[00:44:52] Speaker: Yeah, I mean, we're calling them workflows, but they're simpler.[00:44:55] Speaker 6: Yeah, so basically, once a workflow works, you can kind of [00:45:00] persist it as a skill library, right? I feel like that's some in-between. Like you said, it's hard to do RAG between language and code, but I feel like that is RAG for: I've done this before, last time I did it, this worked.[00:45:14] Speaker 6: So I'm just going to shortcut all the stuff that failed before.[00:45:18] Speaker: Yeah, I totally think it's possible. It's just, you know, not trivial at the same time. I'll explain the two curves. So basically, the baseline is just an agent that does it from scratch every time. And this curve up here is agent workflow memory, where it's adding the successful experiences back into the prompt.[00:45:39] Speaker: Why is the baseline improving? The reason is just that it failed on the first few examples, and it took a little bit of time for the average to catch up. So it's not like the baseline is actually improving.
You could basically view the baseline as constant, and then this one is improving.[00:45:56] Speaker: Basically you can see it's continuing to go [00:46:00] up.[00:46:01] Speaker 8: How do you think we're going to solve the authentication problem for agents right now?[00:46:05] Speaker: When you say authentication, you mean like credentials? Yeah.[00:46:09] Speaker 8: Yeah. Because I've seen a few startup solutions today, but it seems like they're limited in the number of websites or actual authentication methods they're capable of handling today.[00:46:19] Speaker: Yeah. Great question. So my preferred solution to this at the moment is GitHub-style fine-grained authentication tokens. GitHub fine-grained authentication tokens allow you to specify, on a very granular basis: on this repo you have permission to do this, on this repo you have permission to do that.[00:46:41] Speaker: You can also prevent people from pushing to the main branch unless they get approved. You can do all of these other things. And I think these were all developed for human developers. Or rather, the branch protection rules were developed for human developers; the fine-grained authentication tokens were developed for GitHub apps.[00:46:56] Speaker: I think for GitHub, maybe [00:47:00] just pushing this a little bit further is the way to do it. Other things are totally not prepared to give that sort of fine-grained control. Most APIs don't have something like a fine-grained authentication token. And that goes into my comment that we're going to need to prepare the world for agents, I think.[00:47:17] Speaker: But I think the GitHub authentication tokens are a good template for how you could start doing that, maybe. But yeah, I don't have an answer.[00:47:25] Speaker 8: I'll let you know if I find one.[00:47:26] Speaker: Okay.
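A minimal sketch of what per-repo, fine-grained permissions for an agent might look like. The policy shape and action names are invented for illustration and do not mirror GitHub's actual token scopes:

```python
# An allow-list policy: each repo maps to the set of actions the agent may
# perform there. Repos and action names are hypothetical examples.
POLICY = {
    "org/website": {"read", "open_pr"},
    "org/infra": {"read"},
}

def allowed(repo: str, action: str) -> bool:
    """Permit an agent action only if the policy explicitly grants it."""
    return action in POLICY.get(repo, set())
```

Combined with branch protection (no direct pushes to main without approval), this gives the containment the speaker describes: the agent can propose changes anywhere it can read, but cannot land them unreviewed.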
Yeah.[00:47:31] Live Demonstration and Closing Remarks[00:47:31] Speaker: I'm going to finish up. Let me just see.[00:47:37] Speaker: Okay. So this one did write a script. I'm not going to actually read it for you. And then the other one, let's see.[00:47:51] Speaker: Yeah. So it sent a PR. Sorry, what is the PR URL?[00:48:00][00:48:02] Speaker: So I don't know if this... sorry, that's taking way longer than it should. Okay, cool. Yeah. So this one sent a PR. I'll tell you later if this actually succeeded. Oh, no, it's deployed on Vercel, so I can actually show you. Let me try this real quick. Sorry, I know I don't have time.[00:48:24] Speaker: Yeah, there you go. I have pie charts now. It's so fun to play with these things, because you can just do that while I'm giving a talk and things like that. So, yeah, thanks. Get full access to Latent Space at www.latent.space/subscribe
In today's podcast, we have two experts speaking about the DLA Internet Bid Board System (DIBBS): Mike and his wife Vanessa. Mike, a pilot who did $700,000 in sales in August and over $5 million in sales for the year, shares his experience. We also have a Q&A session to guide you in looking at sources sought on Sam.gov. Not only that, in this episode I share how you can navigate Sam.gov and download relevant information into a CSV file. So tune in to this episode now to learn more!
In this episode, Andrew talks about the rapid audience growth he's seeing on Bluesky and the challenges he's running into finding Founding Users for MetaMonster. The guys talk about the idea of product market fit as a spectrum and validation as a way to uncover where you might be on that spectrum, but without ever getting to 100% certainty. Then Sean talks through trying to find his next side project idea. Links:Andrew's Twitter: @AndrewAskinsAndrew's website: https://www.andrewaskins.com/MetaMonster: https://metamonster.ai/ChartJuice: https://www.chartjuice.com/Sean's Twitter: @seanqsunMiscreants: http://miscreants.com/CopyWork: https://copy.work/Wordpress to Webflow-ready CSV: https://contentgobl.in/For more information about the podcast, check out https://www.smalleffortspod.com/.Transcript:00:00.01SeanThis is a full Andrew podcast where Sean is a support character today, mainly because I was late before this call00:08.75AndrewUh, I was also kind of late. I was chatting with a friend and it took me a while. Yeah. What's up, man? How are you?00:17.09SeanI'm good. I'm busy. Yeah, I'm busy. A lot of things happening with Miscreants, Q4, trying to get a lot of things wrapped up before the holidays, and just so much inbound.00:31.44SeanSo much inbound.00:31.83AndrewThat's great.00:32.67SeanYeah, it is. It's amazing. You know, partially00:34.65AndrewYou sound exhausted by it.00:36.91SeanI'm so tired. Yeah. Well, partially thanks to you for doing some really good work as well. So I appreciate it.00:43.48AndrewCool. I'm glad it's been helping.00:45.98SeanYeah, absolutely. What's up with you? What's going on?00:49.78AndrewSo two things I want to talk about today, a small one and then a bigger one that I want to work out some thoughts on with you and our 12 listeners.01:01.03SeanYeah.01:02.70AndrewSo the small one real quick: Bluesky is popping off right now. It is wild.
I've had an account for like two weeks now and I'm up to 1,200 followers.01:16.59AndrewStarter kits are this magical growth hack, uh, where01:16.28SeanHeh, yeah01:20.44SeanAre starter kits just lists, but people can auto-follow everyone in the list?01:25.31AndrewBasically, yeah. A starter kit is basically like a Twitter list.01:26.99Seanokay01:30.65AndrewBut in Bluesky, you have three concepts that fulfill part of the responsibility of lists in Twitter. So you have starter kits, which are a way to curate a list of people that you think others should follow, most often around a topic. So like, I'm in a couple of indie hacker, indie founder, bootstrapped founder starter kits. And when you open a starter kit, you can follow individual people or you can just click follow all.02:01.31AndrewAnd so that's how I found people to follow. I'm following like 360 people, almost all from a handful of starter kits. And it's a really great mechanism for people to very quickly build a little network.02:18.09AndrewAnd if you get added to a couple of these, it's an awesome growth mechanism.02:22.54SeanHell02:22.64AndrewSo I'm up to 1,200 followers very, very quickly, which is cool.02:28.21Seanyeah.02:29.29AndrewAnd we'll see how that correlates to engagement. I wasn't seeing much engagement for the first maybe 400 or 500 followers, but now I feel like I'm starting to see some pretty solid engagement.02:41.50AndrewI think engagement is naturally going to lag behind followers on the platform while people are still building a habit of checking and using Bluesky, because a lot of people are switching over right now, or signing up and trying it for the first time, and so they're not going to have that usage habit right away. But I'm now seeing more engagement there than I am on tweets, and I have like 2,700 Twitter followers.03:09.80AndrewSo there's starter kits, then there's lists.
They do have lists that work exactly like Twitter lists, which are a non-algorithmic way to curate a list of people whose content you want to view without following them. And then they have feeds, which are custom algorithms that developers can write and publish.03:31.61AndrewSo there's like 50,000 feeds. So there's essentially 50,000 different algorithms you can choose to subscribe to and follow. There's a default discover algorithm. There's a default following algorithm, where it's just everyone you follow.03:46.88AndrewThen you can subscribe to lots of different feeds and customize your algorithm to work the way you want it to, which is a really cool idea.03:59.56AndrewYeah.03:57.79SeanThat is a really cool idea. I wonder if there's a secret incentive. You know how places where you can upload plugins and stuff will pay out a certain amount if yours is used a lot?04:13.36AndrewMaybe. Yeah. I don't think they've started doing revenue sharing on Bluesky yet. And they very openly said they're going to try to stay away from advertising and sell premium features, which is interesting. Also, the Bluesky team is only 20 people. There are 20 people on the team.04:34.05SeanSo you're saying Musk didn't fire enough people when he joined Twitter. That's what you're telling me.04:39.64AndrewI'm saying it's really cool what they're doing. And they're being very open, chatting with and interacting with the community. Overall, the vibes are just... I think because it's new, this won't last forever, but because it's new, people are looking for people to follow. They're looking for content to engage with. They're just more open and curious than on platforms where people have more set patterns of behavior that they fall into.05:11.07AndrewSo the vibes are really good right ...
Jem and Justin cover a lot in this episode! Justin's been working on Zapier SMS magic for show notes, while Jem dives into pricing blocks for the upcoming ThreadBoard launch. They also chat about using Easy CSV Editor and dynamic HTML configurators. Will Justin fire Jem as Jem tunes out the BOM? They also discuss after-hours work and gift-giving, with Justin making custom gift boxes for a special carbide gift. They wrap up by comparing web chat vs. text chat, exploring custom pop-ups, and finishing their thoughts on Extreme Ownership.Watch on YoutubeDISCUSSED:✍️ Send Comments on EpisodeJustin made Zapier SMS magic for show notesCSV - What use this junk for!? Easy CSV EditorThreadBoard Launch Product launch accountability ThreadBoard ꘎Making pricing blocks in Rhino for ThreadBoard and more productsIs Nack back?Making HTML configurators - dynamic wall sizeI love Cody for VS CodeWill Justin fire me!?!?!? Jem has stopped listening to the BOMHow much after hours work do you do? ꘎Giftgiving - Justin making gift boxes for that special carbide giftPodium SMSWeb chat vs text chat - direct contact experimentCustom pop up for chat boxLabels, CSV, data merging with Dymo app - PDX CNC VideoDebrief field ꘎Product revision trackingFinishing up: Extreme OwnershipLeading up the chain---Profit First PlaylistClassic Episodes Playlist---SUPPORT THE SHOWBecome a Patreon - Get the Secret ShowReview on Apple Podcast Share with a FriendDiscuss on Show SubredditShow InfoShow WebsiteContact Jem & JustinInstagram | Tiktok | Facebook | YoutubePlease note: Show notes contains affiliate links.HOSTSJem FreemanCastlemaine, Victoria, AustraliaLike Butter |
This show has been flagged as Explicit by the host. Introduction Hosts: MrX Dave Morriss We recorded this on Saturday September 14th 2024. This time we were at Swanston Farm, a place we had previously visited for lunch in March 2024. After lunch we adjourned to Dave's car (Studio N) in the car park, and recorded a chat. The details of why it is Studio N instead of Studio C are mentioned in the chat itself! Preparing this show has taken longer than usual this time - apologies! Topics discussed Studio change: Sadly, since the last recording Studio C (Dave's 10-year old Citroën C4 Picasso) self-destructed. It was a diesel car and one of the fuel injectors failed and destroyed the engine management system as it died. It wasn't worth repairing! The replacement is Studio N, a Nissan Leaf, which is an EV (electric vehicle). The price of nearly new EV cars is fairly good in the UK at this time in 2024, so it seemed like a good opportunity to get one. Learning to own and drive an EV can be challenging to some extent: "Range anxiety" and access to charging stations Regenerative braking Fast (DC) charging on the road is relatively expensive (£0.79 per kWh), but is convenient Ideally, a home (AC) charger is required. It will be slower (7 kW) but will be cheaper with a night tariff (£0.085 per kWh versus £0.25 per kWh normal rate) There is potential, with solar panels and a battery, to use free electricity to charge an EV at home MrX might like to move to an EV in the future YouTube channels: Dave is subscribed to a channel called "The Post Apocalyptic Inventor (TPAI)" and recently shared one of the latest videos with MrX. The channel owner collects discarded items from scrapyards in Germany, or buys old bits of equipment, and gets them working again. Milling Machine Adventure! Bring her Home! / Gantry Build I built a CNC Plasma Cutting Table from Scrap! Databases: MrX used dBase on DOS in the past, and received some training in databases.
In 2017 he obtained a large csv (comma-separated values) file from the OFCOM (Office of Communications, UK) website containing their Wireless Legacy Register, which contains licensees and frequencies with longitude and latitude values. A means of interrogating this file was sought, having found that spreadsheets were not really very good at handling files of this size (around 200,000 records). MrX used the xsv tool, which was covered in shows hpr2698 and hpr2752 by Mr. Young. It allows a CSV file to be interrogated in quite a lot of detail from the command line. However, with a file of this size it was still quite slow. In a discussion with Dave the subject of the SQLite database came up. Using the SQLite Browser it was simple to load this CSV file into a database and gain rapid access to its contents. SQLite databases may also be queried through a command-line interface which can also be run on a Raspberry Pi, phones, tablets and on a ChromeBook. The textimg tool: This is a command to convert from colored text (ANSI or 256) to an image. Dave generates coloured text from his meal database (HPR show hpr3386 :: What's for dinner?, this being a later enhancement), then captures the output and sends it to a Telegram channel shared with his family. Dave also exchanges weather data obtained from the site wttr.in with Archer72 on Matrix. This is a useful tool for generating images from text, including any text colours. It can be installed from the GitHub copy, and maybe from some package repositories. Using coloured text in BASH (Dave responding to MrX): I have used a function to define variables with colour names: Call a function define_colours which defines (and exports) variables called red, green, etc. Using red=$(tput setaf 1); export red I use the colours in two ways: Method 1: use these names in echo "${red}Red text${reset}" Method 2: use another function coloured which takes two arguments, a colour name (as a string) and a message. 
The script encloses the message argument in a colour variable and a reset. The colour name argument is used via variable indirection to turn red into the contents of the variable $red. This probably needs a show to explain things fully. Terminal multiplexers: Dave and MrX use GNU screen. Both recognise that the alternative tmux might be better to use in terms of features, but are reluctant to learn a new interface! Dave has noticed a new open-source alternative called zellij but has not yet used it. Variable weather: Dealing with hot weather: YouTube, Techmoan channel PERSONAL AIRCON - Ranvoo Aice Lite Review MrX had recently had a holiday in the Lake District where the weather was good. In Scotland the weather has been wet and windy in the same period. Spectrum24, OggCamp: MrX is attending his first OggCamp in Manchester. Dave will be attending too, as will Ken. HPR has a table/booth at OggCamp. Ken was recently at Spectrum24, an amateur radio conference in Paris. Meshtastic: an open source, off-grid, decentralized mesh network built to run on affordable, low-power devices Old inkjet printers: MrX has an Epson R300 printer where the black ink seems to have dried up. Dave has an old HP Inkjet with the same type of problem. This printer has a scanner and FAX capability. An HPR show was done in 2015 describing how it was set up to use a Raspberry Pi to make it available on the local network. Propelling or mechanical pencils: Dave had a Pentel GraphGear 1000 propelling (aka mechanical) pencil which was mentioned on HPR show 3197. This was dropped onto concrete, and didn't appear damaged at the time, but it apparently received internal damage and eventually fell apart.
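The CSV-to-SQLite import described earlier (done with the SQLite Browser) can also be scripted with Python's standard library alone. The column names in the usage example are made up, not OFCOM's actual fields, and every column is loaded as TEXT for simplicity:

```python
import csv
import sqlite3

def load_csv_to_sqlite(csv_path: str, con: sqlite3.Connection, table: str) -> int:
    """Create a table from the CSV header row and insert every record.

    Returns the number of rows loaded.
    """
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        # One TEXT column per CSV header field.
        cols = ", ".join(f'"{c}" TEXT' for c in header)
        con.execute(f'CREATE TABLE "{table}" ({cols})')
        marks = ", ".join("?" for _ in header)
        count = 0
        for row in reader:
            con.execute(f'INSERT INTO "{table}" VALUES ({marks})', row)
            count += 1
    con.commit()
    return count
```

Once loaded, a 200,000-record file can be queried with ordinary SQL (SELECT ... WHERE ...), which is much faster than re-scanning the CSV for every question, the same speed-up MrX found over xsv.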
Links Electric cars: EV (electric vehicle) Regenerative braking Databases SQLite: SQLite SQLite Browser An Easy Way to Master SQLite Fast Open source SQLite Studio available for Linux SQLiteStudio SQL: Origins: The Birth of SQL & the Relational Database Intricacies: MySQL JOIN Types Poster (Steve Stedman) Design: How to Fake a Database Design - Curtis Poe (Ovid) The textimg tool: GitHub repository: textimg zellij: Website: zellij Github repository: zellij Quote from the repo: Zellij is a workspace aimed at developers, ops-oriented people and anyone who loves the terminal. Similar programs are sometimes called "Terminal Multiplexers". Provide feedback on this episode.
In this episode, Sean talks about getting himself out of day-to-day client work at Miscreants, and Andrew provides an update on his slow progress with customer acquisition for MetaMonster. Andrew shares an idea for a side project and they talk about Dharmesh buying the chat.com domain and "flipping" it to OpenAI.Links:Andrew's Twitter: @AndrewAskinsAndrew's website: https://www.andrewaskins.com/MetaMonster: https://metamonster.ai/ChartJuice: https://www.chartjuice.com/Sean's Twitter: @seanqsunMiscreants: http://miscreants.com/CopyWork: https://copy.work/Wordpress to Webflow-ready CSV: https://contentgobl.in/For more information about the podcast, check out https://www.smalleffortspod.com/.Transcript:00:00.98SeanAll right, what is this thing, Andrew? What are you gonna tell me? It's been 30 minutes since you told me we're changing what we're talking about today. Now you've been, okay, go.00:12.23AndrewLet me just play this for you and I think it'll all make sense.00:15.25SeanOkay.00:21.25Andrewand00:24.28Andrewhoping you can so into me baby00:28.45AndrewDo you know what this is yet?00:33.18SeanYes. I know what this is. I know what this is. This is the... I feel bad for all Asian guys in SF. I'll never outdo you, you tech bros. It's fucking over.00:51.69AndrewWait, wait! graves keepki godam oh01:02.88AndrewI never thought I would hear Mark Zuckerberg with T-Pain singing sweat, drip down my balls.01:04.84SeanZ-Pain. What do you want to do, you want to dissect the lyrics?01:11.18AndrewOh, ski, ski motherfucker. Oh, ski, ski. God damn.01:17.79AndrewOh my God. I found this and I was like, holy shit. Clear everything. This is all I want to talk about.01:32.86Seanhoney01:32.98AndrewI think it's just fucking hilarious. And I can't decide if I love it or hate it. I can't decide if it's the best thing ever or the most cringe thing ever. And I think I love it.
And I think I love it in part because it is kind of cringe and he just doesn't give a fuck.01:52.89Seanfor what it's worth though maybe this was the light at the this was the this was the one piece of uplifting news in recent times my executor the one who keeps us going02:01.14AndrewRight. Oh my God.02:08.17AndrewYeah. So that, that is the, uh, Mark Zuckerberg collaboration with T-Pain. Uh, their group is unofficially or officially, I guess, Z-Pain. Um, they have one single, which is an acoustic version of Gitlo, which Mark recorded for his wife Priscilla because apparently it's the song they listened to every anniversary.02:28.74SeanYep, and then you played it for her, and...02:32.21AndrewIs there a video of him playing it for her?02:33.61SeanYeah, yeah, yeah, yeah.02:34.49AndrewOh my God, I gotta go watch that. i I mean, he's found something that works. Being a wife guy is working for his brand right now and he is leaning all all the way in.02:47.07Andrewand02:47.21SeanHey, you know what they say about wife guys, though?02:50.47AndrewYeah.02:50.85Seanon the internet. okay I just hope he doesn't.02:53.35AndrewYep, it's gonna blow up at some point.02:57.23SeanI don't think he was on the ditty list, so we'll be okay.02:57.61AndrewBut...03:00.01Seanwe might or Or the FC list, so we might be okay.03:04.72AndrewFor what it's worth, I love T-Pain so much.03:08.46SeanYeah, he's great. He's great.03:09.83AndrewT-Pain's Tiny Desk Concert is one of my all-time favorites. It's so good.03:13.97Seani I just love that he is just doing side quests at the moment. Like, the guy's a genius.03:21.04AndrewZuck or T-Pain?03:22.12SeanNo, no, T-Pain. 
Zuck, I guess, is also doing a side quests at the moment, but like...03:23.46AndrewYeah.03:26.12SeanI'll just like I'll watch random youtubers and like sometimes T-Pain will just show up like like that are not related to things related to him you know I'm watching like a car guy youtuber boom T-Pain's winning a drift competition yeah I'm on twitch all of a sudden or I'm not on I'm browsing twitch and all of a sudden T-Pain's a streamer I don't know yeah03:28.43AndrewMm-hmm.03:36.42AndrewMm hmm. Holy shit, that's cool.03:47.23AndrewBy the way, on the Zuck side, I just listened to the acquired episode about Metta, and it made me realize that like. I feel like public perception of Metta is changing a little bit like I feel like Elon Musk buying Twitter.04:01.86SeanYeah.04:04.11Andrewhas kind of has taken a lot of the heat off of meta.04:06.90SeanYeah.04:07.45AndrewAnd so now everyone's like, Elon's elon's the evil one.04:09.12SeanHe's almost like, it could be so much worse.04:13.00AndrewAnd yeah, Elon's like, you guys think 2016 Facebook was bad?04:13.84SeanYeah.04:19.71AndrewBro, hold my drink.04:21.08SeanListen, when you gotta be the best at everything, you know?04:26.64AndrewIncluding being a massive fucking troll.04:28.86SeanYeah.04:30.46AndrewYeah, but it made me realize that like I don't hate meta as much as I used to. And I i don't know if that's like.04:35.61...
In this episode, Andrew and Sean talk about Halloween plans and Sean's decision to kill Stackwise. Meanwhile, Sean has launched two new free tools in the last week! The guys talk about how he did it and what his goals are for both of them. Then they spend the rest of the episode brainstorming marketing ideas for MetaMonster.

Links:
Andrew's Twitter: @AndrewAskins
Andrew's website: https://www.andrewaskins.com/
MetaMonster: https://metamonster.ai/
ChartJuice: https://www.chartjuice.com/
Sean's Twitter: @seanqsun
Miscreants: http://miscreants.com/
CopyWork: https://copy.work/
Wordpress to Webflow-ready CSV: https://contentgobl.in/

For more information about the podcast, check out https://www.smalleffortspod.com/.

Transcript:
00:00.35 Sean: Cool. Happy Halloween.
00:02.59 Andrew: Happy Halloween.
00:04.93 Sean: Are you going to any parties?
00:05.11 Andrew: o
00:08.27 Sean: Are you doing anything?
00:09.37 Andrew: Yeah, we're so normally we go to Chicago for Halloween hang out.
00:14.09 Sean: Because it's extra scary there. Sorry, go ahead.
00:16.45 Andrew: What?
00:17.99 Sean: It's just so scary in Chicago. and i don't know that's not why that's not why i was That's not why I was going with that.
00:20.19 Andrew: Wow, that's racist, John. Fuck you, dude.
00:26.99 Sean: Jeez, I'm kidding.
00:29.92 Andrew: fucking New Yorkers hating on Chicago. Gross.
00:33.16 Sean: That's not racist. I'm just elitist. That's different.
00:38.29 Sean: Yeah.
00:39.11 Andrew: normally we go to Chicago because, uh, a bunch of Maddie's best friends from high school all live there and they throw a Halloween party every year. So normally early we go and we go out with them and we go to like one bar and then we go get, uh, like shitty Greek food and then we go home.
00:46.34 Sean: yeah
00:56.68 Sean: Nice.
00:57.84 Andrew: but this year we decided to stay in Detroit, because we love Halloween and we love decorating for Halloween and. We've never gotten to be in Detroit for Halloween, we really want to pass out candy this year so tomorrow, which will probably be today by the time I get this published.
01:16.54 Andrew: We're gonna hang out, pass out candy, go drink some beers with our neighbors. And then Saturday, one of our best friends is throwing a haunted carnival party.
01:27.46 Andrew: It's her second year in a row throwing this like haunted carnival themed Halloween party.
01:30.91 Sean: Sick.
01:32.68 Andrew: So we're gonna go to the haunted carnival party and Maddie's dressing up as a ringmaster and I'm dressing up dressing up as a lion.
01:39.02 Sean: Nice.
01:39.91 Andrew: She's like a ringmaster lion tamer and then I'm a lion.
01:42.17 Sean: Yeah, hell yeah.
01:43.56 Andrew: So it's gonna be fun.
01:44.70 Sean: Sick.
01:46.43 Andrew: Yeah, what about you?
01:48.22 Sean: I'm debating on whether or not there's this like Instagram, ah like ah ah actually been sorry. I don't even know what they are. There's just like some cool internet website thing called shell tech that they're like a band or or producer do, or I don't, I don't fucking know.
02:02.36 Sean: They just make music. It's apparently dropped the new album.
02:03.90 Andrew: Fun.
02:05.31 Sean: They have like an album party tomorrow and I think it's for, so maybe that, or maybe I'll be in my room jamming away.
02:08.48 Andrew: pun
02:16.05 Sean: on whether or not stackwise has a future because i am uh-huh yeah
02:19.88 Andrew: wait Wait, wait, pause before we get too far into that. Can you just tell me a little bit about like what Halloween is like in New York? I'm so curious.
02:32.57 Andrew: Like Yeah.
02:33.00 Sean: what is how yeah yeah yeah yeah it's like uh well in queens it's super cute it's very residential you know so everyone like especially in a lot of kids in the neighborhood they walk around and they get candy and there's like really nice areas in queens that people will go to and there's like halloween decorations and stuff yeah exactly exactly the full skies can i think it's a trap i think that's like a like
02:52.01 Andrew: Got to go get the full size candy bars. I went to Costco today and I was this close to buying the full size bars, but yeah, I'm not actually sure it's better.
03:03.66 Andrew: I'm not sure giving out full size bars is better. Like my plan is to give out a big old handful of candy and, and then you get a mix of candy and you get to like spread it out a little.
03:05.94 Sean: Uh-huh. Yeah. Yeah.
03:14.05 Andrew: Whereas if I give you a full size bar, you're going to eat it super fast.
03:18.14 Sean: I was just gonna say, like, the expectation of being in a house that gives full-sized candy bars, that's dangerous.
03:21.10 Andrew: Oh yeah. Once you go, once you go full size, you can't go back.
03:24.58 Sean: Yeah. Exactly. You can't, you can't roll back the whatever.
03:27.20 Andrew: Yeah.
03:30.16 Sean: Yeah. But like, yeah.
03:34.39 Andrew: It was 20 bucks for 30 full size bars or 20 bucks for like a five pound bag of candy.
03:40.29 Sean: Yeah, see?
03:40.72 Andrew: And so I was like.
03:42.09 Sean: Did you see that meme of, uh, one of the fuh- uh, like, fuh in a Halloween bucket?
03:47.49 Andrew: Yes, that was so good.
03:47.91 Sean: And it's like, one ...
Liam Eagen (Alpen Labs), Robin Linus (Zero Sync, BitVM) & Jonas Nick (Blockstream, Taproot) talk about their groundbreaking research paper about Shielded Client-Side Validation – a new way to bring privacy & scalability to Bitcoin.

Time stamps:
Introduction (00:00:51)
What is Client-Side Validation? (00:02:09)
Trade-offs in Client-Side Validation (00:04:20)
Comparison with Existing Protocols (00:05:18)
On-chain Throughput and Privacy (00:09:21)
Connecting Bitcoin and Shielded CSV (00:09:55)
User Interaction with the Protocol (00:11:07)
Challenges with the Lightning Network (00:12:22)
Atomic Swaps and Compatibility (00:14:56)
Shielded CSV vs Monero & Zcash (00:16:12)
Discussion on STARKs (00:19:28)
OP_CAT and Covenants (00:21:46)
OP_CAT and Functional Encryption (00:24:03)
Exploration of BitcoinOS (00:25:21)
Potential of BitVM (00:26:11)
Bitcoin as a Currency (00:28:21)
Funding for CSV Projects (00:30:40)
Taint Concept in Shielded Transactions (00:31:14)
Adoption and Privacy Protocols (00:34:08)
Transaction Fees in Shielded Transactions (00:35:08)
Throughput and Base Layer Efficiency (00:36:04)
Node Behavior During Transactions (00:38:12)
Comparison with MimbleWimble (00:39:41)
CryptoSteel's New Device (00:40:29)
Drivechains (00:41:04) – discussion about CryptoSteel backups and a sponsor, Layer Two Labs, promoting drivechains
Concerns Over Soft Forks (00:43:51)
Rough Consensus Definition (00:45:04)
Challenges of Soft Fork Implementation (00:46:15)
Balancing Stability and Innovation (00:47:29)
Miner Incentives and Fee Structures (00:48:38)
Future of Post-Quantum Proposals (00:49:44)
Increasing Block Size for CSV (00:50:00)
Ideal Block Size Debate (00:53:08)
Quantum Computing Threat (00:59:02)
Potential Consequences of Quantum Attacks (01:00:24)
Post-Quantum Solutions (01:02:23)
Quantum Computing Timeline (01:02:50)
Complexity of Quantum Computers (01:03:31)
Quantum Records and Limitations (01:05:44)
Chat Question on Shielding Protocol (01:06:12)
Hodling.ch (01:06:45)
Self-Custody Security (01:09:15)
Threat Models in Security (01:09:58)
Hardware Wallet Cautionary Tale (01:10:33)
Conference Discussions (01:12:05)
Shielded CSV Funding and Development Timeline (01:13:58)
Prototype Development Outlook (01:15:29)
Concerns Over Adoption (01:17:31)
Scalability vs. Privacy Debate (01:19:34)
Tether on Shielded CSV? (01:22:04)
Revenue Models Discussion (01:22:22)
Transaction Fees and Profitability (01:23:25)
Publisher Fees in CSV Protocol (01:24:16)
Competition and Business Models (01:25:40)
Soft Fork Proposals for Improvement (01:26:07)
Script Restoration and Arithmetic Improvements (01:27:30)
Future of Shielded Client-Side Validation (01:37:23)
Keeping Up with Developments (01:38:44)
Self-Promotion and Team Dynamics (01:41:05)
Proof of Stake? (01:42:51)
Rare Pepes in Shielded CSV (01:43:56)
Acknowledgment of Guests (01:46:18)
Call to Action: Shield Emoji (01:47:57)
Importance of Learning Bitcoin (01:48:07)
Resources for Learning Bitcoin (01:48:31)
Commentary on Peter Wuille (01:49:26)
Block Size Debate (01:50:02)
Optimizing Bitcoin for Global Use (01:51:05)
Brynne Tillman and Stan Robinson Jr. dive into the underutilized LinkedIn feature of exporting connections. Discover the step-by-step process of downloading your LinkedIn contacts into a CSV file and learn about categorizing connections to identify potential clients, prospects, and referral partners. From sending personalized video messages and voice notes to utilizing polls, they emphasize the importance of re-engaging overlooked connections. Stan also introduces an interesting twist by leveraging ChatGPT to analyze LinkedIn connection data. Tune in for actionable strategies to make the most of your LinkedIn network.
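The categorizing step described in the episode can be prototyped in a few lines of Python. This is a minimal sketch, not the hosts' actual process: the column names follow LinkedIn's Connections.csv export, but the sample rows and the keyword-to-category rules are made up for illustration — tune them to your own market.

```python
import csv
import io

# Hypothetical sample standing in for a real LinkedIn Connections.csv export.
SAMPLE = """First Name,Last Name,Company,Position
Ada,Lovelace,Acme Corp,VP of Sales
Alan,Turing,Beta LLC,Founder
Grace,Hopper,Gamma Inc,Marketing Consultant
"""

# Illustrative keyword -> category rules; adjust to your target market.
RULES = {
    "prospect": ["vp", "director", "head of"],
    "referral partner": ["consultant", "coach"],
    "peer": ["founder", "owner"],
}

def categorize(position):
    """Return the first category whose keyword appears in the job title."""
    title = position.lower()
    for category, keywords in RULES.items():
        if any(k in title for k in keywords):
            return category
    return "uncategorized"

def bucket_connections(csv_text):
    """Read an exported connections CSV and tag each row with a category."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["First Name"], row["Last Name"], categorize(row["Position"]))
            for row in reader]

if __name__ == "__main__":
    for first, last, cat in bucket_connections(SAMPLE):
        print(f"{first} {last}: {cat}")
```

From there, each bucket can drive a different outreach track — video messages for prospects, voice notes for referral partners — or the tagged CSV can be handed to ChatGPT for the kind of analysis Stan describes.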
4 Year Anniversary: In this week's episode, I wanted to celebrate The 20% Podcast turning 4! In order to do it, I decided to reflect on the past 4 years of episodes, and used AI to help summarize data. I converted my podcast RSS feed into a .CSV file, which was then used to create my own GPT. This historical context was used to write this week's episode lessons: "Throughout the past four years, The 20% Podcast has hosted a wide range of professionals, from sales experts and marketers to entrepreneurs and thought leaders. This broad guest pool has provided rich discussions on topics including sales, personal development, career transitions, and mental health."

Some of the recurring themes include:
Sales Strategies
Personal Development
Mental Health & Work-Life Balance
Career Advice

2020/2021: Building the Foundation and Expanding Professional Skills
2022: Deepening Expertise
2023: Mastering Leadership & Storytelling
2024: Adapting to Change

Please enjoy this week's episode!
________________________
I am now in the early stages of writing my first book! It will cover my journey into sales, the lessons learned, and include stories and advice from top sales professionals around the world. I'm excited to share these interviews and bring you along on this journey!
Like the show? Subscribe to the email: Subscribe Here
I want your feedback! Reach out at 20percentpodcastquestions@gmail.com or connect with me on LinkedIn.
If you know anyone who would benefit from this show, please share it! If you have suggestions for guests, let me know!
Enjoy the show!
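The RSS-to-CSV conversion mentioned above can be done with the Python standard library alone. This is a generic sketch, not the host's actual tooling — the embedded feed snippet and the chosen columns (title, pubDate) are illustrative assumptions:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Illustrative stand-in for a real podcast RSS feed.
FEED = """<rss version="2.0"><channel>
  <title>Example Podcast</title>
  <item><title>Episode 1</title><pubDate>Mon, 06 Jul 2020 10:00:00 GMT</pubDate></item>
  <item><title>Episode 2</title><pubDate>Mon, 13 Jul 2020 10:00:00 GMT</pubDate></item>
</channel></rss>"""

def feed_to_csv(feed_xml):
    """Flatten every <item> in an RSS feed into CSV rows of (title, pubDate)."""
    root = ET.fromstring(feed_xml)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["title", "pubDate"])
    for item in root.iter("item"):
        writer.writerow([item.findtext("title", ""), item.findtext("pubDate", "")])
    return out.getvalue()

if __name__ == "__main__":
    print(feed_to_csv(FEED))
```

The resulting file is exactly the kind of flat, dated episode list that a custom GPT can ingest for the year-by-year reflection described in the episode.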
The latest Stack Overflow Developer Survey has some concerning results, Joeri Sebrechts helps you do plain vanilla web dev, MIT's "missing semester" course looks pretty amazing, a dive into the fascinating history of CSV & a tool to get request analytics from the nginx access logs.
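The core of the nginx-log-analytics idea can be approximated in a few lines: match each request line against a pattern for the default "combined" log format and tally hits per path. This is a generic sketch rather than the specific tool mentioned in the episode, and the sample log lines are invented:

```python
import re
from collections import Counter

# Made-up lines in nginx's default "combined" access log format.
LOG = """\
203.0.113.1 - - [10/Oct/2024:13:55:36 +0000] "GET /blog HTTP/1.1" 200 512 "-" "curl/8.0"
203.0.113.2 - - [10/Oct/2024:13:55:37 +0000] "GET /blog HTTP/1.1" 200 512 "-" "Mozilla/5.0"
203.0.113.3 - - [10/Oct/2024:13:55:38 +0000] "GET /feed HTTP/1.1" 304 0 "-" "Mozilla/5.0"
"""

# Capture method, path, and status code from each request line.
LINE_RE = re.compile(r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3})')

def top_paths(log_text):
    """Count requests per path, most requested first."""
    counts = Counter()
    for line in log_text.splitlines():
        m = LINE_RE.search(line)
        if m:
            counts[m.group("path")] += 1
    return counts.most_common()

if __name__ == "__main__":
    for path, n in top_paths(LOG):
        print(f"{n:>5}  {path}")
```

Extending the same regex with the status and user-agent fields gets you surprisingly close to basic request analytics without touching a database.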
What if you could streamline your procurement process and make your operations more efficient with just a few clicks? Discover the game-changing features now available on our online experience, including an enhanced quick order functionality that allows you to export product lists to a CSV file, with real-time pricing and availability tailored to your account. The platform now supports direct input and file uploads of up to 50,000 items, making large orders more manageable than ever. You'll also learn how to incorporate custom part numbers and efficiently manage pricing records, making your procurement process smoother and more efficient.Stay ahead with the latest updates aimed at improving your customer experience and operational efficiency. Learn how proactive supplier date changes and estimated ship and delivery dates can optimize your planning and resource allocation. We've also introduced order confirmation emails and PunchOut systems to ensure accuracy and convenience. Plus, find out how you can download proof of delivery and original invoices directly from your online account. Don't miss the newly simplified return process that makes handling returns faster and easier. Be sure to check out our YouTube channel for a wealth of resources, and stay connected with our ever-growing community!Remember to keep asking why...Digitalization Resources:New Features ArticleVideo Explanation of Registering for an AccountRegister for an AccountOther Resources to help with your journey:Installed Asset Analysis SupportEECO Smart Manufacturing GuideSystem Planning SupportSchedule your Visit to a Lab in North or South CarolinaSchedule your Visit to a Lab in VirginiaSubmit your questions and feedback to: podcast@eecoaskwhy.comFollow EECO on LinkedInHost: Chris Grainger