How do we heal, grow, and change as apprentices of Jesus? Many of us have subscribed to the traditional Christian approach of "trying harder" to "believe and do what's right," only to find ourselves stuck and discouraged. Thankfully, the vision Jesus casts for transformation shows us a different path forward.

Join us for this episode of Soul Talks as Bill and Kristi share how Dallas Willard's mantra, "Don't try — train," revolutionized their approach to spiritual formation. You'll burn with a desire to become more loving and healthy, and get equipped with a practical tool to help you grow in Christlikeness one area at a time.

If you want to go deeper into the insights we gained from Dallas Willard, we invite you to join us on a retreat or train to become a spiritual director with Soul Shepherding. You can learn more by following the links below.

Resources for this Episode:
Attend a Soul Shepherding Retreat
Earn a Certificate in Spiritual Direction
Your Best Life in Jesus' Easy Yoke: Rhythms of Grace to De-Stress and Live Empowered
Donate to Support Soul Shepherding and Soul Talks
stopGOstop » sound collage – field recording – sound art – john wanzel
Vim, Vigor and Vitality (or Arrangements Are in Hand or is it Self-Hypnosis) drifts between assertion and reassurance. A slow pulse moves underneath the piece, joined by the low, sustained presence of a cello. Voices surface in fragments, pause, and …
There is a theory that overuse of LLMs in general, and vibe coding in particular, is making programmers dumber. On the other hand, this hot take sounds a lot like Vim users clucking at IDE users. So where is the truth? Thanks to everyone who listens to us. We look forward to your comments.

Music from the episode:
- https://artists.landr.com/056870627229
- https://t.me/angry_programmer_screams

Full playlist for the course "Kubernetes for DotNet developers": https://www.youtube.com/playlist?list=PLbxr_aGL4q3SrrmOzzdBBsdeQ0YVR3Fc7
Free open course "Rust for DotNet developers": https://www.youtube.com/playlist?list=PLbxr_aGL4q3S2iE00WFPNTzKAARURZW1Z

Shownotes:
00:00:00 Introduction
00:06:15 What makes people dumber?
00:09:00 If the LLM didn't work out for you, the problem is you
00:15:35 Is it bad to generate tests with an LLM?
00:20:00 Terminal vibe coding
00:29:00 Finding APIs through an LLM
00:34:30 The human designs, the LLM writes the code
00:42:40 The motivation catastrophe
00:46:15 The "gypsy hypnosis" effect
00:51:20 Do we get dumber from searching through an LLM?
01:00:00 The LLM gets us hooked

Links:
- https://www.youtube.com/watch?v=COovfRQ9hRM : Our future
- https://www.linkedin.com/posts/nityan_we-all-know-vibe-coding-has-technical-debt-activity-7339687364216193025-nY2E : A study on getting dumber from AI
- https://codeua.com/ai-coding-tools-can-reduce-productivity-study-results/ : AI Coding Tools Can Reduce Productivity: Study Results

Video: https://youtube.com/live/HU7m31-NZmM
Listen to all episodes: https://dotnetmore.mave.digital
YouTube: https://www.youtube.com/playlist?list=PLbxr_aGL4q3R6kfpa7Q8biS11T56cNMf5
Twitch: https://www.twitch.tv/dotnetmore
Join the discussion:
- Telegram: https://t.me/dotnetmore_chat
Follow the news:
- Twitter: https://twitter.com/dotnetmore
- Telegram channel: https://t.me/dotnetmore
Copyright: https://creativecommons.org/licenses/by-sa/4.0/
This week on Río de la Vida we present a historic program: a great national debate with representatives from four of the most influential intensives in Spain. Taking part:
This show has been flagged as Clean by the host.

Setting up Linux Mint with Custom LVM and LUKS

Overview

The current Linux Mint installer doesn't support custom partitioning when setting up a new machine with LUKS encryption using LVM. I prefer having a separate partition for my home directory and a backup partition for Timeshift, so that reinstalling or fixing issues won't overwrite my home directory. I found several approaches to achieve this. One method involves setting up partitions first and then using the installer to select them, but this requires extensive post-installation configuration to get boot working with the encrypted drive. I discovered this blog which explains how to repartition your drive after installation. Combined with my guide on setting up hibernation, I created this documentation to help remember how to install a fresh copy of Linux Mint with LVM and LUKS.

Tested on: Linux Mint 22 Cinnamon

Partition Layout

For this guide, I'm working with a 1TB drive that will be split into the following logical volumes:

Root - 100GB (system files and applications)
Swap - 32GB (for hibernation support)
Home - 700GB (user files and documents)
Backup - 100GB (Timeshift snapshots)
Unallocated - ~68GB (reserved for future expansion)

This setup keeps system snapshots and user data separate, making system recovery much easier.

Installation Guide

Step 1: Initial Linux Mint Installation

Start the Linux Mint installation process as normal:

Boot from your Linux Mint installation media.
Follow the installation wizard (language, keyboard layout, etc.).
When you reach the Installation type screen: select "Erase disk and install Linux Mint", click "Advanced features", and enable both options: "Use LVM with the new Linux Mint installation" and "Encrypt the new Linux Mint installation for security". Click Continue.
Enter a strong encryption password when prompted.
Complete the rest of the installation (timezone, user account, etc.).
When installation finishes, do NOT click "Restart Now" - we'll repartition first.

Important: Do NOT reboot after installation completes. We need to repartition before the first boot.

Step 2: Access Root Terminal

After installation finishes, open a terminal and switch to root:

sudo -i

This gives you the administrative privileges needed for disk operations.

Step 3: Check Current Disk Layout

View your current partition structure:

lsblk -f

This displays your filesystem layout. You should see your encrypted volume group (typically vgmint) with a large root partition consuming most of the space.

Step 4: Resize Root Partition

Shrink the root partition from its default size (nearly the full disk) to 100GB:

lvresize -L 100G --resizefs vgmint/root

What this does: -L 100G sets the logical volume size to exactly 100GB, and --resizefs automatically resizes the filesystem to match. This frees up ~900GB for our other partitions.

Step 5: Resize Swap Partition

The default swap is usually small (a few GB). We need to increase it to 32GB for hibernation:

lvresize --verbose -L +32G /dev/mapper/vgmint-swap_1

What this does: -L +32G adds 32GB to the current swap size, and --verbose shows detailed progress information. This ensures enough swap space to hold RAM contents during hibernation.

Note: For hibernation to work, swap should be at least equal to your RAM size. Adjust accordingly.
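Before creating the new volumes in the next steps, it can help to confirm that the resizes actually freed the space you expect. A minimal sketch of that check, assuming the volume group is named vgmint as in this guide (names may differ on your system):

# Show the volume group's total and free space (VFree should be roughly 800GB or more)
vgs vgmint
# List each logical volume and its current size
lvs vgmint
# Double-check the filesystem view
lsblk -f

If vgs shows less free space than the ~800GB needed for the home and backup volumes, revisit the lvresize steps before continuing.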
Step 6: Create Home Partition

Create a new logical volume for your home directory:

lvcreate -L 700G vgmint -n home

What this does: -L 700G creates a 700GB logical volume, vgmint is the volume group name, and -n home names the new volume "home".

Step 7: Create Backup Partition

Create a logical volume for Timeshift backups:

lvcreate -L 100G vgmint -n backup

What this does: creates a dedicated 100GB space for system snapshots, keeps backups separate from user data, and prevents backups from filling up your home partition.

Step 8: Format New Partitions

Format both new partitions with the ext4 filesystem:

mkfs.ext4 /dev/vgmint/backup
mkfs.ext4 /dev/vgmint/home

What this does: creates ext4 filesystems on both logical volumes. ext4 is the standard Linux filesystem, with good performance and reliability.

Step 9: Mount Partitions

Create mount points and mount your partitions:

mkdir /mnt/{root,home}
mount /dev/vgmint/root /mnt/root/
mount /dev/vgmint/home /mnt/home/

What this does: creates temporary directories to access the filesystems, and mounts root and home so we can configure them.

Step 10: Move Home Directory Contents

Move the existing home directory contents from the root partition to the new home partition:

mv /mnt/root/home/* /mnt/home/

What this does: transfers all user files and directories from the old location to the new home partition, preserving your user account settings and any files created during installation. Without this step, your home directory would be empty on first boot.

Step 11: Update fstab

Add the home partition to the system's fstab file so it mounts automatically at boot:

echo "/dev/mapper/vgmint-home /home ext4 defaults 0 2" >> /mnt/root/etc/fstab

What this does: appends a mount entry to /etc/fstab, ensuring the /home partition mounts automatically at startup. The 0 2 values enable filesystem checks during boot.

Step 12: Clean Up and Prepare for Reboot

Unmount the partitions and deactivate the volume group:

umount /mnt/root
umount /mnt/home
swapoff -a
lvchange -an vgmint

What this does: safely unmounts all mounted filesystems, turns off swap, and deactivates the volume group to prevent conflicts, ensuring everything is properly closed before reboot.

Step 13: Reboot

Now you can safely reboot into your new system:

reboot

Enter your LUKS encryption password at boot, then log in normally.

Verification

After rebooting, verify your partition setup:

lsblk -f
df -h

You should see:
Root (/) mounted with ~100GB
Home (/home) mounted with ~700GB
Swap available with 32GB
Backup partition ready for Timeshift configuration

Setting Up Timeshift

To complete your backup solution:

Install Timeshift (if not already installed): sudo apt install timeshift
Launch Timeshift and select RSYNC mode.
Choose the backup partition as your snapshot location.
Configure your backup schedule (daily, weekly, monthly).
Create your first snapshot.

Additional Resources

Original blog post on LVM rearrangement
Setting up hibernation on Linux Mint

Conclusion

This setup gives you the best of both worlds: the security of full-disk encryption with LUKS, and the flexibility of custom LVM partitions. Your home directory and system backups are now isolated, making system recovery and upgrades much safer and more manageable.

Automating Your Linux Mint Setup After a Fresh Install

Setting up a fresh Linux Mint installation can be time-consuming, especially when you want to replicate your perfect development environment.
This guide will show you how to automate the entire process using Ansible and configuration backups, so you can go from a fresh install to a fully configured system in minutes.

Why Automate Your Setup?

Whether you're setting up a new machine, recovering from a system failure, or just want to maintain consistency across multiple computers, automation offers several key benefits:

Time Savings: What normally takes hours can be done in minutes.
Consistency: Identical setup across all your machines.
Documentation: Your setup becomes self-documenting.
Recovery: Quick recovery from system failures.
Reproducibility: Never forget to install that one crucial tool again.

Discovering Your Installed Applications

Before creating your automation setup, you need to identify which applications you've manually installed since the initial OS installation. This helps you build a complete picture of your custom environment.

Finding APT and .deb Packages

To see all manually installed packages (excluding those that came with the OS):

comm -23
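The command above is cut off in the source. For reference, a commonly used full form of this comparison, shown here as an assumption rather than the author's exact command, diffs the manually installed package set against the packages recorded by the installer at initial install time:

# Assumed reconstruction: packages marked as manually installed, minus the initial install set.
# /var/log/installer/initial-status.gz is present on Ubuntu/Mint-family installs.
comm -23 \
  <(apt-mark showmanual | sort -u) \
  <(gzip -dc /var/log/installer/initial-status.gz | sed -n 's/^Package: //p' | sort -u)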
Nathan Sobo has spent nearly two decades pursuing one goal: building an IDE that combines the power of full-featured tools like JetBrains with the responsiveness of lightweight editors like Vim. After hitting the performance ceiling with web-based Atom, he founded Zed and rebuilt from scratch in Rust with GPU-accelerated rendering. Now with 170,000 active developers, Zed is positioned at the intersection of human and AI collaboration. Nathan discusses the Agent Client Protocol that makes Zed "Switzerland" for different AI coding agents, and his vision for fine-grained edit tracking that enables permanent, contextual conversations anchored directly to code—a collaborative layer that asynchronous git-based workflows can't provide. Nathan argues that, despite terminal-based AI coding tools, visual interfaces for code aren't going anywhere, and that source code is a language designed for humans to read, not just for machines to execute. Hosted by Sonya Huang and Pat Grady, Sequoia Capital
Why must one set other things aside in order to pursue one's dream?
Why was Bangladesh's former prime minister sentenced to death, and will it be possible to bring her to face the sentence handed down by the court in Dhaka?
DSO Overflow S5EP5: Saving $20,000 a year by self-hosting a map server, with Vimal Paliwal.

In this episode, Vimal Paliwal talks about how he led a migration project that saved his organisation $20,000 annually. He talks about how he overcame the challenges he faced from compute and storage demands. Vimal discusses how he ensured cost-efficiency and security by implementing a fully serverless architecture using AWS CloudFront, Lambda authorisers, and WAF, integrating robust domain whitelisting and access control. We finish this conversation by reflecting on lessons learned from this project.

Vimal is a part of the AWS Community Builders program, where he actively contributes to knowledge-sharing efforts across the cloud ecosystem by writing on real-world implementations and best practices. In addition, Vimal has spent several years as an AWS Authorized Instructor, during which he trained over 1,000 professionals.

Resources mentioned in this podcast:
Vimal's LinkedIn profile
Vimal's blog post about this project
Vimal's GitHub repo

DSO Overflow is a DevSecOps London Gathering production. Find the audio version on all good podcast sources like Spotify, Apple Podcasts and Buzzsprout.

Your Hosts
Steve Giguere linkedin.com/in/stevegiguere
Glenn Wilson linkedin.com/in/glennwilson
Jessica Cregg linkedin.com/in/jessicacregg
"Kom totaub tias tej neeg no (volunteers) yog cov pab kaws yus tej keeb kwm dhau los, pab yog cov pab tsim lub neej pem suab, thiaj xav kom hwm, qhia thiab muaj feem koom. Vim cov kev ras los ua ib tug pej xeem Australia txhais tsis tau tias yuav ua rau tsis paub tias yus yog leej twg los sis yuav ua rau yus plam yus tej cim thawj. Tsuas yog cov kev pab kom coj tau ntau cov kab lis kev cai thiab ntau tsev neeg los koom peb lub neej uas peb ris txiaj xwb," raws li Teresa Lane uas yog tus haus zos Logan City Council Division 2 hais.
Experts in human rights law are calling on Australia's federal government to create an Australian Human Rights Act, after more than 50 years without one, amid opposition and misinformation circulating about such an act. Why do they want it established?
This show has been flagged as Clean by the host.

Hello, this is your host, Archer72, for another episode of Hacker Public Radio. In this episode, I continue to fall for the AI trap. Here I was, minding my own business, when I was bothered by the Beeper app showing only a generic icon. Now, I'm not saying that Duck.ai is not useful, but be very careful what you ask for. It was probably a combination of the early morning and not reading completely through the AI suggestions, but I ended up losing all icons on the Gnome desktop except for a few like Firefox. I won't share the problematic command so I don't trip up the listener, but it involved updating a desktop database. This in turn left a dash or a blank where the icons should be. If that wasn't bad enough, it was suggested to reset Gnome settings, and nothing was as it seemed before. Things that I had taken for granted were not there. You forget what custom settings you have when mistakes like this are made.

So the short answer is that the icons directory on my Debian system should be located at .local/share/icons. Instead it was in a sub-directory, .local/share/icons/icons. Correcting the directory location solved everything, but I was still left to reset my custom Gnome keybindings.

• Swap Escape and Caps Lock key
I use this because I am a Vim user, and this feels more natural when I need to hit Escape to change modes. In Gnome, the setting is under Gnome Tweaks > Keyboard > Additional Layout Options > Swap Esc and Caps Lock Key. As of this show's release, the current stable version of Debian is Trixie. Gnome Tweaks can be installed with sudo apt install gnome-tweaks on any Debian-based system.

• Compose key and Compose key shortcuts
The Compose key is found at Settings > Keyboard > Compose Key. I selected the Menu key, because it is rarely used and can still be accessed from the track pad.

• Shortcut to open MPV with a clipboard URL from YouTube
This can be found in Settings > Keyboard > View and Customize Shortcuts > Custom Shortcuts: Shift+Ctrl+P. The script is placed in /usr/local/bin/:

#!/bin/bash
## mpv-url
url=$(xsel -o -b)
echo "$url"
mpv "$url"

Now I can get back to what I started in the first place: creating a .desktop file for Beeper. I created a beeper-desktop.desktop file in ~/.local/share/applications with the following contents.

[Desktop Entry]
Name=Beeper Desktop
Exec=/home/mark/AppImages/Beeper-4.1.169.AppImage
Icon=/home/mark/.local/share/icons/beeper.png
Type=Application
Categories=Network;InstantMessaging;
Terminal=false
StartupWMClass=Beeper

The value for the last line of the config file can be found by running:

xprop | grep WM_CLASS
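If you hand-write .desktop files like this, a quick syntax check can save another round of icon debugging. A small sketch, assuming the desktop-file-utils package is installed (it usually is on Debian systems):

# Validate the entry against the Desktop Entry specification
desktop-file-validate ~/.local/share/applications/beeper-desktop.desktop
# Confirm the Icon= line points at the intended file
grep '^Icon=' ~/.local/share/applications/beeper-desktop.desktop

Provide feedback on this episode.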
This show has been flagged as Clean by the host.

Hello, this is Archer72 for Hacker Public Radio. In this episode, it seems that AI is a trap. This over-arching generalization is my opinion and may not reflect the opinions of HPR.

The back story to this is that I was listening to the 26-hour Hacker Public Radio New Year's show, and the discussion came up in the Tech and Coffee Telegram channel. My resolution was to stop using ChatGPT as an AI chat bot, with the implication being to not use AI at all, but instead to use DuckDuckGo and Brave Search.

Probably less than a week or two later, I was trying to figure something out, and figured that I'd take the easy way and use Claude.ai, which is actually pretty good if you have short and concise questions. I've found that if you have a long, drawn-out question, it is better to do a Google or Duck search and document your results. I document in Vim, but you can use whatever works best. This way you can clearly show what works and doesn't work, and refer back to what you find later, instead of relying on an online service. And sometimes, depending on the AI bot you use, exporting is not very straightforward, with the exception of Duck.ai, which has a button for a quick share of a text file. Then you share it to yourself somewhere else, like in Proton Mail.

Well... over the past weekend, I was just making a quick upload button for my own server. The previous weekend, I got HTTPS working. This was just from following the guide in the Let's Encrypt documentation and the EFF Certbot instructions for Apache2 websites. At least that time, instead of using the AI bot, I just followed clear documentation. See, the thing about going right to the Debian Wiki or the Arch Wiki is that users and developers have already documented plenty. I figured out that part of the hacker method is not to take the 'easy' way, but to document what you are trying to learn.

So this past weekend, I was trying to learn something about that upload form, and I probably took longer going back and forth with the AI bot than if I had taken the time to search the documentation. And even if it did take longer with the documentation, I would have learned something else and created a Markdown document of my own.

There is a tool I use once in a while, which is part of the DuckDuckGo search, called Search Assist. This can be good, because I have a horrible memory. If there is something small that I can't remember how to do, I let Duck.ai take care of it. But recently, I have turned off the option where it sometimes shows Search Assist, so that it only appears on demand. That way I won't be tempted to go down a rabbit hole in order to find what I am looking for, and can instead base what I am looking for on standard tools.

So yes, AI is a trap, but it is also useful for certain things. If you are careful how you use it, it's not always a bad thing.

This has been Archer72 for Hacker Public Radio. Feel free to comment on this or any other show. Ken says it is the mana by which we pay our hosts. Also, feel free to record a response show to this or other shows. Provide feedback on this episode.
Kirill Mokevnin, co-founder of the programming school Hexlet and host of the podcast «Организованное программирование» ("Organized Programming"), joins Andrey Smirnov of Weekend Talk.

Conference avito.tech.conf | leads&managers – https://clc.to/p0dRAA
Andrey Smirnov's Telegram channel – https://t.me/itsmirnov

00:00 Opening
00:31 What might my audience know you for?
00:52 Ad break
02:01 Why didn't you leave the engineering mindset behind, even after becoming an entrepreneur?
13:33 What prompted the move from employment to your own business?
17:46 Why did Hexlet appear before the EdTech market existed, and why didn't it become a business right away?
29:27 Why was Hexlet.Club created, and what kind of community are you building?
42:09 How are AI, the EdTech crisis, and your personal brand affecting Hexlet now?
50:15 Why is Vim more than an editor?
55:47 Which programming languages matter for building an engineering mindset?
1:04:01 Who would you have become if there were no IT industry?
1:05:03 Why is it worth moving to Miami?
1:07:27 What is the biggest problem in IT today?

Links:
1) Kirill's Telegram channel – https://t.me/orgprog
2) The podcast «Организованное программирование» – https://youtube.com/@mokevnin
3) The Hexlet school website – https://hexlet.io
Access to safe drinking water is essential, and Australia's often harsh environment means that our drinking water supplies are especially precious. With differences in the availability and quality of drinking water across the country, how do we know if it's safe to drink? In this episode we get water experts to answer this question and more.
Why does Neeb Thoj (Neng Thao) want to record, collect, and preserve Hmong traditional music, Hmong musical instruments, and Hmong guns?
Today we have a bit of a fun show: we have the developer of a relatively new project called Lilly on the show, a Vim-like text editor that spawned out of the performance issues that come with heavy use of Vim plugins.

==========Support The Channel==========
► Patreon: https://www.patreon.com/brodierobertson
► Paypal: https://www.paypal.me/BrodieRobertsonVideo
► Amazon USA: https://amzn.to/3d5gykF
► Other Methods: https://cointr.ee/brodierobertson

==========Guest Links==========
Repo: https://github.com/tauraamui/lilly
Website: https://tauraamui.website/

==========Support The Show==========
► Patreon: https://www.patreon.com/brodierobertson
► Paypal: https://www.paypal.me/BrodieRobertsonVideo
► Amazon USA: https://amzn.to/3d5gykF
► Other Methods: https://cointr.ee/brodierobertson

=========Video Platforms==========
We spent the week learning keybindings, installing dependencies, and cramming for bonus points. Today, we score up and see how we did in the TUI Challenge.

Sponsored By:
Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
Unraid: A powerful, easy operating system for servers and storage. Maximize your hardware with unmatched flexibility.

Support LINUX Unplugged
Links:
Why are Australia's Indigenous people calling for the truth to be told about Indigenous history, and urging politicians to work earnestly to resolve the many problems they face, so that they do not lose out on what they are owed?
Our terminal apps are loaded, the goals are set, but we're already hitting a few snags. The TUI Challenge begins...

Sponsored By:
Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
Unraid: A powerful, easy operating system for servers and storage. Maximize your hardware with unmatched flexibility.

Support LINUX Unplugged
Links:
The crew descend upon the Iron Garden in a last-ditch effort to find what fate awaits Vim and Regina. As answers become questions and questions become paradoxes, the only thing left to ask is: are the NHPs feeling okay? Matcha exhausts a dialogue tree. Moxie gets digging. Roadkill shows the big iron on their hip. Silver dreams of cannibalism.

Thank you to our Patreon supporters: Austin Sietsema, Eleanor Lady, Daffodil, RiverSupportsArtists, Gnome, Creaux, James Knevitt, Rocky Loy, Tau, JC Darcy, Robert Ruthven, AlexCrow, Jordan Myers, Keenan Geyer, DragonGirlJosie, Shonn Briscoe, Andor, & Diana Plante!!

Bring Your Own Mech is a biweekly Lancer RPG actual play podcast of four Lancers thrown together by circumstance, destiny... and credits.

Featuring:
Reed (@ReedPlays) as the Game Master
Amelia (@amelia_g_music) as Matcha
Aki (@akinomii_art) as Moxie
Dusty (@Dustehill) as Roadkill
Aubrey (@MadQueenCosplay) as Silver

Find us on Bluesky @bringyourownmech.bsky.social, and remember: batteries are not included.

Lancer is created by Tom Parkinson Morgan (author of Kill Six Billion Demons) and Miguel Lopez of Massif Press. Bring Your Own Mech is not an official Lancer product; it is a third party work, and is not affiliated with Massif Press. Bring Your Own Mech is published via the Lancer Third Party License. Lancer is copyright Massif Press. Support the official release at https://massif-press.itch.io

Support us on Patreon! https://www.patreon.com/bringyourownmech
Get the official season 1 album, Bring Your Own Mixtape vol. 1! https://ownmech.bandcamp.com/album/bring-your-own-mixtape-vol-1
DRC CUSTOM OUTFITTERS download: https://ownmech.itch.io/drc-custom-outfitters-a-lancer-supplement
Pilot NET Discord Server: https://discord.gg/p3p8FUm9b4
In this episode of Gradient Dissent, Lukas Biewald talks with Martin Shkreli, the infamous "pharma bro" turned founder, about his path from hedge fund manager and pharma CEO to convicted felon and now software entrepreneur. Shkreli shares his side of the drug pricing controversy, reflects on his prison experience, and explains how he rebuilt his life and business after being "canceled."

They dive deep into AI and drug discovery, where Shkreli delivers a strong critique of mainstream approaches. He also talks about his latest venture in finance software, building Godel Terminal, "a Vim for traders", and why he thinks the AI hype cycle is just beginning. It's a wide-ranging and candid conversation with one of the most controversial figures in tech and biotech.

Follow Martin Shkreli on Twitter
Godel Terminal: https://godelterminal.com/
Follow Weights & Biases on Twitter
https://www.linkedin.com/company/wandb
Join the Weights & Biases Discord Server: https://discord.gg/CkZKRNnaf3
Arcadia June is coming, don't let it pass you by! We chat about photo libraries, Andrew is a bike guy again, and Martin shares a fun story about his son using the Mac! Ring the bell, a new One Prime Plus Dot Com member has entered the room! Stuff-ups and Shout-outs!

00:00:00 Imagine what Andrew said in those thirty seconds...
Hi, Somna. This is the continuation. You know, that story about Vim and Vindel that began as a romantic whim in Gamla Stan and then grew into a whole epic with fencing, chestnuts, chestnut kisses, and a baron who smells of wine punch. So this is part two. And yes, maybe I went a little... all in.

We follow Vim and Vindel as they fumble their way across class boundaries, through passionate glances and nighttime rescue missions. Bellman turns up, as a kind of spirit of romance, and poems fly like lovestruck butterflies. There is a sabre fight in the church, dripping blood, royal intervention, and maybe, just maybe, a happy ending.

I don't know. It got big. But sometimes you just have to follow the wave behind your eyes.

Sleep well!

More about Henrik, click here: https://linktr.ee/Henrikstahl
Listen without ads, get extra episodes, playlists and more at: https://somnamedhenrik.supercast.com/
Hosted on Acast. See acast.com/privacy for more information.
Why is a festival being held to mark 50 years of Hmong settlement in Australia, and what programs will be part of the festival taking place in Brisbane over Easter 2025? Pa Yaj, one of the many people who organised the festival, shares her thoughts on this.
In this episode of Gradient Dissent, host Lukas Biewald talks with Sualeh Asif, the CPO and co-founder of Cursor, one of the fastest-growing and most loved AI-powered coding platforms. Sualeh shares the story behind Cursor's creation, the technical and design decisions that set it apart, and how AI models are changing the way we build software. They dive deep into infrastructure challenges, the importance of speed and user experience, and how emerging trends in agents and reasoning models are reshaping the developer workflow.

Sualeh also discusses scaling AI inference to support hundreds of millions of requests per day, building trust through product quality, and his vision for how programming will evolve in the next few years.

⏳ Timestamps:
00:00 How Cursor got started and why it took off
04:50 Switching from Vim to VS Code and the rise of Copilot
08:10 Why Cursor won among competitors: product philosophy and execution
10:30 How user data and feedback loops drive Cursor's improvements
12:20 Iterating on AI agents: what made Cursor hold back and wait
13:30 Competitive coding background: advantage or challenge?
16:30 Making coding fun again: latency, flow, and model choices
19:10 Building Cursor's infrastructure: from GPUs to indexing billions of files
26:00 How Cursor prioritizes compute allocation for indexing
30:00 Running massive ML infrastructure: surprises and scaling lessons
34:50 Why Cursor chose DeepSeek models early
36:00 Where AI agents are heading next
40:07 Debugging and evaluating complex AI agents
42:00 How coding workflows will change over the next 2-3 years
46:20 Dream future projects: AI for reading codebases and papers
Hi, Somna. I was in Gamla Stan and fell in love. Not with anyone in particular, but with the very idea of romance. So I decided to make up an epic love story from 18th-century Stockholm. You know: horse droppings, pea soup, and wooden clogs on cobblestones. A city where the silence could still be heard.

This episode, Somna, is the beginning of the story of Vindel and Vim: he from Södermalm, she from the salons of Djurgården. It will be an episode in two parts, because I maybe went a little... into it. As both Vindel and Vim at once, in a rush of love made of quill pens and stolen glances. It is autumn in the story, and it is autumn in my heart.

And you know what? It doesn't matter if it turns out a little bad. There is room for being bad. Sometimes that is the most beautiful thing there is.

Sleep well!

More about Henrik, click here: https://linktr.ee/Henrikstahl
Become a member of Somna med Henrik PLUS here: https://plus.acast.com/s/somna-med-henrik.
Hosted on Acast. See acast.com/privacy for more information.
Why does it seem that Australia's politicians are not discussing any ideas for solving urgent problems in this campaign for the new federal election?
Varun Mohan is the co-founder and CEO of Windsurf (formerly Codeium), an AI-powered development environment (IDE) that has been used by over 1 million developers in just four months and has quickly emerged as a leader in transforming how developers build software. Prior to finding success with Windsurf, the company pivoted twice: first from GPU virtualization infrastructure to an IDE plugin, and then to their own standalone IDE.

In this conversation, you'll learn:
1. Why Windsurf walked away from a profitable GPU infrastructure business and bet the company on helping engineers code
2. The surprising UI discovery that tripled adoption rates overnight
3. The secret behind Windsurf's B2B enterprise plan, and why they invested early in an 80-person sales team despite conventional startup wisdom
4. How non-technical staff at Windsurf built their own custom tools instead of purchasing SaaS products, saving them over $500k in software costs
5. Why Varun believes 90% of code will be AI-generated, but engineering jobs will actually increase
6. How training on millions of incomplete code samples gives Windsurf an edge, and creates a moat long-term
7. Why agency is the most undervalued and important skill in the AI era

Brought to you by:
• Brex—The banking solution for startups
• Productboard—Make products that matter
• Coda—The all-in-one collaborative workspace

Where to find Varun Mohan:
• X: https://x.com/_mohansolo
• LinkedIn: https://www.linkedin.com/in/varunkmohan/

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Varun's background
(03:57) Building and scaling Windsurf
(12:58) Windsurf: The new purpose-built IDE to harness magic
(17:11) The future of engineering and AI
(21:30) Skills worth investing in
(23:07) Hiring philosophy and company culture
(35:22) Sales strategy and market position
(39:37) JetBrains vs.
VS Code: extensibility and enterprise adoption
(41:20) Live demo: building an Airbnb for dogs with Windsurf
(42:46) Tips for using Windsurf effectively
(46:38) AI's role in code modification and review
(48:56) Empowering non-developers to build custom software
(54:03) Training Windsurf
(01:00:43) Windsurf's unique team structure and product strategy
(01:06:40) The importance of continuous innovation
(01:08:57) Final thoughts and advice for aspiring developers

Referenced:
• Windsurf: https://windsurf.com/
• VS Code: https://code.visualstudio.com/
• JetBrains: https://www.jetbrains.com/
• Eclipse: https://eclipseide.org/
• Visual Studio: https://visualstudio.microsoft.com/
• Vim: https://www.vim.org/
• Emacs: https://www.gnu.org/software/emacs/
• Lessons from a two-time unicorn builder, 50-time startup advisor, and 20-time company board member | Uri Levine (co-founder of Waze): https://www.lennysnewsletter.com/p/lessons-from-uri-levine
• IntelliJ: https://www.jetbrains.com/idea/
• Julia: https://julialang.org/
• Parallel computing: https://en.wikipedia.org/wiki/Parallel_computing
• Douglas Chen on LinkedIn: https://www.linkedin.com/in/douglaspchen/
• Carlos Delatorre on LinkedIn: https://www.linkedin.com/in/cadelatorre/
• MongoDB: https://www.mongodb.com/
• Cursor: https://www.cursor.com/
• GitHub Copilot: https://github.com/features/copilot
• Llama: https://www.llama.com/
• Mistral: https://mistral.ai/
• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika
• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder & CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons
• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad
• React: https://react.dev/
• Sonnet: https://www.anthropic.com/claude/sonnet
• OpenAI: https://openai.com/
• FedRamp: https://www.fedramp.gov/
• Dario Amodei on LinkedIn: https://www.linkedin.com/in/dario-amodei-3934934/
• Amdahl's law: https://en.wikipedia.org/wiki/Amdahl%27s_law
• How to win in the AI era: Ship a feature every week, embrace technical debt, ruthlessly cut scope, and create magic your competitors can't copy | Gaurav Misra (CEO and co-founder of Captions): https://www.lennysnewsletter.com/p/how-to-win-in-the-ai-era-gaurav-misra

Recommended book:
• Fall in Love with the Problem, Not the Solution: A Handbook for Entrepreneurs: https://www.amazon.com/Fall-Love-Problem-Solution-Entrepreneurs/dp/1637741987

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
We've done hot takes episodes in the past but this is different, it's hot questions. Would we rather have bad managers who can code or good managers who can't? Too many comments or none? 80 columns or as long as you like? What editor do we use and why? Vim for Fun or PeerTube version...
The beach episode approaches! After jailbreaks and microscopic misadventures, the gang tries to wind down with some sun and sea. As Regina prepares to tell them her story, will our crew be able to find any lead on the Iron Garden? And more importantly, will they survive Vim's pranks before then? Matcha digs her own grave. Moxie enters all-range mode. Roadkill goes thrifting. Silver makes some introductions.

Thank you to our Patreon supporters: Austin Sietsema, Eleanor Lady, Daffodil, RiverSupportsArtists, Gnome, Creaux, James Knevitt, Rocky Loy, Tau, JC Darcy, Robert Ruthven, AlexCrow, Jordan Myers, Keenan Geyer, DragonGirlJosie, Shonn Briscoe, Andor, & Diana Plante!!

Bring Your Own Mech is a biweekly Lancer RPG actual play podcast of four Lancers thrown together by circumstance, destiny... and credits.

Featuring:
Reed (@ReedPlays) as the Game Master
Amelia (@amelia_g_music) as Matcha
Aki (@akinomii_art) as Moxie
Dusty (@Dustehill) as Roadkill
Aubrey (@MadQueenCosplay) as Scarlett

Find us on Bluesky @bringyourownmech.bsky.social, and remember: batteries are not included.

Lancer is created by Tom Parkinson Morgan (author of Kill Six Billion Demons) and Miguel Lopez of Massif Press. Bring Your Own Mech is not an official Lancer product; it is a third party work, and is not affiliated with Massif Press. Bring Your Own Mech is published via the Lancer Third Party License. Lancer is copyright Massif Press. Support the official release at https://massif-press.itch.io

Support us on Patreon! https://www.patreon.com/bringyourownmech
Get the official season 1 album, Bring Your Own Mixtape vol. 1! https://ownmech.bandcamp.com/album/bring-your-own-mixtape-vol-1
DRC CUSTOM OUTFITTERS download: https://ownmech.itch.io/drc-custom-outfitters-a-lancer-supplement
Pilot NET Discord Server: https://discord.gg/p3p8FUm9b4
Well, friends of Pelada na Net, here we are for real with episode 722! Today we have Príncipe Vidane, Show do Vitinho, and Maidana learning new acronyms from Dudu. In this episode we discuss the wild argument that Dudu's defense added to his lawsuit against Leila Pereira, redefining VTNC as "Vim trabalhar no Cruzeiro" ("I came to work at Cruzeiro"); we talk about the scandal around Bruno Henrique, who was indicted by the Federal Police for involvement in a betting scheme; we break down the end of the Champions League quarterfinals, in which Arsenal dispatched Real Madrid and advanced to the semis alongside PSG, Barcelona, and Inter Milan; and much more! And don't forget to use the hashtags: #BRITAJR

Visit our site! https://peladananet.com.br
Follow our Bluesky! @peladananet.com.br
Follow our Twitter! @PeladaNET
Follow our Instagram! @PeladaNaNet
Join our TELEGRAM group! https://t.me/padegostosodemais
Buy our products at the Podcast Store! We have t-shirts, mugs, magnets, posters, buttons, and ecobags available!

Hosts:
Fernando Maidana – Twitter / Instagram / Bluesky
Victor "Show do Vitinho" Raphael – Twitter / Instagram / Bluesky
Vitor "Príncipe Vidane" Faglioni Rossi – Twitter / Instagram / Bluesky

Side projects:
Jovem Nerd
Mau Acompanhado – on Jovem Nerd
Mau Acompanhado feed on Spotify
Dentro da Minha Cabeça
Reinaldo Jaqueline
Versão Brasihueira channel on YouTube
Pauta Livre News
Victinho's channel on YouTube
Rede Chorume
Fábrica de Filmes
Legião dos Heróis
Noites com Maidana

Also listen to:
Frango Fino
Papo Delas
Radiofobia
The Dark One – Podtrash
Vortex – with Kat Barcelos

Contribute to the Peladinha:
Apoia.se
Patreon
Or through our Pix key: podcast@peladananet.com.br

Contributors for March 2025!
Our thanks go out to these dear supporters for their affection, dedication, and investment: Adriana Cristina Alves Pinto Gioielli | Adriano Nazário | André Vinícius De Carvalho Costa | Fellipe Miranda | Fernando Costa Campos | Gabriel Machado De Freitas | Guilherme Rezende Soria | Gustavo Henrique Rossini | Heverton Coneglian De Freitas | Higor Nunes Resende | Higor Pêgas Rosa De Faria | Igor Leite Da Silva | Igor Zacarias Dos Santos | Ítalo Leandro Freire De Albuquerque | João Paulo Lobo Marins | Joao Pedro Barros Barbosa | Leonardo Delefrate | Luis Henrique Santos | Luiz Guilherme Borges Silva | Messias Feitosa Santana | Pedro Marcelo Rocha Gomes | Rafael Brandão Brasil | Rena Marcon | Renato Grigoli Pereira | Thais Cristine Cavalcanti | Vanessa Fontana | Welton Sousa Gouveia | André Stábile | Arthur Takeshi Gonçalves Murakawa | Brayan Ksenhuck | Bruno Burkart | Caio Mandolesi | Concílio Silva | Daniel Lucas Martins Lacerda | Davi Andrade | Fabio Simoes | Fabio Simoes | Filipi Froufe | Flavio Barbosa | George Alfradique | Gustavo Marques Leite | Heitor Dias | Igor Trusz | Jhonathan Romão | João Gabriel Paduan Tristante | Josué Solano De Barros | Leonardo Lachi Manetti | Listen2urs2 (Listen Tchu Iór Rârrtchi)) | Lucas Freitas | Luis Alberto De Seixas Buttes | Matheus De Sales Freitas | Pedro Lauria | Rafael Gomes Da Silva | Robson De Sousa | Rodrigo Pimentel | Tiago Vital Urgal | Tio Patux | Vander Carlos Ribeiro Vilanova | Vinicius Renan Lauermann Moreira | Vinicius Verissimo Lopes | Thiago Lins | Hassan Jorge | Diego Santos | Felipe Avelar | Leonardo Motta | Felipe Pastor | Bruno Franzini | David Gilvan | Luiz Strina | Adryel Romeiro | Aline Aparecida Matias | Antonino Firmino Da Silva Neto | Antonio Augusto Mendes Rodrigues | Bruno Kellton | Bruno Marques Monteiro | Carlos Eduardo Ardigo | Daniel Pandeló Corrêa | Elisnei Menezes De Oliveira | Evilasio Costa Junior | Felipe Brasil | Felipe Duarte | Fernando Bilhiere | Fernando De Araujo Brandão Filho | Gabriel Frizzo | Gabriel Lecomte | Gabriel Lopes Dos Santos | Gian Luca Barbosa Mainini | Jailson Gomes | João Pedro Machareth | Jose Wellington De Moura Melo | Leandro Jose De Souza | Leonardo Giehl | Luan Germano | Luca Vianna | Marcelo São Martinho Cabral | Marco Antônio Maassen Da Silva | Marianna Feitosa | Matheus Andion De Souza Vitorino | Matheus Bezerra Lucas Bittencourt | Maxwell Dos Santos Nelle | Pedro Bonifácio | Pedro Henrique Tonetto Lopes | Pollyana Bruno | Rafael Manenti | Rafael Matis | Rainer Almeida | Raphael Piccoli | Raphael Pini Bubinick | Rodrigo Oliveira Porto | Stéfano Bellote | Thiago Nogueira Marcal | Thomas Rodrigues | Tiago Weiss | Vinicius Athanasopoulos | Vinícius Lima Silva | Vinícius Ramalho | Vitor Carnelosso Varella | Vitor Motta Vigerelli | Wendel Ferreira Santiago | Wladimir Araújo Neto | Marco Antônio Rodrigues Júnior (Markão) | Leonardo Pimentel | Bruno Macedo | Aquila Barros Nogueira | Danilo Da Silva Pereira | Henrique Zani | Pedro Henrique De Paula Lemos | Victor Rodrigues | Daniel Moreira | Lucas Penetra | Lucas, O Fofo | Albert José | Raphael De Souza | Thiago Goncales | Daniel Ferreira De Lima Vilha | Felipe Artemio | Joseane Freitas Santos | Tatiane Oliveira Ferreira | Bruno Vieira Silva | Itallo Rossi Lucas | Isabelle Zacara

Thank you for believing in us! Comment!
Send your letter by email to podcast@peladananet.com.br and comment on the Instagram post with this episode's cover art!

See omnystudio.com/listener for privacy information.
Why were daughters traded for rice cakes? Nchaiv Khwb Xiong shares his thoughts on why daughters were traded for rice cakes during the time the Hmong migrated from China down to Southeast Asia.
We are happy to announce that there will be a dedicated MCP track at the 2025 AI Engineer World's Fair, taking place Jun 3rd to 5th in San Francisco, where the MCP core team and major contributors and builders will be meeting. Join us and apply to speak or sponsor!

When we first wrote Why MCP Won, we had no idea how quickly it was about to win. In the past 4 weeks, OpenAI and now Google have announced MCP support, effectively confirming our prediction that MCP was the presumptive winner of the agent standard wars. MCP has now overtaken OpenAPI, the incumbent option and most direct alternative, in GitHub stars (3 months ahead of the conservative trendline).

We have explored the state of MCP at AIE (now the first ever >100k views workshop). And since then, we've added a 7th reason why MCP won: this team acts very quickly on feedback, with the 2025-03-26 spec update adding support for stateless/resumable/streamable HTTP transports, and comprehensive authorization capabilities based on OAuth 2.1. This bodes very well for the future of the community and project.

For protocol and history nerds, we also asked David and Justin to tell the origin story of MCP, which we leave to the reader to enjoy (you can also skim the transcripts, or the changelogs of a certain favored IDE). It's incredible the impact that individual engineers solving their own problems can have on an entire industry.

Full video episode
Like and subscribe on YouTube!

Show Links
* David
* Justin
* MCP
* Why MCP Won

Timestamps
* 00:00 Introduction and Guest Welcome
* 00:37 What is MCP?
* 02:00 The Origin Story of MCP
* 05:18 Development Challenges and Solutions
* 08:06 Technical Details and Inspirations
* 29:45 MCP vs Open API
* 32:48 Building MCP Servers
* 40:39 Exploring Model Independence in LLMs
* 41:36 Building Richer Systems with MCP
* 43:13 Understanding Agents in MCP
* 45:45 Nesting and Tool Confusion in MCP
* 49:11 Client Control and Tool Invocation
* 52:08 Authorization and Trust in MCP Servers
* 01:01:34 Future Roadmap and Stateless Servers
* 01:10:07 Open Source Governance and Community Involvement
* 01:18:12 Wishlist and Closing Remarks

Transcript

Alessio [00:00:02]: Hey, everyone. Welcome back to Latent Space. This is Alessio, partner and CTO at Decibel, and I'm joined by my co-host Swyx, founder of Small AI.

swyx [00:00:10]: Hey, morning. And today we have a remote recording, I guess, with David and Justin from Anthropic over in London. Welcome. You guys have created a storm of hype because of MCP, and I'm really glad to have you on. Thanks for making the time. What is MCP? Let's start with a crisp definition from the horse's mouth, and then we'll go into the origin story. But let's start off right off the bat: what is MCP?

Justin/David [00:00:43]: Yeah, sure. So Model Context Protocol, or MCP for short, is basically something we've designed to help AI applications extend themselves or integrate with an ecosystem of plugins. The terminology is a bit different; we use this client-server terminology, and we can talk about why that is and where that came from. But at the end of the day, it really is that: extending and enhancing the functionality of an AI application.

swyx [00:01:05]: David, would you add anything?

Justin/David [00:01:07]: Yeah, I think that's actually a good description. I think there are a lot of different ways people are trying to explain it, but at the core, I think what Justin said, extending AI applications, is really what this is about.
And I think the interesting bit here that I want to highlight is that it's AI applications, and not the models themselves, that this is focused on. That's a common misconception that we can talk about a bit later. But yeah. Another version that we've used and gotten to like is that MCP is kind of like the USB-C port of AI applications, in that it's meant to be this universal connector to a whole ecosystem of things.

swyx [00:01:44]: Yeah. Specifically, an interesting feature is, like you said, the client and server. And it's a sort of two-way, right? Like in the same way that a USB-C is two-way, which could be super interesting. Yeah, let's go into a little bit of the origin story. There's many people who've tried to make standards. There's many people who've tried to build open source. Also, my sense is that Anthropic is going hard after developers in a way that other labs are not. And so I'm also curious if there was any external influence, or was it just you two guys in a room somewhere riffing?

Justin/David [00:02:18]: It is actually mostly us two guys in a room riffing. So this is not part of a big strategy. You know, if you roll back time a little bit and go to July 2024, I had started at Anthropic like three months earlier, or two months earlier, and I was mostly working on internal developer tooling, which is what I'd been doing for years and years before. And as part of that, I think there was an effort of, how do I empower more employees at Anthropic to integrate really deeply with the models we have? Because we've seen how good it is, and how amazing it will become in the future. And of course, you dogfood your own model as much as you can. And as part of that, coming from my developer tooling background, I quickly got frustrated by the fact that, on the one hand, I have Claude Desktop, which is this amazing tool with artifacts, which I really enjoyed, but it was very limited to exactly that feature set, and there was no way to extend it. And on the other hand, I work in IDEs, which can act directly on the file system and a bunch of other things, but they don't have artifacts or anything like that. And so what I constantly did was just copy things back and forth between Claude Desktop and the IDE, and that, honestly, quickly got me very frustrated. And part of that frustration was, how do I go and fix this? What do we need? And back to this developer focus that I have: I really thought, well, I know how to build all these integrations, but what do these applications need to let me do this? And very quickly you see that this is clearly an M times N problem: you have multiple applications, and multiple integrations you want to build, and what better way to fix this than a protocol? At the same time, I was actually working on an LSP-related thing internally that didn't go anywhere. But you put these things together in someone's brain and let them sit for a few weeks, and out of that comes the idea: let's build a protocol. And so, back to this little room: it was literally just me going into a room with Justin and saying, I think we should build something like this, this is a good idea. And Justin,
lucky for me, really took an interest in the idea and took it from there to build something together with me. That's really the inception story: the two of us, from then on, just going and building it over the course of a month and a half. Building the protocol, building the first integrations. Justin did a lot of the heavy lifting on the first integrations in Claude Desktop; I did a lot of the first proof of concept of how this could look in an IDE. And we could talk about some of the tidbits you could find way before the official release, if you were looking at the right repositories at the right time. But there you go, that's the rough story.

Alessio [00:05:12]: What was the timeline? I know November 25th was the official announcement date. When did you guys start working on it?

Justin/David [00:05:19]: Justin, when did we start working on that? I think it was around July. Yeah, as soon as David pitched the initial idea, I got excited pretty quickly, and we started working on it almost immediately after that conversation. And then it was a couple, maybe a few months of building the really unrewarding bits, if we're being honest, because establishing something like this, a communication protocol with clients and servers and SDKs everywhere, means there's just a lot of laying the groundwork that you have to do. So that was a pretty slow couple of months. But then afterward, once you get some things talking over that wire, it really starts to get exciting and you can start building all sorts of crazy things. And I think this really came to a head, and I don't remember exactly when it was, maybe approximately a month before release, at an internal hackathon where some folks really got excited about MCP and started building all sorts of crazy applications. I think the coolest one was an MCP server that could control a 3D printer or something. And so suddenly people were feeling this power of Claude connecting to the outside world in a really tangible way, and that really added some juice to us and to the release.

Alessio [00:06:32]: Yeah. And we'll go into the technical details, but I just want to wrap up here. You mentioned you could have seen some things coming if you were looking in the right places. We always want to know: what are the places to get alpha? How could someone have found MCP early?

Justin/David [00:06:44]: I'm a big Zed user. I like the Zed editor. The first MCP implementation in an IDE was in Zed. It was written by me, and it was there a month and a half before the official release, just because we needed to do it in the open, since it's an open source project. It was named slightly differently, because we were not set on the name yet, but it was there.

swyx [00:07:05]: I'm happy to go a little bit. Anthropic also had some preview of a model with Zed, right? Some kind of fast-editing model. I confess, you know, I'm a Cursor/Windsurf user. Haven't tried Zed. What's your, you know, unsolicited two-second pitch for Zed?

Justin/David [00:07:28]: That's a good question. It really depends what you value in editors. For me, I wouldn't even say I love Zed more than the others.
I like them all as complementary, in one way or another. I do use Windsurf, I do use Zed. But I think my main pitch for Zed is a low-latency, super smooth editing experience with a decent enough AI integration.

swyx [00:07:51]: I mean, maybe, you know, I think that's all it is for a lot of people. I think a lot of people are obviously very tied to the VS Code paradigm and the extensions that come along with it. Okay. So I wanted to go back a little bit to some of the things that you mentioned, Justin, which was building MCP on paper. Obviously we only see the end result. It just seems inspired by LSP, and I think both of you have acknowledged that. So how much is there to build? And when you say build, is it a lot of code or a lot of design? Because I felt like it's a lot of design, right? Like you're picking JSON-RPC... like how much did you base off of LSP, and what were the hard parts?

Justin/David [00:08:29]: Yeah, absolutely. I mean, we definitely did take heavy inspiration from LSP. David had much more prior experience with it than I did, from working on developer tools. You know, I've mostly worked on products or sort of infrastructural things; LSP was new to me. But in terms of design principles, it really makes a ton of sense, because it does solve this M times N problem that David referred to, where in the world before LSP, you had all these different IDEs and editors, and then all these different languages that each one wants to support, or that their users want them to support, and everyone's just building one-off integrations. And so you use Vim and you might have really great support for, honestly, I don't know, C or something, and then you switch over to JetBrains and you have the Java support, but then you don't get to use the great JetBrains Java support in Vim, and you don't get to use the great C support in JetBrains, or something like that. So LSP largely solved this problem by creating this common language that they could all speak, so that some people can focus on really robust language server implementations, and the IDE developers can really focus on their side, and they both benefit. So that was our key takeaway for MCP: the same principle and the same problem, in the space of AI applications and extensions to AI applications. But in terms of concrete particulars, I mean, we did take JSON-RPC, and we took this idea of bidirectionality, but I think we quickly took it down a different route after that. I guess there is one other principle from LSP that we try to stick to today, which is this focus on how features manifest, more than the semantics of things, if that makes sense. David refers to it as being presentation-focused: basically thinking about and offering different primitives, not because the semantics of them are necessarily very different, but because you want them to show up in the application differently. That was a key insight in how LSP was developed, and it's also something we try to apply to MCP. But like I said, from there, yeah, we spent a lot of time, really a lot of time, and we could go into this more separately, thinking about each of the primitives that we want to offer in MCP, and why they should be different, like why we want to have all these different concepts.
That was a significant amount of work; that was the design work, as you allude to. But then, already out of the gate, we had three different languages we wanted to support to at least some degree: TypeScript, Python, and, for the Zed integration, Rust. So there was SDK-building work in those languages, plus a mixture of clients and servers to build out, to create an internal ecosystem we could start playing with. And then just making everything robust, like this whole concept we have for local MCP, where you launch subprocesses; making that robust took some time as well.

Maybe adding to that: the LSP influence goes even a little further. We took quite a close look at criticisms of LSP, things LSP didn't do right, things people would have loved to be different, and really took that to heart, to see what we should do better. We took a lengthy look at their very unique approach to JSON-RPC, I may say, and decided that this is not what we would do. So there are these differences, but it's clearly very, very inspired. Because when you're trying to build something like MCP, you want to pick the areas you innovate in, and you want to be boring about the other parts. Pattern-matching LSP lets you be boring in a lot of the core pieces where you want to be boring. The choice of JSON-RPC is very non-controversial to us, because the actual bytes on the wire make no difference at all. The innovation is in the primitives you choose and those types of things, and that's where we wanted to focus. So having some prior art there is good, basically.

swyx [00:12:26]: It does. I wanted to double-click; there are so many things we could go into, and obviously I'm passionate about protocol design. I wanted to show you this. You already referred to the M-times-N problem, and I can share my screen: anyone working in developer tools has faced this exact issue, where you see the God box. The fundamental problem and solution of all infrastructure engineering is that you have M things going to N things, and then you put the God box in the middle and they'll all be better, right? So here is one diagram from Uber, one from GraphQL, one from Temporal, where I used to work, and this one is from React. And I was curious: did you solve M-times-N problems at Facebook? It sounds like, David, you did that for a living, right?

Justin/David [00:13:16]: Yeah, to some degree, for sure. I'm trying to think of a good example, but I did a bunch of this kind of work on source control systems and those types of things. There were a bunch of these problems, and you shove them into something that everyone can read from and everyone can write to, you build your God box somewhere, and it works. But you're absolutely right: in developer tooling, this is everywhere.
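To make the "boring" wire-format choice concrete: MCP speaks JSON-RPC 2.0, and over the stdio transport each message is one line of JSON between client and server. A rough sketch of a client's opening exchange follows; the method names and the protocolVersion string come from the published spec, while the client name and tool arguments are made up for illustration.

import json

# Roughly what an MCP session looks like on the wire (JSON-RPC 2.0).
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # spec revision string
        "capabilities": {},               # what this client supports
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

call_tool = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

for msg in (initialize, list_tools, call_tool):
    print(json.dumps(msg))  # over stdio, each message is newline-delimited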
swyx [00:13:47]: And it shows up everywhere. What's interesting is that everyone who makes the God box then has the same set of problems: composability, and remote versus local. There's a very common set of problems, so I kind of want to take a meta-lesson on how to do the God box, but we can talk about the development stuff later. I wanted to double-click on the presentation focus Justin mentioned, how features manifest, and how you said some things are the same but you want to reify some concepts so they show up differently. I had that sense when I was looking at the MCP docs: why do these two things need to be different from each other? I think a lot of people treat tool calling as the solution to everything, right? And sometimes you can view different kinds of tool calls as different things: sometimes they're resources, sometimes they're actually taking actions, sometimes they're something else I don't really know yet. So what are some things you mentally group as adjacent concepts, and why was it important to you to distinguish them?

Justin/David [00:14:58]: Yeah, I can chat about this a bit. Fundamentally, every primitive we thought through, we thought about from the perspective of the application developer first: if I'm building an application, whether it's an IDE or Claude Desktop or some agent interface, whatever the case may be, what are the different things I would want to receive from an integration? Once you take that lens, it becomes quite clear that tool calling is necessary but very insufficient. There are many other things you would want to do besides just getting tools and plugging them into the model, and you want some way of differentiating what those things are. So the core primitives we started MCP with (we've since added a couple more) are these. First, tools, which we've already talked about: adding tools directly to the model, sometimes called function calling. Second, resources, which are basically bits of data or context that you might want to add to the model context. Resources are the first primitive where we decided this could be application-controlled: maybe you want a model to automatically search through and find relevant resources and bring them into context, but maybe you also want an explicit UI affordance in the application, where the user can pick through a dropdown or a paperclip menu, find specific things, and tag them in, so that becomes part of their message to the LLM. Both of those are use cases for resources. And the third is prompts, which are deliberately meant to be user-initiated or user-substituted text or messages. The analogy in an editor would be a slash command, or an @-style autocompletion: a kind of macro, effectively, that I want to drop in and use.
We have expressed opinions through MCP about the different ways these things could manifest, but ultimately it's for application developers to decide; you get these different concepts expressed differently. And that's very useful as an application developer, because you can decide the appropriate experience for each, and it can actually be a point of differentiation. We were also thinking from the application developer's perspective that application developers don't want to be commoditized; they don't want their application to end up the same as every other AI application. So what are the unique things they could do to create the best user experience, even while connecting to this big open ecosystem of integrations?

And to add to that, there are two aspects I want to mention. The first is that, interestingly enough, while tool calling is nowadays probably 95-plus percent of the integrations (and I wish more clients would do resources and prompts), the very first implementation, the one in Zed, is actually a prompt implementation; it doesn't deal with tools at all. We found this quite useful, because it lets you, for example, build an MCP server that takes a backtrace (not as a tool, just something that pulls the raw crash data from Sentry or any other platform that tracks your crashes) and lets you pull it into the context window beforehand. It's a user-driven interaction: the user decides when to pull this in and doesn't have to wait for the model to do it. It's a great way to craft the prompt. Similarly, I wish more MCP servers today would ship prompts as examples of how to even use their tools. The resources bits are quite interesting as well, and I wish we would see more usage there, because it's very easy to envision, yet nobody has really implemented, a system where an MCP server exposes a set of documents, your database, whatever you might want, as resources, and a client application builds a full RAG index around them. That's definitely an application use case we had in mind, and it's why resources are exposed in a way that isn't model-driven: you might want way more resource content than is realistically usable in a context window. So I wish, and hope, that applications will use these primitives much better in the next few months, because there are way richer experiences to be created that way. Yeah, completely agree with that.
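To ground the three primitives, here is a minimal sketch of a server exposing one of each, written against the MCP Python SDK's FastMCP interface. The server name and function bodies are made up for illustration; the decorator API follows the SDK's published examples.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")  # hypothetical server name

# Tool: model-controlled; the LLM decides when to invoke it.
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

# Resource: application/user-controlled context, addressed by URI.
@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """A bit of context the client can attach to the conversation."""
    return f"Hello, {name}!"

# Prompt: user-initiated template, e.g. surfaced as a slash command.
@mcp.prompt()
def review(code: str) -> str:
    """A reusable prompt the user can drop into the chat."""
    return f"Please review this code:\n\n{code}"

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport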
Alessio [00:19:30]: I think that's a great point, and everybody just has a hammer and wants to do tool calling for everything. A lot of people do a tool call for a database query rather than using resources. What are the pros and cons, or when should people use a tool versus a resource, especially for things that do have an API interface? For a database, you can make a tool that runs a SQL query; when should you do that, versus exposing the data as a resource?

Justin/David [00:20:00]: Yeah. The way we separate these is that tools are always meant to be initiated by the model; it's at the model's discretion to find the right tool and apply it. So if that's the interaction you want as a server developer, where suddenly I've given the LLM the ability to run SQL queries, for example, that makes sense as a tool. But resources are more flexible, basically. To be completely honest, the story here is practically a bit complicated today, because many clients don't support resources yet. But in an ideal world, where all these concepts are fully realized and there's full ecosystem support, you would use resources for things like the schemas of your database tables, as a way to let the user say, "Okay, Claude, I want to talk to you about this database table. Here it is. Let's have this conversation." Or maybe the particular AI application you're using, something agentic like Claude Code, is able to agentically look up resources and find the right schema for the database table you're talking about. Both of those interactions are possible. Any time you want to list a bunch of entities and then read any of them, that makes sense to model as resources. Resources are also always uniquely identified by a URI, so you can think of MCP servers as general-purpose transformers of URIs: if you want an interaction where a user just drops a URI in and you automatically figure out how to interpret it, you could use MCP servers to do that interpretation. One interesting side note, back to the Zed example of resources: Zed has a prompt library that people can interact with, and we exposed a set of default prompts that we wanted everyone to have as resources, so that you boot up Zed and it populates its prompt library from an MCP server. That was quite a cool interaction. It was very specific, in that both sides needed to agree on the URI format and the underlying data format, but it was a neat little application of resources. And going back to that perspective of what I would want as an application developer, we also asked what existing features of applications could conceivably be factored out into MCP servers if you took that approach today. Basically any IDE attachment menu naturally models as resources; it's just that those implementations already existed.

swyx [00:22:49]: Yeah. When you introduced it for Claude Desktop and I saw the @ sign there, I thought: that's what Cursor has, but this is for everyone else. I think that's a really good design target, because it's something that already exists and people can map onto it pretty neatly. I was actually featuring this chart from Mahesh's workshop, which presumably you agreed on. I think it's so useful that it should be on the front page of the docs. It probably should be.
I think that's a good suggestion.

Justin/David [00:23:19]: Do you want to do a PR for this? I'd love it.

swyx [00:23:21]: Yeah, I'll do a PR. I've done a PR for Mahesh's workshop in general, just because, you know...

SPEAKER_03 [00:23:28]: I approve. Yeah.

swyx [00:23:30]: Thank you. As a developer relations person, I always insist on having a map for people: here are all the main things you have to understand, and we'll spend the next two hours going through them. So one image that covers all of this is pretty helpful. And I like your emphasis on prompts. It's interesting: in the early days of ChatGPT and Claude, a lot of people started "GitHub for prompts" ideas, prompt manager libraries, and those never really took off. I think something like this is helpful and important. I've also seen the .prompt file from Humanloop, I think, as another way to standardize how people share prompts. But I agree there should be more innovation here, and I think people want some dynamism, which you allow for. And I like that you have multi-step prompts; that was the main thing that made me think these guys really get it. You've published research showing that sometimes, to get the model behaving the way you want, you have to do multi-step prompting. So prompts are not just single conversations; they're sometimes chains of conversations.

Alessio [00:25:05]: Another question I had when looking at some server implementations: the server builders decide what data eventually gets returned, especially for tool calls. Take the Google Maps one. If you look through it, they decide which attributes get returned, and the user can't override that if one is missing. That has always been my gripe with SDKs in general: people build API-wrapper SDKs, they miss one parameter that's maybe new, and then I can't use it. How do you think about that? How much should the user be able to intervene, versus just letting the server designer do all the work?

Justin/David [00:25:41]: I think we bear responsibility for the Google Maps one, because it's one of the reference servers we released. In general, for tool results in particular, we made the deliberate decision, at least so far, that tool results are not structured JSON data matching a schema; they're text, or images, basically messages you would pass into the LLM directly. The corollary is that you really should just return the whole jumble of data and trust the LLM to sort through it and extract the information it cares about, because that's exactly what LLMs excel at. We really try to think about how to use LLMs to their full potential, and not over-specify and end up with something that doesn't scale as LLMs themselves get better and better. So what should probably happen in this example server (and again, pull requests welcome) is that all the result types are literally just passed through from the API being called; then new API fields would flow through automatically.
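As a sketch of that "pass it through and let the model sort it out" approach: a tool can simply return the upstream response as text instead of curating fields. The endpoint, server name, and response shape here are hypothetical.

import json
import urllib.parse
import urllib.request

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("places")  # hypothetical server name

@mcp.tool()
def lookup_place(query: str) -> str:
    """Look up a place and return the raw upstream response for the model to read."""
    url = "https://api.example.com/places?q=" + urllib.parse.quote(query)  # hypothetical endpoint
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # No field curation: the model sees everything the API returned.
    return json.dumps(data, indent=2)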
Alessio [00:27:19]: These are hard design decisions, knowing where to draw the line.

Justin/David [00:27:22]: I'll throw AI under the bus a little here and say that Claude wrote a lot of these example servers. No surprise at all. But I do think there's an interesting point in this: at the moment, people mostly still apply their normal software-engineering API approaches, and we need a bit of relearning on how to build things for LLMs and trust them, particularly as they get significantly better year over year. Two years ago, maybe that approach was very valid; nowadays, just throwing data at the thing that is really good at dealing with data is a good approach to this problem. It's unlearning twenty, thirty, forty years of software engineering practices, to some degree. If I can add to that real quickly, one framing for MCP is to think about how crazily fast AI is advancing. It's exciting; it's also scary. We think the biggest bottleneck to the next wave of capabilities for models might actually be their ability to interact with the outside world: to read data from outside data sources, or to take stateful actions. Working at Anthropic, we absolutely care about doing that safely, with the right control and alignment measures in place. But as AI gets better, people will want that; being able to connect models up to all those things will be key to becoming productive with AI. So MCP is also a bet on the future, on where this is all going and how important that will be.

Alessio [00:29:05]: Yeah. I would say any API attribute named formatted-underscore-something should go away, and we should just get the raw data, because why are you formatting? The model is definitely smart enough to format an address; that should go to the end user.

swyx [00:29:23]: I think Alessio is about to move on to server implementation, but we're still talking about MCP design, goals, and intentions, and we've indirectly identified some problems MCP is really trying to address. I wanted to give you the spot to take on MCP versus OpenAPI directly, because this is a top question; give people a nice little segment that's a definitive answer on MCP versus OpenAPI.

Justin/David [00:29:56]: Yeah. Fundamentally, OpenAPI specifications are a great tool. I've used them a lot in developing APIs and consumers of APIs.
But we think they're just too granular for what you want to do with LLMs. They don't express higher-level, AI-specific concepts: this whole mental model we've talked about, with the primitives of MCP and thinking from the perspective of the application developer. You don't get any of that when you encode the information into an OpenAPI specification. So we believe models will benefit more from purpose-designed tools, resources, prompts, and the other primitives than from "here's our REST API, go wild."

There's another aspect. I'm not an OpenAPI expert, so not everything I say may be perfectly accurate. But, and we can talk about this more later, there's a deliberate design decision to make the protocol somewhat stateful, because we really believe AI applications and AI interactions will become inherently more stateful. The current need for statelessness is more a temporary point in time; it will always exist to some degree, but statefulness will become increasingly popular, particularly when you think about modalities beyond pure text-based interactions with models: video, audio, whatever other modalities are already out there. So having something a bit more stateful is just inherently useful in this interaction pattern. And I do think OpenAPI and MCP are more complementary than people want to make them out to be. People look for these A-versus-B fights, as if all the developers of these things should go into a room and fist-fight it out, but that's rarely what's going on. They're very complementary, and each has its space where it's very strong. Just use the best tool for the job: if you want a rich interaction between an AI application and an integration, MCP is probably the right choice; if you want an API spec somewhere that a model can easily read and interpret, and that works for you, then OpenAPI is the way to go. One more thing to add: people in the community built bridges between the two very early. If what you have is an OpenAPI specification and no one is building a custom MCP server for it, there are already translators that will take it and re-expose it as MCP, and you can do the other direction too.
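In the spirit of those community bridges, here is a hedged sketch of the idea (not any particular project): generate one MCP tool per OpenAPI operation. The spec parsing is deliberately naive, the URLs are hypothetical, and it assumes the Python SDK's add_tool registration helper.

import json
import urllib.request

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("openapi-bridge")  # hypothetical bridge server

def register_openapi(spec: dict, base_url: str) -> None:
    """Naively expose each GET operation in an OpenAPI spec as an MCP tool."""
    for path, ops in spec.get("paths", {}).items():
        op = ops.get("get")
        if not op:
            continue

        def call(path: str = path) -> str:  # bind the loop variable
            with urllib.request.urlopen(base_url + path) as resp:
                return resp.read().decode()

        mcp.add_tool(
            call,
            name=op.get("operationId", path.strip("/").replace("/", "_")),
            description=op.get("summary", f"GET {path}"),
        )

# Hypothetical usage:
# with open("openapi.json") as f:
#     register_openapi(json.load(f), "https://api.example.com")
# mcp.run()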
Alessio [00:32:43]: Awesome. Yeah. I think there's another side of MCP that people don't talk about as much, because it doesn't go viral, which is building the servers. Everybody tweets "connect Claude Desktop to X MCP server, it's amazing." How would you suggest people start building servers? There are so many things you can do with the spec. How do you draw the line between being very descriptive as a server developer versus, going back to our discussion before, just handing over the data and letting the model manipulate it later? Do you have any suggestions for people?

Justin/David [00:33:16]: I have a few suggestions. One of the best things about MCP, and something we got right very early, is that it's very, very easy to build something simple. It might not be amazing, but it's good enough, because models are very good, and you can get it going within half an hour. So: pick the language you love most, pick the SDK for it if there is one, and go build a tool for the thing that matters to you personally, the thing you want to see the model interact with. Build the server, throw the tool in, don't worry too much about the description just yet; write a little description as you think about it, then give the server to an application you like over the stdio transport and watch the model do things. That's part of the magic, the empowerment for developers: getting so quickly to the model doing something you care about. That gets you going and into the flow: okay, this thing can do cool things; now I can expand on it, now I can really think about which tools I want, which resources and prompts I want. And then: what do my evals look like for how I want this to go? How do I optimize my prompts against those evals? There's infinite depth you can go into. But just start as simple as possible, build a server in half an hour in the language of your choice, and see how the model interacts with the things that matter to you. That's where the fun is. A lot of what makes MCP great is that it adds fun to development; you get models doing things quickly. I'm also quite partial, again, to using AI to help me do the coding. Even during the initial development process, we realized it was quite easy to take all the SDK code (again, as David suggested, pick the language you care about, then pick the SDK) and literally drop the whole SDK into an LLM's context window and say: now that you know MCP, build me a server that does this, this, and this. The results are astounding. It might not be perfect around every single corner, and you can refine it over time, but it's a great way to one-shot something that basically does what you want, and then you iterate from there. And like David said, there has been a big emphasis from the beginning on making servers as easy and simple to build as possible, which certainly helps LLMs build them too. We often find that getting started is 100 to 200 lines of code; it's really quite easy. And if you don't have an SDK, give the subset of the spec you care about to the model, along with another SDK, and have it build you an SDK; it usually works for that subset. Building a full SDK is a different story, but getting a model to tool-call in Haskell, or whatever language you like, is probably pretty straightforward.
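For the "give it to an application over stdio" step, this is roughly what wiring a local server into Claude Desktop looks like. The snippet below just prints the JSON entry; the mcpServers shape follows the published quickstart docs, while the server name and path are hypothetical.

import json

# Hypothetical entry for Claude Desktop's claude_desktop_config.json:
# it tells the app to launch the server as a stdio subprocess.
config = {
    "mcpServers": {
        "demo": {                            # hypothetical server name
            "command": "python",
            "args": ["/path/to/server.py"],  # hypothetical path
        }
    }
}
print(json.dumps(config, indent=2))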
swyx [00:36:32]: Yeah. Sorry.

Alessio [00:36:34]: No, I was going to say: I co-hosted a hackathon at AGI House on personal agents, and one of the personal agents somebody built was an MCP server builder agent: you give it the URL of an API spec and it builds an MCP server for you. Do you see that as where things are today, most servers just being a layer on top of an existing API without much opinion? Is that how it's going to be going forward, AI-generated wrappers exposing APIs that already exist, or are we going to see net-new MCP experiences that you couldn't do before?

Justin/David [00:37:10]: I think both. There will always be value in "I have my data over here, and I want a connector to bring it into my application over there." That use case will certainly remain. And this goes back to the earlier point: a lot of things today default to tool use when some of the other primitives would be more appropriate, so a server could still be that connector, that adapter layer, but adapt onto different primitives over time, which is one way to add more value. But I also think there's plenty of opportunity for MCP servers that do interesting things in and of themselves and aren't just adapters. Some of the earliest examples were the memory MCP server, which gives the LLM the ability to remember things across conversations, or the sequential thinking MCP server, which someone (I shouldn't say who) built, and which gives a model the ability to really think step by step and get better at its reasoning. That one isn't integrating with anything external at all; it just provides a way of thinking for the model.

Justin/David [00:38:27]: Either way, though, AI authorship of the servers is totally possible. I've had a lot of success prompting: hey, I want to build an MCP server that does this thing. Even if the thing isn't adapting some other API but doing something completely original, it's usually able to figure that out too. And to add to that: a good part of what MCP servers will be is these API wrappers, to some degree, and that's going to be valid, because it works and gets you very, very far. But we're just very early in exploring what you can do. As client support for certain primitives gets better (we can talk about sampling, my favorite topic and greatest frustration at the same time), you can easily see way, way richer experiences; we've built some internally as prototypes, and you see some of that in the community already. There are things like a "summarize my favorite subreddits for the morning" MCP server that nobody has built yet, but it's very easy to envision.
And the protocol can totally do this. These are slightly richer experiences, and as people move past "I'm just in this new world where I can hook up the things that matter to me to the LLM" toward real workflows, richer experiences they really want exposed to the model, you'll see these things pop up. But there's a bit of a chicken-and-egg problem at the moment between what clients support and what server authors want to do.

Alessio [00:40:10]: That's kind of my next question, on composability. How do you see that? Do you have plans for it, the "import" of MCPs, so to speak, into another MCP? If I want to build the subreddit one, there's probably going to be the Reddit API MCP and the summarization MCP. How do I build a super-MCP?

Justin/David [00:40:33]: So this is an interesting topic, and there are two aspects to it. The first aspect is: how can I build something agentic that requires an LLM call in one form or fashion, say for summarization, while staying model-independent? That's where part of this bidirectionality comes in. We have a facility for servers to ask the client, which owns the LLM interaction (think of Cursor, which runs the loop with the LLM for you), for a completion, and basically have it summarize something for the server and return the result. Which model does the summarizing depends on which one you've selected in Cursor, not on what the server author ships. The author doesn't bring an SDK and doesn't need an API key; it's completely model-independent. That's one aspect. The second aspect of building richer systems with MCP is that you can easily envision an MCP server that serves something to Cursor or Windsurf or Claude Desktop, but at the same time is itself an MCP client and uses other MCP servers to create a rich experience. Now you have a recursive property, which we quite carefully tried to retain in the design principles; you see it all over the place, in authorization and other aspects of the spec. So you can chain these little bundles that are both server and client, and build graphs, DAGs, out of MCP servers that richly interact with each other. An agentic MCP server can use the whole ecosystem of MCP servers available to it. That's a really cool thing you can do, and people have experimented with it; you'll hopefully see more of it, particularly when you think about auto-selecting and auto-installing servers. There's a bunch you can do there that makes for a really fun experience. Practically, there are some niceties we still need to add to the SDKs to make this really simple and easy to execute, this kind of recursive MCP server that is also a client, or multiplexing the behaviors of multiple MCP servers into one host, as we call it. These are things we definitely want to add but haven't been able to yet; they would go some way toward showcasing things we know are already possible but not taken up much yet.
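To make the "server asks the client for a completion" facility concrete: in the spec this is the sampling/createMessage request, sent from server to client over the same JSON-RPC connection. A hedged sketch of the message shape follows; the method name and parameter names come from the published spec, and the field values are illustrative.

import json

# A server-to-client request asking the host application to run one LLM call
# on the server's behalf. The client picks the model and mediates user consent.
create_message = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarize these posts: ..."},
            }
        ],
        "maxTokens": 400,
        # Optional hints; the client is free to override them.
        "modelPreferences": {"speedPriority": 0.8},
    },
}
print(json.dumps(create_message))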
swyx [00:43:08]: This is very exciting, and I'm sure a lot of people will get ideas and inspiration from it. Is an MCP server that is also a client an agent?

Justin/David [00:43:19]: What's an agent? There are a lot of definitions of agents.

swyx [00:43:22]: Because in some ways you're requesting something and it's going off and doing stuff you don't necessarily see; there's a layer of abstraction between you and the ultimate raw source of the data. You could dispute that. I just don't know if you have a hot take on agents.

Justin/David [00:43:35]: I do think you can build an agent that way. For me, you need to define the difference between an MCP server-plus-client that is just a proxy, and an agent. I think there's a difference, and that difference might be in, for example, using a sampling loop to create a richer experience, having a model call tools while inside that MCP server through these clients. Then you have an actual agent. And yes, it's very simple to build agents that way. There are maybe a few paths here. It definitely feels like there's some relationship between MCP and agents. One possible version: MCP is a great way to represent agents, and maybe some features are missing that would make the ergonomics better, and we should make them part of MCP. Another: MCP makes sense as a kind of foundational communication layer for agents to compose with other agents. Or there could be other possibilities entirely; maybe MCP should specialize and focus narrowly on the AI application side, and not as much on the agent side. It's a very live question, and there are trade-offs in every direction. Going back to the analogy of the God box: one thing we have to be very careful about in designing a protocol and curating or shepherding an ecosystem is trying to do too much. You don't want a protocol that tries to do absolutely everything under the sun, because then it'll be bad at everything too. So the key question, which is still unresolved, is: to what degree do agents naturally fit into this existing model and paradigm, and to what degree is that basically orthogonal?

swyx [00:45:17]: I think once you enable two-way communication, once you allow client and server to be the same thing and let work be delegated to another MCP server, it's definitely more agentic than not. But I appreciate that you keep simplicity in mind and aren't trying to solve every problem under the sun. Cool, I'm happy to move on.
I'm going to double-click on a couple of things I marked, because they coincide with things we wanted to ask you anyway. The first one is simple: how many MCP things can one implementation support? This is the wide-versus-deep question, and it's directly relevant to the nesting of MCPs we just talked about. In April 2024, when Claude was launching one of its first long-context examples, the million-token context example, they said you can support 250 tools, and in a lot of cases you can't actually do that. To me, that's "wide" in the sense that you don't have tools that call tools; you just have the model and a flat hierarchy of tools. But then tool confusion is going to happen: when tools are adjacent, you call the wrong tool and you get bad results, right? Do you have a recommendation for the maximum number of MCP servers enabled at any given time?

Justin/David [00:46:32]: To be honest, there isn't one answer, because to some extent it depends on the model you're using, and to some extent on how well the tools are named and described, to help the model avoid confusion. The dream is certainly that you just furnish all this information to the LLM and it makes sense of everything; that goes back to the future we envision with MCP, where all this information is brought to the model and the model decides what to do with it. But today, practically, maybe in your client application, the AI application, you do some filtering over the tool set, or you run a faster, smaller LLM to filter down to what's most relevant and only pass those tools to the bigger model. Or you could use an MCP server that is a proxy to other MCP servers and does some filtering at that level. I think hundreds, as you referenced, is still a fairly safe bet, at least for Claude; I can't speak for other models. Over time we should just expect this to get better, so we're wary of constraining anything and preventing that. And obviously it depends highly on the overlap of the descriptions: if you have very separate servers that do very separate things, with clear, unique tool names and well-written descriptions, your mileage will be better than if you have a GitLab and a GitHub server in context at the same time, where the overlap is significant, they look very similar to the model, and confusion becomes easier. There are different considerations depending on the AI application, too. If you're building something very agentic, maybe you're trying to minimize how often you go back to the user with a question, or to minimize the configurability in your interface. But if you're building an IDE, or a chat application, it's totally reasonable to have affordances that let the user say: at this moment I want this feature set, and at this other moment I want this different feature set, and maybe not treat the full list as always on, all the time.
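A hedged sketch of that filtering idea follows, using plain keyword overlap as a stand-in for the "faster, smaller LLM" (or an embedding model) and a made-up tool list, so that only the most relevant tools reach the big model.

from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    description: str

def relevance(query: str, tool: Tool) -> int:
    """Crude stand-in for a small-model or embedding relevance score."""
    words = set(query.lower().split())
    doc = set(f"{tool.name} {tool.description}".lower().split())
    return len(words & doc)

def filter_tools(query: str, tools: list[Tool], k: int = 10) -> list[Tool]:
    """Keep only the top-k tools most relevant to the user's request."""
    return sorted(tools, key=lambda t: relevance(query, t), reverse=True)[:k]

# Made-up tool records for illustration.
tools = [
    Tool("github_create_issue", "Create an issue in a GitHub repository"),
    Tool("gitlab_create_issue", "Create an issue in a GitLab project"),
    Tool("get_weather", "Fetch the current weather for a city"),
]
print([t.name for t in filter_tools("open a bug on our github repo", tools, k=2)])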
swyx [00:48:42]: That's where I think the concepts of resources and tools start to blend a little, right? Because now you're saying you want some degree of user control, or application control, and other times you want the model to control it, and now we're choosing subsets of tools. I don't know.

Justin/David [00:49:00]: I think it's a fair point, or a fair concern. The way I think about this, and this is a core MCP design principle, is that at the end of the day the client application, and by extension the user, should be in full control of absolutely everything that's happening via MCP. When we say that tools are model-controlled, what we really mean is that tools should only be invoked by the model. There really shouldn't be an application or user interaction where, as a user, I say "now use this tool." Occasionally you might do that for prompting reasons, but it shouldn't be a UI affordance. The client application or the user deciding to filter out things that MCP servers offer is totally reasonable, though, or even to transform them: you could imagine a client application that takes tool descriptions from an MCP server and enriches them, makes them better. We really want client applications to have full control in the MCP paradigm. In addition, though, and this is very, very early in my thinking, there might be an addition to the protocol where you give the server author the ability to logically group certain primitives together, because they may know some of those logical groupings better, and that could encompass prompts, resources, and tools at the same time. We can have a design discussion on that. Personally, my take would be that those should be separate MCP servers, and the user should be able to compose them together. But we can figure it out.

Alessio [00:50:31]: Is there going to be an MCP standard library, so to speak, of canonical servers: "do not build these, we'll take care of them," building blocks that people can compose? Or do you expect people to just keep rebuilding their own MCP servers for a lot of things?

Justin/David [00:50:49]: I think we will not be prescriptive in that sense. Let me rephrase that: I have a long history in open source, and I feel the bazaar approach to this problem is useful, letting the best and most interesting option win. I don't think we want to be very prescriptive. I definitely foresee, and this already exists, that there will be 25 GitHub servers and 25 Postgres servers and whatnot. That's all cool, that's good, and they all add something in their own way. Eventually, over months or years, the ecosystem will converge on a set of very widely used ones. I don't know if you would call that winning, but they'll be the most used ones, and I think that's completely fine.
Being prescriptive about this isn't of any use. I do think, of course, that there will be MCP servers, and you see them already, driven by companies for their products, and those will probably be the canonical implementations. If you want to work with Cloudflare Workers through an MCP server, you'll probably want to use the one developed by Cloudflare. There's maybe a related thing here, too, one big thing worth thinking about where we don't have solutions completely ready to go: the question of trust, or vetting is maybe a better word. How do you determine which MCP servers are the good, safe ones to use? It's fine for there to be many implementations of GitHub MCP servers, but you want to make sure you're not using ones that are really sus, right? So we're trying to think about how to endow reputation. If, hypothetically, Anthropic says "we've vetted this, it meets our criteria for secure coding," how can that be reflected in an open model where everyone in the ecosystem benefits? We don't really know the answer yet, but it's very much top of mind.

Alessio [00:52:49]: I think that's a great design choice of MCP: it's language-agnostic. To my knowledge there's no official Anthropic Ruby SDK, nor an OpenAI one; Alex Rudall does a great job building those. But with MCP you don't have to translate an SDK into all these languages. You do one interface, and bless that one interface as Anthropic. That was nice.

swyx [00:53:18]: I have a quick answer to this trust thing. Obviously five or six different registries have already popped up, and you've announced that an official registry is on the way. A registry is very tempted to offer download counts, likes, reviews, some kind of trust signal. I think that's brittle: no matter what social proof you offer, the next update can compromise a trusted package, and that's the one that does the most damage, right? Setting up a trust system creates the damage that comes from abusing the trust system. So I actually want to encourage people to try MCP Inspector, because all you have to do is look at the traffic, and that goes for a lot of security issues.

Justin/David [00:54:03]: Yeah, absolutely. Cool. And that's the very classic supply-chain problem that all registries effectively have. There are different approaches to it: you can take the Apple approach, vetting things with an army of automated systems and review teams, and you effectively build an app store. That works in a certain set of ways, but I don't think it works in an open source ecosystem, where you always end up with a registry approach, similar to npm and PyPI.

swyx [00:54:36]: And they all inherently have these supply-chain attack problems, right? Yeah, yeah, totally. Quick time check: I think we're going to go for another 20, 25 minutes. Is that okay for you guys?
Okay, awesome. Cool. I wanted to double-click and take the time. We previewed a little of the future stuff, so I'll leave the coming things (the registry, stateless servers, remote servers, all of that) to the end. I wanted to double-click a little more on the launch: the core servers that are part of the official repo. Some of them are special ones, like the ones we already talked about, so let me pull them up. You mentioned memory, you mentioned sequential thinking, and I really encourage people to look at these, what I call special servers. They're not normal servers in the sense of wrapping some API so that it's easier to interact with it than with the raw API. I'll highlight the memory one first, just because there are a few memory startups, but actually you may not need them if you just use this one. It's about 200 lines of code, super simple. Obviously if you need to scale it up, you should probably use something more battle-tested, but if you're just introducing memory, I think it's a really good implementation. I don't know if there are special stories you want to highlight with some of these.

Justin/David [00:56:00]: No, I don't think there are special stories. A lot of these, not all of them, but a lot, originated from that hackathon I mentioned before, where folks got excited about the idea of MCP. People inside Anthropic who wanted to have memory, or who wanted to play around with the idea, could suddenly prototype something using MCP in a way that wasn't possible before. You don't have to become the end-to-end expert; you don't have to have access to a private, proprietary code base; you can just extend Claude with this memory capability. That's how a lot of these came about, along with thinking about the breadth of functionality we wanted to demonstrate at launch.

swyx [00:56:47]: Totally. And I think that's partially why your launch was successful: you launched with a sufficiently spanning set of examples, and people just copy, paste, and expand from there. I would also highlight...
HTML All The Things - Web Development, Web Design, Small Business
Choosing the right code editor can make or break a web developer's workflow. In this episode, we dive into the Top 5 Code Editors for Web Developers—exploring their strengths, quirks, and everything in between. From the widely-loved Visual Studio Code to the blazing-fast newcomer Zed, we discuss which editors could suit your coding style. Whether you're a fan of Vim's keyboard mastery, WebStorm's all-in-one features, or experimenting with modern tools like Cursor, there's something here for everyone. Tune in to find the perfect fit for your development journey! Show Notes: https://www.htmlallthethings.com/podcasts/top-5-code-editors-for-web-developers
In the news today: For our first story of the day focusing on campus news, Sunrise Movement protests federal policies, MSU's response. For our second story focusing on more campus news, Associated Students of MSU wrap up 61st general assembly session. For our final story of the day focusing on campus events, VIM fashion show gives a look 'Behind the Seams'.
Why did Nchaiv Khwb Xyooj and so many other Hmong come to start new lives in Australia, and will Hmong customs and cultural traditions still be important in the years ahead?
In this vibrant episode of The (Relate)able Podcast, hosts Sherween and Fiona embark on an unforgettable journey to Guadeloupe to immerse themselves in the island's renowned Carnival. Captivated by the profound spiritual essence of the traditional masquerade ("mas"), they share their personal experiences and reflections. Sherween candidly discusses the physical challenges she faced, having underestimated the demands of the lively marches, while Fiona's dedicated training regimen paid off, allowing her to fully embrace each day's festivities. Both hosts have returned with a deep admiration and love for Guadeloupe's rich cultural heritage. In our "Under the Mango Tree" segment, listeners are treated to the enchanting sounds of the conch shell and drumming by the VIM mas band. Additionally, all three hosts find joy in reading and responding to heartfelt comments from our YouTube channel and other listener feedback. Links: Karata, Guadeloupe Tourism Board, AVPAG, Marie Galant, Voukoum. Support this show http://supporter.acast.com/relateable. Hosted on Acast. See acast.com/privacy for more information.
Send us a textIn this episode of CareTalk, John Driscoll sits down with Oron Afek, CEO of Vim, to discuss how Vim is transforming healthcare by creating a smarter, more connected ecosystem for doctors and patients. Oron shares his entrepreneurial journey, from his early days in the Israeli military to building a healthcare platform that integrates seamlessly with electronic health records (EHRs) to improve clinical decision-making. Learn how Vim's technology is helping doctors make better decisions at the point of care, streamlining workflows, and driving better patient outcomes, all while empowering developers to build innovative healthcare solutions.
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
DShield Traffic Analysis using ELK The "DShield SIEM" includes an ELK dashboard as part of the Honeypot. Learn how to find traffic of interest with this tool. https://isc.sans.edu/diary/DShield%20Traffic%20Analysis%20using%20ELK/31742 Zen and the Art of Microcode Hacking Google released details, including a proof of concept exploit, showing how to take advantage of the recently patched AMD microcode vulnerability (CVE-2024-56161) https://bughunters.google.com/blog/5424842357473280/zen-and-the-art-of-microcode-hacking VIM Vulnerability An attacker may execute arbitrary code by tricking a user into opening a crafted tar file in VIM https://github.com/vim/vim/security/advisories/GHSA-wfmf-8626-q3r3 Snail Mail Fake Ransom Note A copycat group is impersonating ransomware actors. The group sends snail mail to company executives claiming to have stolen company data and threatening to leak it unless a payment is made. https://www.guidepointsecurity.com/blog/snail-mail-fail-fake-ransom-note-campaign-preys-on-fear/
The trip into Vim's brain begins as the gang get shrunk down to mech-roscopic size. Amongst viruses, bullets, harsh words, thrown porcelain, and revelations abound, will the crew be able to save their friend? Matcha has a shocking discussion. Moxie bears witness. Roadkill has a brain blast. Silver knocks them down a peg. Thank you to our Patreon supporters: Austin Sietsema, Eleanor Lady, Daffodil, RiverSupportsArtists, Gnome, Creaux, James Knevitt, Rocky Loy, Tau, JC Darcy, Robert Ruthven, AlexCrow, Jordan Myers, Keenan Geyer, DragonGirlJosie, Shonn Briscoe, Andor, & Diana Plante!! Bring Your Own Mech is a biweekly Lancer RPG actual play podcast of four Lancers thrown together by circumstance, destiny... and credits. Featuring: Reed (@ReedPlays) as the Game Master Amelia (@am_ridz_music) as Matcha Aki (@akinomii_art) as Moxie Dusty (@Dustehill) as Roadkill Aubrey (@MadQueenCosplay) as Scarlett Find us on Bluesky @bringyourownmech.bsky.social, and remember: batteries are not included. Lancer is created by Tom Parkinson Morgan (author of Kill Six Billion Demons) and Miguel Lopez of Massif Press. Bring Your Own Mech is not an official Lancer product; it is a third party work, and is not affiliated with Massif Press. Bring Your Own Mech is published via the Lancer Third Party License. Lancer is copyright Massif Press. Support the official release at https://massif-press.itch.io Support us on Patreon! https://www.patreon.com/bringyourownmech Get the official season 1 album, Bring Your Own Mixtape vol. 1! https://ownmech.bandcamp.com/album/bring-your-own-mixtape-vol-1 DRC CUSTOM OUTFITTERS Download: https://ownmech.itch.io/drc-custom-outfitters-a-lancer-supplement Pilot NET Discord Server: https://discord.gg/p3p8FUm9b4
The gang assesses the damage as Vim goes catatonic. Tensions run high as the Save Point doctors do their best to save Moxie's life, but is there hope for a cure? And why is everyone laughing right now? Matcha hones her mime routine. Moxie enjoys tea time. Roadkill makes a house call. Scarlett gets pulled in for one last job. Thank you to our Patreon supporters: Austin Sietsema, Eleanor Lady, Daffodil, RiverSupportsArtists, Gnome, Creaux, James Knevitt, Rocky Loy, Tau, JC Darcy, Robert Ruthven, AlexCrow, Jordan Myers, Keenan Geyer, DragonGirlJosie, Shonn Briscoe, Andor, & Diana Plante!! Bring Your Own Mech is a biweekly Lancer RPG actual play podcast of four Lancers thrown together by circumstance, destiny... and credits. Featuring: Reed (@ReedPlays) as the Game Master Amelia (@am_ridz_music) as Matcha Aki (@akinomii_art) as Moxie Dusty (@Dustehill) as Roadkill Aubrey (@MadQueenCosplay) as Scarlett Find us on Bluesky @bringyourownmech.bsky.social, and remember: batteries are not included. Lancer is created by Tom Parkinson-Morgan (author of Kill Six Billion Demons) and Miguel Lopez of Massif Press. Bring Your Own Mech is not an official Lancer product; it is a third party work, and is not affiliated with Massif Press. Bring Your Own Mech is published via the Lancer Third Party License. Lancer is copyright Massif Press. Support the official release at https://massif-press.itch.io Support us on Patreon! https://www.patreon.com/bringyourownmech Get the official season 1 album, Bring Your Own Mixtape vol. 1! https://ownmech.bandcamp.com/album/bring-your-own-mixtape-vol-1 DRC CUSTOM OUTFITTERS Download: https://ownmech.itch.io/drc-custom-outfitters-a-lancer-supplement Pilot NET Discord Server: https://discord.gg/p3p8FUm9b4
ZFS Storage Fault Management, FreeBSD 14.2-RELEASE Announcement, I feel that NAT is inevitable even with IPv6, Spell checking in Vim, OpenBSD Memory Conflict Messages, The Biggest Shell Programs in the World, and more
NOTES
This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow)
Headlines
ZFS Storage Fault Management (https://klarasystems.com/articles/zfs-storage-fault-management-linux/?utm_source=BSD%20Now&utm_medium=Podcast)
FreeBSD 14.2-RELEASE Announcement (https://www.freebsd.org/releases/14.2R/announce/)
News Roundup
I feel that NAT is inevitable even with IPv6 (https://utcc.utoronto.ca/~cks/space/blog/tech/IPv6AndStillHavingNAT)
Spell checking in Vim (https://www.tumfatig.net/2024/spell-checking-in-vim/)
OpenBSD Memory Conflict Messages (https://utcc.utoronto.ca/~cks/space/blog/unix/OpenBSDMemoryConflictMessages)
The Biggest Shell Programs in the World (https://github.com/oils-for-unix/oils/wiki/The-Biggest-Shell-Programs-in-the-World)
Beastie Bits
The Connectivity of Things: Network Cultures since 1832 (https://direct.mit.edu/books/oa-monograph/5866/The-Connectivity-of-ThingsNetwork-Cultures-since)
Initial list of 21 EuroBSDcon 2024 videos released (https://www.undeadly.org/cgi?action=article;sid=20241130184249)
-current now has more flexible performance policy (https://www.undeadly.org/cgi?action=article;sid=20241129093132)
OpenBSD 5.1 on Sun Ultra 5 (https://eggflix.foolbazar.eu/w/fa211a4f-6984-4c03-a6d2-b8c329d9459d)
Tarsnap
This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.
Feedback/Questions
https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/592/feedback/Phillip%20-%20regressions.md
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
Join us and other BSD Fans in our BSD Now Telegram channel (https://t.me/bsdnow)
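For a quick taste of the spell-checking item before reading the linked write-up, here is a minimal vimrc sketch using Vim's built-in spell support (the exact settings are our illustration, not taken from the article):
" Enable built-in spell checking for US English in the current buffer
set spell spelllang=en_us
" ]s and [s jump between misspellings; z= offers suggestions; zg marks a word as good
Other languages work the same way through spelllang, provided the matching spell file is available.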
In the final episode of 2024, Brian Delamont returns to the podcast to talk about transformed direction. Repentance isn't just a one-time event at the time of salvation but a daily surrender and redirection. Joel 2:12-13 “We can rationalize or explain just about anything in our behavior and our thinking rather than truly commit to a transformed direction.” “God is longing for us to genuinely change, for us to live lives fully integrated and aligned with Him.” Hear more about VIM on Episode 157 Listen to “Come, Thou Fount” here and read more about the story behind the song here Acts 2 “Repentance is not a one time for all time thing, but it is an ongoing act of our relationship with God.” “To repent is turning from sin and turning to God.” “It's impossible to underestimate the power of a transformational change in direction. It can break generational patterns of behavior and bring freedom to entire family lines, entire communities.” A Long Obedience in the Same Direction by Eugene Peterson “God does give us moments, like mile markers along the way, that show us progress.” Mark 1:15 “Transformation is not me mustering up enough grit or resolve to somehow change my life. Rather, I become a new creation in Christ Jesus. And this is made possible through the very life of Christ, the Holy Spirit living in me, pouring His life into my life in every way.” December Reflection: What do I know needs changing? What do I need to repent of, so that I live fully into God's future for me in the coming year? What's changing our lives: Keane: Neighborhood friends Heather: Doing “sprint” work on projects with her team Brian: Eggnog in his coffee Weekly Spotlight: GDQ International Christian School We'd love to hear from you! podcast@teachbeyond.org Podcast Website: https://teachbeyond.org/podcast Learn about TeachBeyond: https://teachbeyond.org/
Monitoring your house with security cameras, automating a 3D printer, yet another note taking app, a great FOSS digital audio workstation, browser automation, converting Office documents to markdown, markdown in Vim, and why we think Raspberry Pi OS shouldn't change its default desktop environment. Discoveries motion & frigate Octoprint PSU control with Home Assistant...
Deploying pNFS file sharing with FreeBSD, What To Use Instead of PGP, The slow evaporation of the FOSS surplus, I feel that NAT is inevitable even with IPv6, Spell checking in Vim, Iconic consoles of the IBM System/360 mainframes, 55 years old, and more
NOTES
This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow)
Headlines
Deploying pNFS file sharing with FreeBSD (https://klarasystems.com/articles/deploying-pnfs-file-sharing-with-freebsd/?utm_source=BSD%20Now&utm_medium=Podcast)
What To Use Instead of PGP (https://soatok.blog/2024/11/15/what-to-use-instead-of-pgp/)
The slow evaporation of the FOSS surplus (https://www.baldurbjarnason.com/2024/the-slow-evaporation-of-the-foss-surplus/)
News Roundup
FreeBSD 14 on the Desktop (https://www.sacredheartsc.com/blog/freebsd-14-on-the-desktop/)
Iconic consoles of the IBM System/360 mainframes, 55 years old (https://www.righto.com/2019/04/iconic-consoles-of-ibm-system360.html)
Tarsnap
This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
Join us and other BSD Fans in our BSD Now Telegram channel (https://t.me/bsdnow)
So-de-licious. Hairy Metal with Dunaway. Tomorrow we Celebrate FREEDOM... Freedom from TMS. Rah! Rah! Wrong team! The Cracks in Tina. Well Pat my Cheese steak. Certified Companions Let It Rip. F Minor Bombs. Tom's got the Vim. killer cords from outer space. Van batting for the other team. Rat-a-too-tees. Sorry Canada. Soda bubbles are devil's farts. They had hair and were a band. Dilated Anus and Euphoric Recommentals and more on this episode of The Morning Stream. Hosted on Acast. See acast.com/privacy for more information.