This year marks the centenary of the death of the Karst poet Srečko Kosovel. Events have been taking place since the beginning of the year, and yesterday brought a special one: on the train to Zagorje. 23 February was a fateful day for Srečko Kosovel; that is when he fell ill, and he never recovered from that illness. On the train were fifty people from the Karst region, creating in Kosovel's style. They went through current newspaper articles and cut out words to compose poetry, combined stanzas to assemble a Kosovel poem, and wrote manifestos. In the report you will hear how they understood the essence of poetic creation and truly connected with who Srečko Kosovel was and what he represents.
The private sector (IP) expects industrial operations to normalize on Tuesday after the fall of "El Mencho". Mexico will pay lower tariffs to the US: Marcelo Ebrard. Edomex puts its Tissue Bank at the service of children with cancer. More information in our podcast.
Mexico's Security Cabinet is already responding to blockades in Jalisco. There are more than 2 million cases of rheumatoid arthritis in Mexico: UNAM. Secret Service shoots dead an armed man at a Trump residence. More information in our podcast.
After the snowstorm that hit eastern Slovenia yesterday, many people are still without electricity. More than 50 generators are operating in the affected area, and Civil Protection commander Srečko Šestan stresses that vulnerable groups will be given priority. Other topics: - German Chancellor Merz expects a reduced burden on the economy after the lifting of US tariffs. - Iranian President Pezeshkian insisted that the country will not bow to global pressure. - Due to a shortage of judicial police officers, courts are seeing hearings cancelled ever more often.
In this interview Vijoy Pandey from Cisco Outshift discusses the "Internet of Agents," the donation of the AGNTCY framework to the Linux Foundation, and the new open-source SRE tool "Cape." Plus, how Swisscom is using agentic workflows to validate networks. Big thanks to Cisco for sponsoring this video and sponsoring my trip to Cisco Partners' Summit 2025.

// Vijoy Pandey SOCIALS //
LinkedIn: / vijoy
X: https://x.com/vijoy

// David's SOCIAL //
Discord: discord.com/invite/usKSyzb
Twitter: www.twitter.com/davidbombal
Instagram: www.instagram.com/davidbombal
LinkedIn: www.linkedin.com/in/davidbombal
Facebook: www.facebook.com/davidbombal.co
TikTok: tiktok.com/@davidbombal
YouTube: / @davidbombal
Spotify: open.spotify.com/show/3f6k6gE...
SoundCloud: / davidbombal
Apple Podcast: podcasts.apple.com/us/podcast...

// MY STUFF //
https://www.amazon.com/shop/davidbombal

// SPONSORS //
Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com

// MENU //
00:00 - Coming Up
00:37 - Introduction
01:00 - What is Internet of Agents?
02:11 - The 4 Steps of Internet of Agents
03:25 - Project AGNTCY
04:40 - A DNS of the Agentic Internet
06:07 - To What End is the Agency Stack
07:21 - Use-cases for the Agency Stack
11:28 - The Future of the Agency Stack
12:55 - Guardrails for Agents
13:38 - Timeline for the Integration of Agents
14:51 - Outro

Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel!

Disclaimer: This video is for educational purposes only.

#agenticai #agntcy #cisco @Cisco
In this episode, we sit down with Anish Agarwal, CEO and Co-founder of Traversal, for a deep dive into the substance behind the AI noise. Anish, a former Columbia professor and researcher in causal AI, shares his unique journey from academia to founding an organization disrupting the site reliability engineering (SRE) and observability space. We explore the critical difference between "AI wrappers" and companies building genuine infrastructure, the emergence of the "Forward Deployed Engineer" in the sales pod, and how to identify technical moats in a world where models are rapidly evolving.
As AI systems move rapidly from experimentation into production, organizations are discovering that adoption alone is not the hard part; understanding, governing, and trusting AI in live environments is. In this episode of Tech Transformed, Shubhangi Dua speaks with Camden Swita, Head of AI at New Relic, about why AI observability has become a critical requirement for modern enterprises, particularly as agentic AI and AI-driven operations take on increasingly autonomous roles. The discussion explores how traditional observability models fall short when applied to probabilistic systems, why many AI ops initiatives stall at proof of concept, and what security and IT leaders must prioritize to safely scale AI in production.

Be the first to see how intelligent observability takes you beyond dashboards to agentic AI with business impact at New Relic Advance, February 24, 2026.

Why AI Adoption Is Outpacing Operational Readiness
While AI adoption is accelerating rapidly, most organizations still lack visibility into what their AI systems are actually doing once deployed. Generative AI is already widely used for natural-language querying, coding assistants, customer support bots, and increasingly within IT operations and SRE workflows. As these systems move into production, new challenges emerge around cost control, governance, performance quality, and trust. Leaders recognize AI's potential value, but without deep observability they struggle to determine whether AI-enabled systems are delivering consistent outcomes or introducing hidden operational and security risks.

How Observability Must Evolve for Agentic AI and AI Ops
The episode then examines how observability itself must evolve to support agentic and autonomous AI systems. While core observability principles still apply, AI introduces a new layer of complexity that requires visibility into model behavior, agent decision-making, and multi-step workflows. Modern AI observability extends traditional application performance monitoring by capturing telemetry from LLM interactions, agent orchestration layers, and automated evaluations of output quality against intended use cases. Without this visibility, teams are effectively operating blind: unable to diagnose failures, validate compliance, or confidently deploy AI at scale. At the same time, AI is increasingly being embedded into observability platforms to reduce noise, accelerate root-cause analysis, and improve incident response.

Making Agentic AI Work in Practice
Successful adoption starts with low-risk, high-friction tasks such as incident triage, dashboard interpretation, and runbook summarization, rather than fully autonomous remediation. These use cases deliver immediate productivity gains while preserving human oversight. Over time, stronger feedback loops, better context management, and human-in-the-loop learning allow agents to become more reliable and useful. Looking ahead, Camden predicts that 2026 will be a turning point for agentic AI in production, driven by maturing AI observability platforms, richer semantic data, and knowledge graphs that connect technical telemetry to real business outcomes.

Listen to: Are "Vibe-Coded" Systems the Next Big Risk to Enterprise Stability?

When Vibe Code Breaks Ops
AI-generated code is pushing prototypes into production faster than ops can cope. How observability becomes the...
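The telemetry capture described above can be sketched in a few lines. This is a minimal illustration only, not New Relic's API: `llm_telemetry`, `fake_llm`, and the event fields are hypothetical names, and the stub stands in for a real LLM client.

```python
import time
import functools

def llm_telemetry(record):
    """Decorator that records latency, prompt/reply sizes, and errors
    for each LLM call. `record` is any callable that receives the
    telemetry event (e.g. a logger or metrics exporter)."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(prompt, **kwargs):
            start = time.perf_counter()
            event = {"prompt_chars": len(prompt), "error": None}
            try:
                reply = fn(prompt, **kwargs)
                event["reply_chars"] = len(reply)
                return reply
            except Exception as exc:
                event["error"] = type(exc).__name__
                raise
            finally:
                # Emit the event whether the call succeeded or failed.
                event["latency_s"] = time.perf_counter() - start
                record(event)
        return inner
    return wrap

# Stub model standing in for a real LLM client.
events = []

@llm_telemetry(events.append)
def fake_llm(prompt):
    return "ok: " + prompt

fake_llm("summarize the incident")
```

In a real deployment the `record` callable would forward events to an observability backend instead of an in-memory list.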
TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation
Is traditional performance testing becoming obsolete? In this episode, performance engineering expert Akash Thakur shares why AI is fundamentally transforming load testing, scripting, observability, and shift-left strategies. With 17 years of real-world enterprise experience, Akash explains how AI-augmented tools are already reducing scripting time by 30%, improving analysis speed, and helping teams move from reactive performance testing to predictive intelligence.

You'll learn:
• How AI is accelerating performance scripting and analysis
• Why shift-left performance testing is finally becoming realistic
• The role of structured data in predictive QA models
• How to test AI applications (LLMs, GPUs, inference throughput) differently than traditional web apps
• What the future role of performance engineers looks like: architect, not script writer

If you're a performance tester, SRE, QA leader, or DevOps engineer wondering how AI will impact your role, this episode gives you practical, actionable insights you can apply immediately.
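The point about testing LLM apps on inference throughput rather than classic requests-per-second can be made concrete with a small sketch. This is purely illustrative, with hypothetical numbers; `throughput_stats` is not from any tool mentioned in the episode.

```python
def throughput_stats(runs):
    """Given (tokens_generated, seconds) pairs for a batch of
    inference calls, return aggregate tokens/sec and nearest-rank
    p95 latency -- the metrics that matter for LLM workloads."""
    total_tokens = sum(t for t, _ in runs)
    total_time = sum(s for _, s in runs)
    latencies = sorted(s for _, s in runs)
    idx = max(0, int(round(0.95 * len(latencies))) - 1)
    return {
        "tokens_per_sec": total_tokens / total_time,
        "p95_latency_s": latencies[idx],
    }

# Hypothetical timings for four generations.
stats = throughput_stats([(128, 1.0), (256, 2.0), (64, 0.5), (512, 4.5)])
```

Unlike a web app, the "slow" call here may simply be the one asked to generate the most tokens, which is why tokens/sec and latency must be read together.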
The referendum is behind us, in which we rejected the law that would have allowed assisted suicide. Among those opposed to the law were, above all, doctors, who on social media renamed themselves doctors for palliative care. That is precisely one of the central challenges if we truly want to help the incurably ill. In the programme Srečanja we hosted three distinguished doctors from Styria, who spoke about the importance of palliative care, educating lay people in this field, and our society's attitude toward illness and dying. Together we will look for ways to become a more compassionate society in which everyone feels accepted.
Is your platform engineering initiative struggling to deliver results? The problem might not be your tools or technology at all.

In this episode, Sam Barlien, Community Organizer at Platform Engineering (the world's largest platform engineering community), shares insights from speaking with nearly 400 engineering leaders last year about why their platform initiatives succeed or fail. The biggest revelation: it's almost never about the tools. Sam explains why treating your internal platform like a product, complete with user research, documentation, and a product-manager mindset, is the key differentiator between real platform engineering and just a rebranded operations team. He breaks down how to start small with a minimum viable platform, measure what actually matters, and build golden paths that developers want to follow. The conversation also covers how AI is both accelerating the need for platform engineering and transforming how platforms are built and operated.

Key Topics Discussed:
• What platform engineering really means (hint: it's product management)
• Why DevOps and SRE often fail without product thinking
• The "Golden Path" vs "Golden Cage" approach to developer experience
• How to measure ROI and pitch platform engineering to executives
• The symbiotic relationship between AI and platform engineering
• Why starting with a Minimum Viable Platform beats big-bang transformations
• PlatformCon 2025 key takeaways and emerging trends

Timestamps:
(00:00:00) Trailer & Intro
(00:03:16) What Background Do You Need for Platform Engineering?
(00:06:32) How Does Storytelling Help in Platform Engineering?
(00:08:53) What Is Platform Engineering?
(00:12:27) Why Are Organizations Adopting Platform Engineering?
(00:19:51) What's the Difference Between DevOps, SRE, and Platform Engineering?
(00:23:25) Why Is the "Plug and Play" Approach to Tools a Trap?
(00:28:45) How Do You Pitch Platform as a Product Instead of a Project?
(00:34:01) How Do You Measure the ROI of Platform Engineering?
(00:40:42) What Is the Golden Path in Platform Engineering?
(00:47:12) What Were the Key Takeaways from PlatformCon 2025?
(00:53:41) How Does Platform Engineering Leverage AI?
(00:58:41) What Are the Hidden Costs of AI-Generated Code?
(01:04:01) Why Is Platform Engineering Actually Product Management?
(01:07:12) 1 Tech Lead Wisdom

Sam Barlien's Bio
Sam Barlien is a community organiser for the Platform Engineering Community. He is a tech nerd and has been involved in tech communities for more than 10 years. He helps manage Platform Weekly, co-hosts PlatformCon, and drives the community Ambassador program, blog, and YouTube channel.

Follow Sam:
LinkedIn – linkedin.com/in/sam-barlien-3b2579184
Platform Engineering – platformengineering.org
PlatformCon – platformcon.com
Weave Intelligence – weaveintelligence.io

Like this episode?
Show notes & transcript: techleadjournal.dev/episodes/247
Follow @techleadjournal on LinkedIn, Twitter, and Instagram.
Buy me a coffee or become a patron.
A blessing in disguise brought cross-country skier Valerij Gontar from Russia to Slovenia four years ago. Here he works and trains at the same time, trying to achieve the best possible results in competitions. He has grown to love Slovenia, and he hopes that Slovenia will grow to love him too.
Dane Zajc's poetic drama Voranc returns to the stage of SNG Drama Ljubljana in a new adaptation by dramaturg Nik Žindaršič, directed by Živa Bizovičar. At the Ljubljana Puppet Theatre, a show for the youngest audiences titled Učiteljica (The Teacher) premiered yesterday, based on the picture book of the same name by Italian artist Susanna Mattiangeli. Tonight at 7:30 p.m. in the Union Hall in Maribor, the third concert of the Orchestral Cycle of the Narodni dom Maribor concert office begins, while at the Lojze Spacal Gallery at Štanjel Castle the first part of the exhibition Eno je sveto: preprosto in pristno (One Thing Is Sacred: Simple and Genuine) opened yesterday, conceived as a tribute to poet Srečko Kosovel on the 100th anniversary of his death. We continue with visual art in Ljubljana, where Gallery P74 will open an exhibition by photographer Tadej Vaukman. And in the town of Trenčín in western Slovakia, the opening of the European Capital of Culture begins with a street festival and light art. Enjoy the programme!
This week, we're joined by Anish Agarwal, CEO of Traversal, an AI-native site reliability platform helping teams detect, diagnose, and remediate incidents before they spiral into prolonged downtime.

Anish shares how Traversal is tackling the full lifecycle of reliability work: identifying what caused an incident, determining which signals actually mattered during alert triage, and helping teams plan how their infrastructure should evolve over time. As modern systems grow more complex, spanning microservices, serverless, and multi-cloud environments, the surface area for failure continues to expand, especially as AI-generated code accelerates change faster than humans can reasonably keep up.

We talk about why observability has produced some of the largest outcomes in software, why the traditional dashboard-first model is breaking down, and why the next generation of tooling needs agents that can search, reason over, and act on observability data, not just display it. Anish also breaks down where "self-healing systems" are real today, where expectations need to be reset, and why many AI-SRE products risk building faster horses instead of rethinking the experience entirely.

Episode chapters:
1:50 — Anish's background and early building
2:30 — The origins of SRE and why it emerged
8:40 — Research-driven thinking in product and engineering
10:07 — Building for large enterprise environments
13:00 — AI-driven code and the growing infrastructure surface area
16:11 — Preparing infrastructure for an AI-native future
20:10 — Fear, trust, and operating critical systems
23:05 — From detection to automated action
24:20 — How the SRE role is changing
26:05 — Hiring and building the right team
29:08 — Raising capital from Sequoia and KP
32:38 — Reflections and lessons learned
35:50 — Quick-fire round

This episode is brought to you by Grata, the leading deal-sourcing platform for private equity. Grata's AI-powered search, investment-grade data, and intuitive workflows help you find and win the right deals faster. Visit grata.com to book a demo.

This episode is also sponsored by Overlap, the AI-powered app that uses LLMs to surface the best moments from any podcast. Overlap reads full transcripts, finds the most relevant clips, and stitches them into a personalized stream of insights. Tap into podcasts as a real information source with Overlap 2.0, now available on the App Store.
CDMX Health does not plan to make face masks mandatory over measles. Support program reactivated for non-regularized foreign cars. Blood donors urgently needed at the Ecatepec General Hospital. More information in our podcast.
Mexico expresses condolences to Canada. Free vaccination in Azcapotzalco. Carlos Castellanos debuts a new news slot on Radio Centro Noticias. More information in our podcast.
Join us for a discussion with Carla Geisser of Layer Aleph, a company focused on "crisis engineering". Carla distinguishes a crisis from a standard incident by noting that a crisis is novel and lacks a playbook. She outlines five criteria for a true crisis: fundamental surprise, broken critical functions, high visibility, a rigid deadline (unlike internal tech deadlines), and perception breakdown. Crises often arise in organizations that struggle to admit computers control core decisions, leading to complex, glued-together systems. Carla emphasizes that SRE-adjacent skills are essential for connecting the dots and exposing the full system. The key takeaway for SREs is to recognize when a true crisis is happening, as leadership will only be willing to "break rules" and enable substantive change once three of these criteria are met.
The National Baccalaureate opens up better job opportunities. Fire consumes 183 vehicles in Tlaxcala. Construction of the Utopía del Maíz will benefit 40,000 residents. More information in our podcast.
CIDE assigned 42% of positions that should have gone to professors to administrative staff; the SRE (Mexico's foreign ministry) reiterates its call to travel "responsibly" and warns of possible ICE raids; ammunition manufactured for the US Army ends up in the hands of narcos in Mexico, with the Lake City plant the most critical link; Ukraine strikes two Russian border regions with missiles, at least one dead and six wounded reported; Chinese New Year celebration arrives in CDMX, thousands attend the parade; Centrobús will have to coexist with 13 CDMX transit routes; Super Bowl LX: these are the strangest bets; Super Bowl LX: these are the Latino players taking part. A podcast from EL UNIVERSAL. Hosted on Acast. See acast.com/privacy for more information.
We must seek happiness at home … Narrated by: Branko Šturbej. Written by: Metka Cotič. Recorded in the studios of Radio Slovenija, 2005.
This episode of the Prodcast tackles the challenges of maintaining AI safety and alignment in production. Guests Felipe Tiengo Ferreira and Parker Barnes join hosts Matt Siegler and Steve McGhee to discuss AI model safety, from examining content to emerging security risks. The discussion emphasizes the vital role of SREs in managing safety at scale, detailing multi-layered defenses, including system instructions, LLM classifiers, and Automated Red Teaming (ART). Felipe and Parker dive into the evolving world of AI safety, from core product policies to the groundbreaking Frontier Safety Framework. The guests explore the need for SRE principles like drift detection and context observability. Finally, they raise concerns about the velocity of AI development compressing long-term research, urging the industry to collaborate and share vocabulary to address rapidly emerging risks.
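Drift detection, one of the SRE principles the guests call for, can be illustrated with a classic metric. A minimal sketch, assuming model outputs have already been bucketed into bins; the Population Stability Index used here is one common choice, not necessarily what the teams in the episode use.

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions
    (lists of counts over the same bins). A PSI above roughly 0.2 is
    a common rule of thumb for significant drift."""
    e_total, a_total = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        # Smooth empty bins so the log stays defined.
        e_pct = max(e / e_total, 1e-6)
        a_pct = max(a / a_total, 1e-6)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

baseline = [50, 30, 20]   # e.g. benign / borderline / flagged outputs
today    = [20, 30, 50]   # distribution after a model or prompt change
drifted = psi(baseline, today) > 0.2
```

An alert on `drifted` is the kind of signal that lets safety classifiers and system instructions be re-examined before problems compound.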
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
The Centro Histórico will celebrate Chinese New Year this Saturday. The SRE advises avoiding travel to Iran, Lebanon, and Israel. Cyberstalking: persistent and dangerous digital violence. More information in our podcast.
Real-time analytics at petabyte scale isn't just a technical challenge; it's a business survival requirement. Catherine Johnson, VP of Global Solutions Engineering at Hydrolix, joins the show to deconstruct the "impossible" architecture required to power the 2025 Super Bowl broadcast for Fox Sports. From managing 1.4 petabytes of daily log data to the brutal reality of why traditional auto-scaling fails during mission-critical events, Catherine reveals the strategic framework behind being a "Truth Teller" in the high-stakes world of Solutions Engineering.

Key Takeaways

1. Data Architecture as a Competitive Moat
- Normalization is Non-Negotiable: At petabyte scale, you cannot afford "dirty" data. Success requires normalizing disparate CDN logs, matching units (ms vs. s) and handling recursive URL encoding, into a single, queryable schema.
- Indexing vs. Regex: Computational intensity kills performance. Strategic indexing for exact matches must replace regular expressions for high-frequency queries to avoid massive, costly table scans.
- Schema Flexibility: Implementing multiple schemas on a single table allows for both granular technical deep dives and high-level executive overviews without duplicating storage.

2. Scaling Strategies for "High-Intensity" Events
- The Limits of Auto-scaling: For predictable surges like the Super Bowl, relying on auto-scaling is a risk. Pre-scaling to 3x the expected peak ensures availability when AWS regional compute limits are hit.
- Multi-Region Redundancy: True global scale often exceeds the capacity of a single cloud region. Architecting for multi-region deployment is a requirement, not an option, for Tier-1 broadcast events.
- Segregated Query Pools: Prevent "compute competition" by isolating resources. Executive dashboards, SRE monitoring, and ad-hoc troubleshooting should never fight for the same compute cycles.

3. Solutions Engineering as "Truth Telling"
- The Trust-Based Framework: A Solutions Engineer's (SE) primary role isn't selling; it's building trust through accurate empathy. If the product isn't a fit, say so. Protecting your professional reputation outlasts any single sales cycle.
- Root-Cause Inquiry: When a customer asks for a feature or query optimization, pause. Don't answer the technical question until you've uncovered the business outcome they are trying to achieve.
- Business Mapping: Every technical requirement must map directly to a business requirement. If it doesn't, it's just unnecessary complexity.

4. The "Break-Fast" Learning Philosophy
- Fearless Experimentation: The learning curve is shortened by breaking things in dedicated environments. If you only follow the "happy path" of a tutorial, you haven't actually learned the system.
- Bridging Data Realities: There is often a gap between how data is stored for performance and how it looks in the real world. Success in SE requires the ability to bridge these two perspectives for the customer.

Chapters:
00:10 - Introduction: Meet Catherine Johnson
00:50 - The Origins of Hydrolix: Solving the CDN Log Crisis
06:10 - Deep Dive: Behind the Scenes of the 2025 Super Bowl
10:14 - When the Path Changes: Adjusting Architecture Mid-Season
14:25 - Multi-Region Deployment & AWS Compute Limits
16:51 - Half-Second Query Times: How to Segregate Compute
25:49 - The Non-Obvious Skills of Top-Tier SEs
31:32 - The "Farming" Lesson: Understanding How Businesses Make Money
37:04 - Lightning Round

Visit our website - https://saassessions.com/
Connect with me on LinkedIn - https://www.linkedin.com/in/sunilneurgaonkar/
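Two of the normalization problems named in the takeaways, mixed latency units and recursive URL encoding, are easy to sketch. This is an illustration only, not Hydrolix's implementation; both function names are hypothetical.

```python
from urllib.parse import unquote

def normalize_latency_ms(value, unit):
    """Normalize CDN latency fields that arrive in mixed units
    (some vendors log seconds, others milliseconds)."""
    return value * 1000.0 if unit == "s" else value

def full_unquote(url, max_rounds=5):
    """Undo repeated URL encoding (%2520 -> %20 -> ' ') by decoding
    until the string stops changing, with a bound to avoid loops."""
    for _ in range(max_rounds):
        decoded = unquote(url)
        if decoded == url:
            return url
        url = decoded
    return url
```

For example, `normalize_latency_ms(0.25, "s")` and `normalize_latency_ms(250, "ms")` both land on the same queryable value, and `full_unquote("/a%2520b")` recovers the original path `/a b`.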
From Systems Engineer in aeronautics via many clouds to SRE in observability: that's the path of our guest, Alexandra Franz, Lead Product Engineer in SRE at Dynatrace. Tune in and learn how their team plans ahead for expected high traffic around Black Friday, Cyber Monday, or the Super Bowl. We discuss how regional traffic patterns and differences in available hardware are factored into capacity management and cost control. We also learn why global cloud outages are stressful, but also how those incidents can be the reward for a good SRE. Make sure to connect with Alexandra on LinkedIn: https://www.linkedin.com/in/alexandrafranz/
CDMX activates alerts for extreme cold this afternoon and evening. Mexico offers humanitarian aid to anyone who needs it: SRE. Electoral reform has citizen legitimacy: Segob. More information in our podcast.
This is a special episode from the Neon Fund.

In 2025, the US saw $1.8 trillion worth of M&A deals, around 25× more than India. But India's startup ecosystem is much younger, which makes every acquisition a playbook for founders on process, pricing leverage, and stakeholder management.

Neon backed Zenduty in 2020, when the founders had been bootstrapping profitably for two years and were already growing at a pace many VC-backed startups aspire to.

Today, founders Ankur Rawal and Vishwa Krishnakumar join Siddhartha, Partner at Neon, to discuss one of the least talked-about acquisitions of 2025.

Over a 10-year journey, Zenduty pivoted to SRE in 2020. Vishwa and Ankur also share insights on the future of the DevTools space, which they believe will always be a strong choice for building great products, because engineers are among the hardest end users to please.

This episode is a founders' view of how acquisitions work in Indian SaaS.

00:00 – Trailer
01:00 – Initial years of a decade-long journey
07:12 – How Zenduty chose its investors
11:04 – How much should founders dilute?
12:24 – Building with profitability before & after fundraise
14:45 – Six years of survival before the pivot
17:01 – Why the pivot to the SRE space?
18:39 – How Zenduty differentiated from PagerDuty
19:12 – End users are the toughest to please in engineering
20:39 – Is a market attractive if the biggest player is valued at only $1.5B?
25:22 – Why an acquisition and not a Series A?
27:18 – The process before acquisition
29:23 – How pricing negotiations work
31:51 – Should devtool companies build from India or the US?
34:58 – Three types of connects at physical events
37:06 – What physical presence at events signals
39:06 – Founders' feedback on Neon Fund
41:41 – "Don't build in silence"
43:50 – How to build a core AI-native company today
47:54 – Do first-time founders have an edge in the AI era?
52:08 – Cost to PMF has drastically gone down
54:48 – What hard problems are startups solving today?
55:37 – Why are acquisitions rare in India?
1:00:20 – How US investors are facilitating M&As
1:01:14 – How to make your brand visible to potential acquirers

India's talent has built the world's tech; now it's time to lead it. This mission goes beyond startups. It's about shifting the center of gravity in global tech to include the brilliance rising from India.

What is Neon Fund?
We invest in seed and early-stage founders from India and the diaspora building world-class Enterprise AI companies. We bring capital, conviction, and a community that's done it before. Subscribe for real founder stories, investor perspectives, economist breakdowns, and a behind-the-scenes look at how we're doing it all at Neon.

Check us out on:
Website: https://neon.fund/
Instagram: https://www.instagram.com/theneonshoww/
LinkedIn: https://www.linkedin.com/company/beneon/
Twitter: https://x.com/TheNeonShoww

Connect with Siddhartha on:
LinkedIn: https://www.linkedin.com/in/siddharthaahluwalia/
Twitter: https://x.com/siddharthaa7

This video is for informational purposes only. The views expressed are those of the individuals quoted and do not constitute professional advice.

Send us a text
In this episode of the Prodcast, guest Shannon Neufeld-Brady speaks with hosts Jordan Greenberg and Florian Rathgeber about managing Google's vast fleet of internal devices. Shannon explains how Google's Linux platform uses core SRE principles—specifically testing, canarying, and monitoring—for weekly staged rollouts of its Debian-based distribution. Configuration is efficiently managed using Puppet to ensure the right setup for a diverse user base. The conversation pivots to "the year of Linux everything," underscoring its widespread adoption. Discussing AI, Shannon identifies its greatest utility for SREs in rapidly analyzing signals and generating complex queries to resolve outages. This episode reinforces that practicing SRE fundamentals is paramount, demonstrating that you can be an SRE at heart, regardless of your official title.
Clawdbot drives Mac Mini sales, Swizec Teller on the future of software engineering being SRE, Daniel Stenberg decided to end curl's bug bounty program, zerobrew takes some of the best ideas from uv and applies them to Homebrew, and Phil Eaton on LLMs and your career.
Missing geologist found alive in Hermosillo, Sonora. Mexico and the US strengthen border security cooperation. More information in our podcast.
Blow against Los Blancos de Troya in Michoacán. El Colmex opens a public Korean-language course. Controversy in Argentina over the port of Ushuaia. More information in our podcast.
US President Donald Trump has once again changed his rhetoric after talks with NATO Secretary General Mark Rutte on the sidelines of the World Economic Forum in Davos. In his address he still insisted that the United States needs Greenland for security reasons, but after the talks with Rutte he withdrew his threat of additional tariffs on imports from eight European countries that had sent troops to the Arctic island. Both sides described the meeting as very productive. Other topics: - European leaders meet today in an extraordinary session on strengthening the Union's independence amid tense transatlantic relations. - Slovenia, like most Western European countries, remains sceptical for now about the invitation to a new Peace Committee. - Veterinarians face accusations of charging high prices for bluetongue vaccination; farmers expect a subsidy.
We're back from vacation with recommendations that will either delight you or emotionally break you. In the new episode of Njuz POPkast we analyze the series everyone is talking about, Heated Rivalry, and its portrayal of gay hockey players. We also dove into the incredible world of cults with the documentary Wild Wild Country, and we tried to survive the emotional rollercoaster of the book "A Little Life" (Malo života). But that's not all! We also discovered a new musical genre threatening to conquer Serbia: gothic folk, thanks to the song "Plačipička" by Mira Aleksić. Yes, that's the real title. On top of that, there are recommendations for the new Yorgos Lanthimos film (Bugonia), a series with Jon Hamm (the one from that viral video), a book about life in a Siberian log cabin, and Viktor's epic quest for salt that exposes all the weaknesses of the public utilities system.
Michoacán activated more than 2,000 alerts for missing persons in 2025. SEP adds AI and cybersecurity degree programs in Edomex. European Council convenes summit over Greenland crisis. More information in our podcast.
Edomex deploys brigades to fight the screwworm. Guatemala suspends classes due to a wave of violence. More information in our podcast.
Fire at a mattress factory in Querétaro. Vessel burns off Puerto Progreso. More information in our podcast.
“AI removes the friction from the intent to the implementation,” says Amanda Silver, corporate vice president and head of products, apps and agents at Microsoft. She talks with Bloomberg Intelligence senior technology analyst Anurag Rana about how copilots and agents are collapsing the software lifecycle, from natural-language ideas to code, tests, and operations; shifting developers from typing code to reviewing and governing it; and making “evals” the new testing standard. She cites big-tech technical-debt wins, such as .NET and Java upgrades requiring 70–80% less manual effort, and SRE agents that reduce remediation time. The two also discuss GitHub Copilot, already among the top contributors in key repos and adopted across most large enterprises.
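The idea of "evals" as the new testing standard can be sketched as a tiny harness that scores a model against acceptance checks instead of exact-match unit tests. Purely illustrative; `toy_model` and the cases are made up and stand in for a real copilot or agent under evaluation.

```python
def run_evals(model, cases):
    """Score a model function against (input, checker) cases, where
    each checker returns True if the output is acceptable. Returns
    the pass rate plus per-case results, the eval-suite analogue of
    a test runner's pass/fail report."""
    results = [(inp, check(model(inp))) for inp, check in cases]
    passed = sum(1 for _, ok in results if ok)
    return passed / len(results), results

# Stub "model" standing in for a copilot or agent.
def toy_model(prompt):
    return prompt.upper()

cases = [
    ("ship it", lambda out: out == "SHIP IT"),
    ("rollback", lambda out: "ROLL" in out),
    ("deploy", lambda out: out.endswith("?")),  # deliberately failing case
]
pass_rate, results = run_evals(toy_model, cases)
```

Unlike a unit test, an eval tolerates many acceptable outputs per input; the pass rate over a suite, rather than any single assertion, is what gates a release.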
Join us on The Prodcast as we host Heather Adkins, leader of Google's Office of Cybersecurity Resilience, for a critical look at the future of digital defenses. We explore the intersection of SRE and security, unpacking the "Secure by Design" philosophy and the shared DNA of incident management. Heather candidly discusses the rise of "Agentic AI hackers" and polymorphic malware, revealing how defenders can use AI to stay ahead. From "castle" defense strategies to "nodal biology" theories, this episode is a must-listen for anyone navigating the new era of AI-driven threats.
TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation
Performance testing often fails for one simple reason: teams can't see where the slowdown actually happens. In this episode, we explore Locust load testing and why Python-based performance testing is becoming the go-to choice for modern DevOps, QA, and SRE teams. You'll learn how Locust enables highly realistic user behavior, massive concurrency, and distributed load testing — without the overhead of traditional enterprise tools. We also dive into:
• Why Python works so well for AI-assisted load testing
• How Locust fits naturally into CI/CD and GitHub Actions
• The real difference between load testing and performance testing
• How observability and end-to-end tracing eliminate guesswork
• Common performance testing mistakes even experienced teams make
Whether you're a software tester, automation engineer, or QA leader looking to shift performance testing left, this conversation will help you design smarter tests and catch scalability issues before your users do.
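The weighted-user model the episode describes can be sketched in plain Python. This is a conceptual toy, not Locust itself: a real locustfile would subclass locust.HttpUser, decorate methods with @task(weight), and let Locust spawn users and fire real HTTP requests. The endpoints and weights below are made up for illustration.

```python
import random

# Conceptual sketch of Locust's weighted-task user model (NOT Locust
# itself). A simulated user repeatedly picks one of its tasks, with
# heavier weights picked proportionally more often.

class SimulatedUser:
    def __init__(self, weighted_tasks):
        # expand (task, weight) pairs so heavier tasks appear more often
        self.tasks = [t for t, w in weighted_tasks for _ in range(w)]
        self.log = []

    def browse(self):
        self.log.append("GET /")           # stand-in for a real request

    def checkout(self):
        self.log.append("POST /checkout")  # stand-in for a real request

    def run(self, iterations, seed=0):
        rng = random.Random(seed)          # seeded for reproducibility
        for _ in range(iterations):
            rng.choice(self.tasks)(self)
        return self.log

user = SimulatedUser([(SimulatedUser.browse, 3), (SimulatedUser.checkout, 1)])
log = user.run(iterations=100)
# with a 3:1 weighting, browsing dominates the traffic mix
print(log.count("GET /"), log.count("POST /checkout"))
```

In an actual locustfile, the same shape becomes a class inheriting from HttpUser with calls like self.client.get("/"), while Locust's runner and web UI handle concurrency, ramp-up, and reporting.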
Matthew Gill joins The PowerShell Podcast to talk about what it means to be a Site Reliability Engineer (SRE) and how SRE thinking changes the way you approach automation, reliability, and problem solving. Matthew and host Andrew Pla break down core concepts like SLAs, SLOs, and SLIs, and why reliability through planning matters more than rushing straight to the keyboard. They also dig into why PSFramework is worth the dependency for enterprise-grade logging and configuration, how community mentorship (including Fred Weinmann's impact) can fast-track growth, and why books like The Phoenix Project are game-changing for understanding DevOps culture and constraints. Key Takeaways: • SRE is software engineering applied to operations — focus on measurable reliability, proper planning, and balancing change with stability using concepts like SLAs, SLOs, and SLIs. • PSFramework can eliminate “reinventing the wheel” — especially for logging and configuration handling, giving enterprises proven patterns and integrations without custom-built fragility. • Community is a career multiplier — mentorship, learning in public, and teaching others are some of the fastest ways to build confidence and advance your PowerShell journey. Guest Bio: Matthew Gill is a Site Reliability Engineer and Co-Director of Content for the PowerShell + DevOps Global Summit. He has been a problem solver, systems administrator, and scripter for nearly 20 years.
From the United States Marine Corps through education and radio to the private sector today, most of Matt's experience has been focused on solving problems in a variety of interesting and creative ways.
Resource Links:
PowerShell + DevOps Global Summit – https://powershellsummit.org
The Phoenix Project (Book) – https://itrevolution.com/product/the-phoenix-project/
The Unicorn Project (Book) – https://itrevolution.com/product/the-unicorn-project/
PSFramework – https://github.com/PowershellFrameworkCollective/psframework
Matthew Gill's Blog – https://therealgill.com
Andrew's Links – https://andrewpla.tech/links
PDQ Discord – https://discord.gg/PDQ
PowerShell Wednesdays – https://www.youtube.com/results?search_query=PowerShell+Wednesdays
The PowerShell Podcast on YouTube: https://youtu.be/vkOLsjsPvYo
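The SLA/SLO/SLI vocabulary Matthew and Andrew walk through boils down to simple arithmetic once you frame it as an error budget. A minimal sketch; the 99.9% target and the request counts are illustrative numbers, not from the episode:

```python
# Illustrative error-budget arithmetic for the SLO/SLI concepts above.
# The 99.9% target and the request counts are made-up numbers.

def error_budget(slo, total_requests, failed_requests):
    """Return (allowed_failures, fraction_of_budget_remaining)."""
    allowed = total_requests * (1 - slo)        # failures the SLO tolerates
    remaining = 1 - failed_requests / allowed   # 1.0 = untouched, < 0 = blown
    return allowed, remaining

allowed, remaining = error_budget(slo=0.999,
                                  total_requests=1_000_000,
                                  failed_requests=250)
print(round(allowed))       # 1000 failures allowed this window
print(round(remaining, 2))  # 0.75, i.e. three quarters of the budget left
```

The SLI here is the measured failure ratio, the SLO is the 99.9% target, and an SLA would be the contractual promise layered on top, typically looser than the internal SLO.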
Sheinbaum meets with Mexican ambassadors and consuls. Twelve people detained in an operation in Michoacán. Pope meets with cardinals to define the future of the Church.
Welcome back to the EUVC Podcast, where we dive deep into the craft of building and backing venture-scale companies in Europe. Modern software doesn't fail quietly. It fails on Black Friday. It fails while the CFO is in a board meeting. It fails when your biggest customer is midway through a critical workflow. And when it does, there's one brutal reality: the data is there, but nobody has time to interpret it. Today we're exploring one of the most under-discussed yet mission-critical parts of building modern software: reliability in production. Joining Andreas are:
Andrej P. Škraba, Klemen Selakovič & Jani Pravdič. Once a month we meet and, through dialogue (from the Greek diálogos, "conversation"), share ideas with one another. Topics of DIALOG 64: beautiful hotels and sauna culture; Klemen's lost suitcase; leaving your homeland; Slovenia's public broadcaster; quality of life (sleep, relationships, exercise, and food); goals for 2026; a happy and successful life and the influence of one's environment; which documentaries we watch; our current focus and Jani's workshops.
From creating SWE-bench in a Princeton basement to shipping CodeClash, SWE-bench Multimodal, and SWE-bench Multilingual, John Yang has spent the last year and a half watching his benchmark become the de facto standard for evaluating AI coding agents—trusted by Cognition (Devin), OpenAI, Anthropic, and every major lab racing to solve software engineering at scale. We caught up with John live at NeurIPS 2025 to dig into the state of code evals heading into 2026: why SWE-bench went from ignored (October 2023) to the industry standard after Devin's launch (and how Walden emailed him two weeks before the big reveal), how the benchmark evolved from Django-heavy to nine languages across 40 repos (JavaScript, Rust, Java, C, Ruby), why unit tests as verification are limiting and long-running agent tournaments might be the future (CodeClash: agents maintain codebases, compete in arenas, and iterate over multiple rounds), the proliferation of SWE-bench variants (SWE-bench Pro, SWE-bench Live, SWE-Efficiency, AlgoTune, SciCode) and how benchmark authors are now justifying their splits with curation techniques instead of just "more repos," why Tau-bench's "impossible tasks" controversy is actually a feature not a bug (intentionally including impossible tasks flags cheating), the tension between long autonomy (5-hour runs) vs. interactivity (Cognition's emphasis on fast back-and-forth), how Terminal-bench unlocked creativity by letting PhD students and non-coders design environments beyond GitHub issues and PRs, the academic data problem (companies like Cognition and Cursor have rich user interaction data, academics need user simulators or compelling products like LMArena to get similar signal), and his vision for CodeClash as a testbed for human-AI collaboration—freeze model capability, vary the collaboration setup (solo agent, multi-agent, human+agent), and measure how interaction patterns change as models climb the ladder from code completion to full codebase reasoning. 
We discuss:
* John's path: Princeton → SWE-bench (October 2023) → Stanford PhD with Diyi Yang and the Iris Group, focusing on code evals, human-AI collaboration, and long-running agent benchmarks
* The SWE-bench origin story: released October 2023, mostly ignored until Cognition's Devin launch kicked off the arms race (Walden emailed John two weeks before: "we have a good number")
* SWE-bench Verified: the curated, high-quality split that became the standard for serious evals
* SWE-bench Multimodal and Multilingual: nine languages (JavaScript, Rust, Java, C, Ruby) across 40 repos, moving beyond the Django-heavy original distribution
* The SWE-bench Pro controversy: independent authors used the "SWE-bench" name without John's blessing, but he's okay with it ("congrats to them, it's a great benchmark")
* CodeClash: John's new benchmark for long-horizon development—agents maintain their own codebases, edit and improve them each round, then compete in arenas (programming games like Halite, economic tasks like GDP optimization)
* SWE-Efficiency (Jeffrey Maugh, John's high school classmate): optimize code for speed without changing behavior (parallelization, SIMD operations)
* AlgoTune, SciCode, Terminal-bench, Tau-bench, SecBench, SRE-bench: the Cambrian explosion of code evals, each diving into different domains (security, SRE, science, user simulation)
* The Tau-bench "impossible tasks" debate: some tasks are underspecified or impossible, but John thinks that's actually a feature (flags cheating if you score above 75%)
* Cognition's research focus: codebase understanding (retrieval++), helping humans understand their own codebases, and automatic context engineering for LLMs (research sub-agents)
* The vision: CodeClash as a testbed for human-AI collaboration—vary the setup (solo agent, multi-agent, human+agent), freeze model capability, and measure how interaction changes as models improve

John Yang
SWE-bench: https://www.swebench.com
X: https://x.com/jyangballin

Chapters
00:00:00 Introduction: John Yang on SWE-bench and Code Evaluations
00:00:31 SWE-bench Origins and Devin's Impact on the Coding Agent Arms Race
00:01:09 SWE-bench Ecosystem: Verified, Pro, Multimodal, and Multilingual Variants
00:02:17 Moving Beyond Django: Diversifying Code Evaluation Repositories
00:03:08 CodeClash: Long-Horizon Development Through Programming Tournaments
00:04:41 From Halite to Economic Value: Designing Competitive Coding Arenas
00:06:04 Ofir's Lab: SWE-Efficiency, AlgoTune, and SciCode for Scientific Computing
00:07:52 The Benchmark Landscape: Tau-bench, Terminal-bench, and User Simulation
00:09:20 The Impossible Task Debate: Refusals, Ambiguity, and Benchmark Integrity
00:12:32 The Future of Code Evals: Long Autonomy vs Human-AI Collaboration
00:14:37 Call to Action: User Interaction Data and Codebase Understanding Research
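The CodeClash loop described above (each round, agents edit their own codebases, then face off in an arena) can be reduced to a toy sketch. This is a conceptual illustration, not the actual benchmark harness; the "agents" are stand-in functions that bump a numeric quality score rather than editing real repositories:

```python
import itertools

# Toy sketch of a CodeClash-style tournament loop (not the real
# harness). Each round has two phases: an edit phase where every agent
# improves its own "codebase", and an arena phase where all agents
# compete pairwise, the higher-quality codebase winning.

def run_tournament(agents, rounds):
    codebases = {name: 0 for name in agents}   # quality score per agent
    wins = {name: 0 for name in agents}
    for _ in range(rounds):
        for name, improve in agents.items():   # edit phase
            codebases[name] = improve(codebases[name])
        for a, b in itertools.combinations(agents, 2):  # arena phase
            if codebases[a] != codebases[b]:
                wins[a if codebases[a] > codebases[b] else b] += 1
    return wins

agents = {
    "steady": lambda q: q + 2,                         # consistent progress
    "bursty": lambda q: q + (3 if q % 2 == 0 else 0),  # stalls on odd scores
}
print(run_tournament(agents, rounds=4))  # {'steady': 3, 'bursty': 1}
```

The point of the sketch is the structure, not the scoring: iterating the edit/compete loop rewards sustained improvement over a single strong round, which is exactly the long-horizon behavior the episode argues unit-test-based evals miss.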
"Although individually it is frightening, together it is always a little less frightening." The New Year's episode of Pojačalo is the traditional retrospective of the year behind us and an attempt to look, without embellishment, at what lies ahead. 2025 was hard, exhausting, and full of uncertainty, especially for people trying to build and sustain businesses in an environment that keeps changing and forgives mistakes less and less. In this episode, Ivan talks about the state of society and the economy, about small businesses shutting down, and about the need to redefine priorities and learn how to survive a period meant not for growth but for survival. An episode for the end of the year, for those who want to understand where we are, why it is hard for us, and how to enter the new year more composed, more realistic, and braver. Happy New Year, and thank you for being with Pojačalo. Support us on BuyMeACoffee: https://bit.ly/3uSBmoa Read the transcript of this episode: https://bit.ly/3L9zd51 Visit our website and sign up for our mailing list: http://bit.ly/2LUKSBG Subscribe to our YouTube channel: http://bit.ly/2Rgnu7o Follow Pojačalo on social media: FB: https://www.facebook.com/PojacaloRS/ IG: https://www.instagram.com/pojacalo.rs/ X: https://x.com/PojacaloRS LN: https://www.linkedin.com/company/pojacalo TikTok: https://www.tiktok.com/@pojacalo.rs
In the new episode of the Njuz Podkast we analyze the foundations of the Serbian economy: the fried smelt paid for by Siniša Mali and the collapse of the "project of the century," since Trump's son-in-law apparently got scared of the blockaders. Fortunately, the government immediately redirected those billions to Albania, so they wouldn't go to waste. Meanwhile, thousands of people are losing their jobs as subsidies expire, but who cares about factories when we have super radars and a prosecutor's office that works. Sometimes. You'll also find out why Ćaciland has become a "holy land of pilgrimage" (Bakarec said it, not us), why Desingerica isn't allowed into Banjaluka, and whether it's true that Karleuša is singing in a pizzeria. Finally, we solve the greatest Serbian mystery: why there are three betting shops on every corner and not a single job. Follow our social media, because a prize giveaway we're organizing with our friends from Ivko Woman & Man is coming soon!