Central component of any computer system, which executes input/output, arithmetic, and logical operations
In Podcast Playbook (Part 2), Kelly Kennedy rips the fluff off podcast tech and gives you the real, unfiltered truth about the gear that actually matters. If you're serious about launching a podcast that sounds world-class, this episode is your blueprint. From computers and microphones to audio interfaces and headphones, Kelly breaks down what to buy, why it matters, and how to build a pro-level setup without wasting a dollar. No gimmicks. No jargon. Just clarity and confidence to build your studio the right way, whether you're in a spare bedroom or a full production space.

But this isn't just about gear; it's about showing up like a pro from day one. Kelly explains why audio quality is make-or-break, why your computer is the real unsung hero, and how the right setup positions you for long-term podcast success. This episode will cut months off your learning curve and set you up to hit record with power and purpose. If you want to build a podcast that's built to last, Episode 251 is non-negotiable.

Key Takeaways:
1. Your computer is the most critical piece of podcasting equipment; editing and production demand serious processing power.
2. A gaming laptop or desktop is often the best choice due to its high-end GPU, CPU, RAM, and SSD performance.
3. Sound quality can make or break your show; even great content won't save you if the audio hurts people's ears.
4. USB microphones are great for beginners, but XLR microphones paired with an interface deliver far superior sound and control.
5. A quality audio interface like the Rodecaster Pro 2 allows for zero-latency monitoring, clean gain control, and pro-level audio routing.
6. Headphones are non-negotiable; they prevent feedback, help monitor sound live, and allow you to edit with precision.
7. Bluetooth headphones introduce latency; always go wired when producing or editing your show.
8. You don't need a full studio to sound professional; a home setup with the right gear can match broadcast quality.
9. Start with a setup you can grow into; XLR systems are scalable and used by nearly all professional podcasters.
10. Equipment helps, but consistency, connection, and your message are what truly build a great podcast.

Ready to build something that lasts? The Catalyst Club isn't just another business community; it's your backstage pass to real growth. If you're a founder, executive, podcaster, or builder chasing clarity, connection, and momentum, this is where you belong. Inside, you'll find exclusive coaching, behind-the-scenes strategy, live events, and a rockstar crew of high-performers pushing the edge just like you. No fluff. No noise. Just fuel for what you're building.

Join us: www.kellykennedyofficial.com/thecatalystclub

If you know, you're known.
Your computer's CPU is a complex piece of circuitry trying to maximize how much it can do and how quickly it can do it. I'll outline one of the techniques that makes a single CPU core look like two.
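One such technique (my assumption about what's being teased here) is simultaneous multithreading, where one core keeps two instruction streams in flight so that a stall in one is filled by the other. A toy Python model of the idea, not the article's own material: memory operations stall a thread for a few cycles, and the core issues from whichever thread is ready.

```python
# Toy model of simultaneous multithreading (SMT): one core, two
# instruction streams. Whenever one thread stalls on a memory access,
# the core issues an instruction from the other thread instead of
# sitting idle. Purely illustrative; real SMT shares fetch units,
# execution ports, and caches, not a Python loop.

STALL_CYCLES = 3  # assumed memory-stall latency in this toy model

def run_smt(streams):
    """Interleave instruction streams on one core; return cycles used."""
    n = len(streams)
    pc = [0] * n                  # next instruction index per thread
    stalled_until = [0] * n       # cycle at which each thread wakes up
    cycle = 0
    while any(pc[t] < len(streams[t]) for t in range(n)):
        for t in range(n):        # pick the first ready thread
            if pc[t] < len(streams[t]) and stalled_until[t] <= cycle:
                op = streams[t][pc[t]]
                pc[t] += 1
                if op == "mem":   # memory access: this thread stalls
                    stalled_until[t] = cycle + STALL_CYCLES
                break             # one issue slot per cycle
        cycle += 1
    return cycle

def run_serial(streams):
    """Run the same streams back to back, with no interleaving."""
    return sum(run_smt([s]) for s in streams)

a = ["alu", "mem", "alu", "mem", "alu"]
b = ["alu", "mem", "alu", "mem", "alu"]
print("serial:", run_serial([a, b]), "cycles")
print("smt:   ", run_smt([a, b]), "cycles")
```

With these streams the interleaved run finishes in noticeably fewer cycles than running them back to back, which is the whole pitch of making one core look like two.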
Rustler Core Team Member Sonny Scroggin joins Elixir Wizards Sundi Myint and Charles Suggs. Rustler serves as a bridge to write Native Implemented Functions (NIFs) in Rust that can be called from Elixir code. This combo leverages Rust's performance and memory safety while maintaining Elixir's fault tolerance and concurrency model, creating a powerful solution for CPU-intensive operations within Elixir applications. Sonny provides guidance on when developers should consider using NIFs versus other approaches like ports or external services and highlights the considerations needed when stepping outside Elixir's standard execution model into native code. Looking toward the future, Sonny discusses exciting developments for Rustler, including an improved asynchronous NIF interface, API modernization efforts, and better tooling. While Rust offers tremendous performance benefits for specific use cases, Sonny emphasizes that Elixir's dynamic nature and the BEAM's capabilities for distributed systems remain unmatched for many applications. Rustler simply provides another powerful tool that expands what developers can accomplish within the Elixir ecosystem. 
Key topics discussed in this episode:
- Rust as a "high-level low-level language" with memory safety
- NIFs (Native Implemented Functions) in the BEAM virtual machine
- Rustler's role in simplifying Rust-Elixir integration with macros
- CPU-intensive operations as the primary NIF use case
- BEAM scheduler interaction considerations with native code
- Dirty schedulers for longer-running NIFs in OTP 20+
- Memory safety advantages of Rust for NIFs
- Development workflow using Mix tasks for Rustler
- Common pitfalls when first working with Rust
- Error handling improvements possible with Rustler NIFs
- Differences between ports, NIFs, and external services
- Asynchronous programming approaches in Rust versus Elixir
- Tokio runtime integration for asynchronous operations
- Static NIFs for mobile device compatibility
- Upcoming CLI tooling to simplify Rustler development
- Rustler's API modernization efforts for better ergonomics
- Thread pool sharing across multiple NIFs
- Wasm integration possibilities for the BEAM
- Compile-time safety versus dynamic runtime capabilities
- Performance considerations when implementing NIFs
- Compiler-assisted memory management in Rust
- Automatic encoding/decoding between Rust and Elixir types
- The importance of proper error handling
- Real-world application in high-traffic authentication servers
- Community resources for learning Rustler

Links mentioned:
https://github.com/rusterlium/rustler
https://github.com/rust-lang/rust
https://www.angelfire.lycos.com/
https://www.webdesignmuseum.org/flash-websites
https://www.php.net/
https://xmpp.org/
https://jabberd2.org/
Geocities: https://cybercultural.com/p/geocities-1995/ (fun fact: when you search Geocities on Google, the results page is in Comic Sans font.)
https://bleacherreport.com/
https://hexdocs.pm/jose/readme.html
https://github.com/rust-lang/rust-bindgen
Erlang Ports: https://www.erlang.org/doc/system/cport.html
Erlang ETF (External Term Format): https://www.erlang.org/doc/apps/erts/erlextdist.html
Elixir gRPC: https://github.com/elixir-grpc/grpc
gRPC ("Remote Procedure Call"): https://grpc.io/
dirty_cpu.ex: https://github.com/E-xyza/zigler/blob/main/lib/zig/nif/dirty_cpu.ex
ets: https://www.erlang.org/doc/apps/stdlib/ets.html
Mnesia: https://www.erlang.org/doc/apps/mnesia/mnesia.html
VPPs (Virtual Power Plants): https://www.energy.gov/lpo/virtual-power-plants
https://nixos.org/
WebAssembly (Wasm) with Elixir: https://github.com/RoyalIcing/Orb
Rust Tokio: https://tokio.rs/
Getting Started: https://hexdocs.pm/rustler/0.17.0/Mix.Tasks.Rustler.New.html
https://rustup.rs/

Special Guest: Sonny Scroggin.
This week on the podcast we go over our reviews of the Fractal Design Scape Gaming Headset and HyperX Pulsefire Saga Pro Gaming Mouse. We also talk about the newly announced GeForce RTX 5050 graphics card, the RTX 5090DD, a crazy CPU cooler, and much more!
Cambiare tutto ("Change Everything") with stocks: ETFs, investing, savings, personal finance, business, money, economics
Welcome to The Deep Dive, the podcast that analyzes the most significant phenomena of the moment! In this episode we explore an epochal shift that is redrawing the global financial landscape: Nvidia beyond Apple and Microsoft: is AI really the new beating heart of global finance?

For the first time in history, a company tied hand in glove to artificial intelligence has overtaken the undisputed giants of Wall Street. We're talking about Nvidia, which has smashed every barrier, reaching a dizzying market capitalization of 3,750 billion dollars, surpassing Microsoft (3,650 billion) and clearly outdistancing Apple (3,010 billion). This overtaking is not merely symbolic; it is a clear signal that artificial intelligence is the new driving engine of global finance, and Nvidia is its undisputed emblem.

But how is such stratospheric growth possible? We dig into Nvidia's "secret":
• Valuation and multiples: we'll see why, despite very high absolute figures, Nvidia's stock still trades at multiples considered "reasonable" relative to its historical levels and to the Nasdaq 100. Its PEG (Price/Earnings to Growth) ratio sits around 0.9, indicating that the share price is surprisingly low relative to its expected growth rate, making it one of the most interesting picks among the tech Magnificent Seven.
• "Under-owned" growth: we'll explore the paradox that Nvidia is still "under-owned", held by 74% of long-only funds, a lower percentage than Apple or Microsoft. This suggests untapped upside potential if institutional investors were to increase their exposure.
• The competitive edge in AI: we'll look at how the "AI arms race" will continue well beyond 2025, and how Nvidia's competitive advantage has only widened, reinforcing its dominant position.
• The "next Gold Rush" and the Blackwell GPUs: we'll discuss how Loop Capital raised its price target to 250 dollars, positing a future valuation that could reach 6 trillion dollars, driven by the next "Gold Rush" of generative AI. A key role will be played by the new Blackwell-series GPUs, which represent a technological revolution and are driving a surge in demand for non-CPU compute; that slice of the market is estimated to be worth 2 trillion dollars by 2028.

We'll also tackle the thornier questions: is there a limit to the upside of a company this large? Could volatility return if customers slow their AI spending?

Nvidia is no longer just a semiconductor company; it is the emblem of the age of artificial intelligence, the reference point for every trend tied to machine learning, inference, and cloud AI. Join us for this in-depth analysis and an open debate among experts, to understand whether the AI revolution has truly only just begun and whether Nvidia will be its undisputed queen!

DIRECT LINK TO MY BOOK ON AMAZON: https://www.amazon.it/dp/B0D6LZK23M
Invest like me: https://www.patreon.com/cambiaretutto
Giuseppe Scioscia's website: https://tinyurl.com/ytm3ns74
The group: https://www.facebook.com/groups/cambiaretuttocambiaresubito
My profile: https://www.facebook.com/GiuseppeScioscia
Note: in no way is my audio and/or video content a solicitation to buy or sell financial instruments.
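The PEG arithmetic mentioned above is simple enough to sketch. The 36x and 40% inputs below are invented for illustration; only the ~0.9 result mirrors the figure quoted in the episode notes.

```python
# PEG = (price/earnings) / expected annual earnings growth (in %).
# A PEG near or below 1 is commonly read as "cheap relative to
# growth". The 36x / 40% inputs are hypothetical; they just happen
# to reproduce the ~0.9 value cited in the notes above.

def peg(price_to_earnings, expected_growth_pct):
    return price_to_earnings / expected_growth_pct

print(round(peg(36, 40), 2))
```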
It's no longer just a goal; it's a genuine race for energy efficiency, and AMD is winning it. The American semiconductor giant, known for its processors and graphics cards, has just announced that it has far exceeded its own environmental challenge set in 2021: a 30x improvement in the energy efficiency of its AI and high-performance computing (HPC) chips between 2020 and 2025.

The result? Target met, and even shattered. AMD's latest chip generations are 38 times more efficient than those of 2020, a colossal technological leap that translates into 97% less energy consumed. Concretely, the CO₂ emissions tied to training an AI model drop from 3,000 to 100 tonnes. And where 275 server racks were once needed, a single one now suffices. Head-spinning figures. To get there, AMD pulled out all the stops: architectural innovations, performance-per-watt optimization, and precision engineering across its entire product line, CPUs and GPUs alike. An effective strategy that confirms its determination to reconcile computing performance with environmental responsibility.

But AMD isn't stopping there. The firm has already announced its next target for 2030: a 20x improvement in rack-scale energy efficiency, relative to 2024, for AI training and inference, an ambition three times higher than the sector's average progress over 2018-2025. And that's not all: AMD estimates that while its hardware is already highly efficient, developers' work could amplify the gains by up to a factor of 5. In total, training an AI model could become 100 times more energy-efficient by 2030. To achieve this, AMD plans to rethink its entire production chain: processors, memory, networking, storage, and above all tight co-design between hardware and software.

The goal is clear: make AI a technology that is both more powerful and more respectful of the planet. With this bold strategy, AMD hopes to pull the whole sector along in its wake, betting on open standards and collaboration with its partners to keep pushing greener AI forward. Hosted by Acast. Visit acast.com/privacy for more information.
In this episode we take an in-depth look at the von Neumann architecture, the stored-program model underlying nearly every modern computer. We start from its historical context, the transition from fixed-program machines such as ENIAC to the EDVAC design of 1945, to understand why storing instructions in the same memory as data changed computing forever. We break down its essential blocks (CPU, ALU, control unit, registers, memory, buses, and I/O) and explain how the fetch-decode-execute cycle still beats at the heart of modern microprocessors, along with techniques such as caching, pipelining, and multicore designs that mitigate the von Neumann bottleneck. Aimed at UNED students between 30 and 45, the episode combines technical rigor with everyday analogies that aid understanding and connect the theory to the hardware we use every day.
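The fetch-decode-execute cycle described above can be sketched as a toy stored-program machine. The three-instruction ISA here is invented purely for illustration; the point is that program and data share one memory, just as in the EDVAC design.

```python
# Toy von Neumann machine: instructions and data live in the same
# memory, and the CPU loops fetch -> decode -> execute.
# The tiny instruction set (LOAD/ADD/STORE/HALT) is hypothetical.

def run(memory):
    acc, pc = 0, 0                 # accumulator and program counter
    while True:
        instr = memory[pc]         # FETCH from shared memory
        op, arg = instr            # DECODE opcode and operand
        pc += 1
        if op == "LOAD":           # EXECUTE
            acc = memory[arg]      # data read from the same memory
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program in cells 0-3, data in cells 4-6: one shared address space.
mem = [
    ("LOAD", 4),    # acc = mem[4]
    ("ADD", 5),     # acc += mem[5]
    ("STORE", 6),   # mem[6] = acc
    ("HALT", None),
    2, 3, 0,        # data: two operands and a result cell
]
run(mem)
print(mem[6])
```

Because a program is just bytes in memory, it could even overwrite its own instructions, which is exactly the flexibility (and the bottleneck) the episode discusses.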
John is joined by Spencer Collins, Executive Vice President and Chief Legal Officer of Arm Holdings, the UK-based semiconductor design firm known for powering over 99% of smartphones globally with its energy-efficient CPU designs. They discuss the legal challenges that arise from Arm's unique position in the semiconductor industry. Arm has a unique business model, centered on licensing intellectual property rather than manufacturing processors. This model is evolving as Arm considers moving "up the stack," potentially entering into processor production to compete more directly in the AI hardware space. Since its $31 billion acquisition by SoftBank in 2016, Arm has seen tremendous growth, culminating in an IPO in 2023 at a $54 billion valuation and its market value nearly doubling since. AI is a major strategic focus for Arm, as its CPUs are increasingly central to AI processing in cloud and edge environments. Arm's high-profile AI projects include Nvidia's Grace Hopper superchip and Microsoft's new AI server chips, both of which rely heavily on Arm CPU cores. Arm is positioned to be a key infrastructure player in AI's future based on its broad customer base, the low power consumption of its semiconductors, and their extensive security features. Nvidia's proposed $40 billion acquisition of Arm collapsed due to regulatory pushback in the U.S., Europe, and China. This led SoftBank to pivot to taking 10% of Arm public. Arm is now aggressively strengthening its intellectual property strategy, expanding patent filings, and upgrading legal operations to better protect its innovations in the AI space. Spencer describes his own career path, from law firm M&A work to a leadership role at SoftBank's Vision Fund, where he worked on deals like the $7.7 billion Uber investment, culminating in his current post.
He suggests that general counsel for major tech firms must be intellectually agile, invest in best-in-class advisors, and maintain geopolitical awareness to navigate today's rapidly changing legal and regulatory landscape.Podcast Link: Law-disrupted.fmHost: John B. Quinn Producer: Alexis HydeMusic and Editing by: Alexander Rossi
The ASX 200 claws back early falls to close down just 8 points to 8524 as CBA hits a new record, up 1.5%. The Big Bank Basket continues higher, sucking in money, WBC up 0.7% and NAB up 0.3% with the Basket up to $284.69 (+1.3). Other financials lacklustre, GQG falling -4.4% on low volumes. REITs better, GMG flat and SCG up 1.1% with industrials mostly weaker, CPU down 1.5%, REA off 0.4% and retail squirming after KMD warning. LOV off 2.8% with PMV down 1.7% and travel stocks also suffering, CTD down 2.3%. Tech stocks eased, WTC off 1.9% despite two new NEDs. The All-Tech Index fell 1.1%. Defensives back in fashion, COL, WOW and WES all better.Want to invest with Marcus Today? The Managed Strategy Portfolio is designed for investors seeking exposure to our strategy while we do the hard work for you. If you're looking for personal financial advice, our friends at Clime Investment Management can help. Their team of licensed advisers operates across most states, offering tailored financial planning services. Why not sign up for a free trial? Gain access to expert insights, research, and analysis to become a better investor.
In this episode, Tara sits down with business success coach Helle Brodie, whose calm approach to business transformation offers a much-needed alternative to hustle culture. After decades of entrepreneurship - and burnout - Helle developed a framework to help others scale without losing themselves in the process. Tune in to hear her journey, her unique CPU method (Commitment, Performance, and You), and her powerful message for anyone feeling maxed out by their own success.
The Fork In Your Ear Ep#196 June The Gamers Month (Grok Summary of Transcription 6-14-25)

The transcription captures a podcast episode hosted by Tim and Nate, focusing on technical setup issues, entertainment, video games, and tech updates, with a touch of personal life discussion. Here's a concise summary:

Technical Setup Struggles (00:00:00 - 00:07:56): Nate and Tim troubleshoot audio and Discord issues during the podcast setup, dealing with problems like Audio Hijack previews and Mac output changes. They express frustration with technical glitches but eventually resolve them to start recording.

Entertainment Discussion (00:08:10 - 00:30:54): The hosts discuss recent entertainment topics, starting with the tragic murder of Jonathan Joss, the voice actor for John Redcorn in King of the Hill, killed on June 1, 2025, in a hate crime. They express condolences and discuss its impact on the show's revival. They review Murderbot, a new Apple TV+ sci-fi comedy series starring Alexander Skarsgård, based on a book series. The show follows a rogue android who wants to watch soap operas but must save inept space hippies. Both hosts enjoy its humor and premise. Tim mentions watching a Gordon Ramsay show, Secret Service, where Ramsay secretly inspects restaurants, and revisiting The Drew Carey Show during a long traffic jam. Nate notes he's been too busy with work to watch much TV. They touch on the TV industry's shift to shorter seasons and the SAG-AFTRA video game strike resolution on June 11, 2025, allowing voice actors to return to major studios like Activision and EA.

Video Game Showcases (00:30:54 - 01:47:15): The hosts discuss recent gaming showcases (Summer Game Fest, PlayStation State of Play, Xbox Games Showcase, and the Nintendo Switch 2 launch).
None stood out as exceptional, but highlights include:

Summer Game Fest: A puppet boxing game with realistic physics and Resident Evil 9, set in a post-apocalyptic Raccoon City with toggleable first- and third-person perspectives.

PlayStation State of Play: A James Bond game by IO Interactive, focusing on a young Bond, and a 4v4 Marvel fighting game by Arc System Works. The showcase disappointed by not showing more of Ghost of Yotei.

Xbox Games Showcase: Announcements for Outer Worlds 2, Grounded 2 (out July 29, 2025), Indiana Jones DLC, and Final Fantasy VII Remake Intergrade for Xbox. Call of Duty: Black Ops 7 and Solo Leveling: Overdrive were also noted.

Nintendo Switch 2: Tim secured a Switch 2 at Costco unexpectedly and praises its design, despite an LCD screen. Games like Mario Kart World and No Man's Sky were discussed, with Mario Kart World criticized for lacking polish and unclear mechanics like rail grinding.

Tech Talk (01:47:15 - 02:29:03): Tim shares his Switch 2 experience, praising its sleek design, improved Joy-Cons, HD Rumble 2, and DLSS upscaling, but notes issues with older Switch games in handheld mode due to CPU limitations. The Switch 2 Pro Controller is lauded for its silent, smooth sticks and customization. Nate discusses the Asus ROG Ally X Xbox Edition, a Windows 11 handheld with an Xbox mode, 24GB RAM, and a 1TB SSD. It's compared to the Switch 2 but lacks native Xbox game support, relying on cloud and Play Anywhere titles. They lament confusing microSD Express card standards for the Switch 2 and USB-C cable compatibility issues.

Life Updates (02:29:03 - End): Both hosts mention recent birthdays and improving personal circumstances. Nate shares a humorous anecdote about his father gifting him alcohol despite his sobriety, and Tim discusses a USB hub failure affecting his streaming setup. They wrap up with Father's Day wishes and podcast sign-off.
The episode blends technical banter, entertainment and gaming news, and personal anecdotes, reflecting the hosts' passion for tech and gaming amidst life's challenges.

Join The Fork Family on Discord: https://discord.gg/CXrFKxR8uA

Find all our stuff at the links below. Remember to give us a review on iTunes or wherever you downloaded this podcast from. And don't forget you can connect to us on social media with, at, on or through:
Website: http://www.dynamicworksproductions.com/
Twitter Handle: @getforkedpod
eMail Address: theforkinyourearpodcast@gmail.com
iTunes Podcast Store Link: https://itunes.apple.com/us/podcast/dynamic-works-productions/id703318918?mt=2&i=319887887

If you would like to catch up with each of us personally online:
Tim K.A. Trotter's YouTube ID: Dynamicworksproductions
Tim K.A. Trotter's Twitter ID: Tim_T
Tim K.A. Trotter's Twitch ID: Tim_KA_Trotter

Also remember to buy my sci-fi adventure book "The Citadel: Arrival by Tim K.A. Trotter", available right now on the Amazon Kindle store and iTunes iBookstore for only $2.99. Get a free preview download when you visit those stores; it's a short story, only 160-190 pages depending on your screen size. Again, that's $2.99 on Amazon Kindle and iTunes iBookstore, so buy the book and support this show!
Fundamentals of Operating Systems Course: https://oscourse.win

ktls is brilliant. TLS encryption/decryption often happens in userland, while TCP lives in the kernel. With ktls, userland can hand the keys to the kernel and the kernel does the crypto. When calling write, the kernel encrypts the packet and sends it to the NIC. When calling read, the kernel decrypts the packet and hands it to userspace. This mode still taxes the host's CPU, of course, so there is another mode where the kernel offloads the crypto to the NIC device and the host CPU becomes free: incoming packets are decrypted in the device before they are DMAed to the kernel, and outgoing packets are encrypted before they leave the NIC for the network. ktls still needs the handshake to happen in userspace. It also enables zero copy in some cases (now that the kernel has the TLS context). Deserves a video; so much good stuff.

0:00 Intro
2:00 Userspace SSL Libraries
3:00 ktls
6:00 Kernel Encrypts/Decrypts (TLS_SW)
8:20 NIC offload mode (TLS_HW)
10:15 NIC does it all (TLS_HW_RECORD)
12:00 Write TX Example
13:50 Read RX Example
17:00 Zero copy (sendfile)

https://docs.kernel.org/networking/tls-offload.html
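The zero-copy point is easiest to see with plain sendfile, which the ktls design extends to TLS sockets. A minimal sketch over an unencrypted socket pair (my own illustration, not from the episode): the kernel moves file bytes to the socket with no read()/write() buffer round-trip through userspace.

```python
import os
import socket
import tempfile

# Minimal sendfile sketch over a plain (unencrypted) socket pair.
# The kernel copies file bytes straight to the socket; userspace
# never holds the payload in a buffer. The episode's point is that
# with ktls the same call works on a TLS socket, because encryption
# now happens in the kernel (or on the NIC). Linux-specific sketch.
payload = b"hello from the kernel"

with tempfile.TemporaryFile() as src:
    src.write(payload)
    src.flush()

    a, b = socket.socketpair()        # stand-in for a TCP connection
    os.sendfile(a.fileno(), src.fileno(), 0, len(payload))
    a.close()

    chunks = []
    while (chunk := b.recv(1024)):    # drain until peer close
        chunks.append(chunk)
    b.close()

received = b"".join(chunks)
print(received)
```

Without ktls, a TLS server cannot use this path at all: it must read the file into userspace, encrypt it there, then write the ciphertext back down.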
“The AI-powered enterprise is here—and it demands a network that can keep up.” — Aruna Ravichandran, SVP, Cisco In a conversation recorded live at Cisco Live 2025 in San Diego, Doug Green, Publisher of Technology Reseller News, sat down with Aruna Ravichandran, Senior Vice President of Marketing for Cisco's Enterprise Connectivity and Collaboration division. The discussion centered on Cisco's major announcements aimed at future-proofing enterprise networks to meet the growing demands of AI, automation, and increasing security threats. Ravichandran explained that Cisco is preparing for a massive transformation in the global workforce—one where AI agents will soon outnumber human workers. With billions of devices and agents expected to be actively communicating, Cisco predicts massive increases in both east-west and north-south traffic, pushing legacy networks to their limits. To meet this challenge, Cisco launched a suite of new solutions including: AI Canvas: A collaborative dashboard powered by a Cisco-trained large language model (LLM), offering cross-domain data visibility, telemetry integration, and AI-driven diagnostics. Integrated with a conversational AI assistant, it enables NetOps teams to resolve complex network issues in seconds instead of days. Agentic Ops: A new paradigm using AI to simplify network operations, empowering NetOps professionals to do more with shrinking budgets. Smart Switches and Secure Routers: Featuring dual CPU architecture (one for networking and one for security), these devices are post-quantum ready and built to support Cisco's Hypershield initiative. Wi-Fi 7 Access Points: A first in the industry, offering high-performance wireless connectivity for AI-heavy environments. Live Protect: A breakthrough feature enabling live patching of switches without downtime, reinforcing Cisco's three-layer security model across infrastructure, connectivity, and applications. 
Unified Management: Merging the Meraki and Catalyst dashboards into a single control plane to streamline administration. Ravichandran emphasized that all new technologies are backward compatible, ensuring customers can modernize without disrupting ongoing operations. However, she strongly encouraged enterprises still relying on aging infrastructure—like CAT 9200 and 6K series—to begin refreshing now to leverage these capabilities. Finally, Ravichandran reinforced Cisco's strong commitment to its partner ecosystem, noting that the company has built extensive enablement plans for channel partners to capitalize on this refresh wave. For more, visit: https://www.cisco.com #AIinNetworking #CiscoLive2025 #NetworkRefresh #AgenticOps #SecureNetworking #WiFi7 #TechReseller #CiscoAI #SmartInfrastructure #TechnologyResellerNews
Craig Dunham is the CEO of Voltron Data, a company specializing in GPU-accelerated data infrastructure for large-scale analytics, AI, and machine learning workloads. Before joining Voltron Data, he served as CEO of Lumar, a SaaS technical SEO platform, and held executive roles at Guild Education and Seismic, where he led the integration of Seismic's acquisition of the SAVO Group and drove go-to-market strategies in the financial services sector. Craig began his career in investment banking with Citi and Lehman Brothers before transitioning into technology leadership roles. He holds an MBA from Northwestern University and a BS from Hampton University. In this episode… In a world where efficiency and speed are paramount, how can companies quickly process massive amounts of data without breaking the bank on infrastructure and energy costs? With the rise of AI and increasing data volumes from everyday activities, organizations face a daunting challenge: achieving fast and cost-effective data processing. Is there a solution that can transform how businesses handle data and unlock new possibilities? Craig Dunham, a B2B SaaS leader with expertise in go-to-market strategy and enterprise data systems, tackles these challenges head-on by leveraging GPU-accelerated computing. Unlike traditional CPU-based systems, Voltron Data's technology uses GPUs to greatly enhance data processing speed and efficiency. Craig shares how their solution helps enterprises reduce processing times from hours to minutes, enabling organizations to run complex analytics faster and more cost-effectively. He emphasizes that Voltron Data's approach doesn't require a complete overhaul of existing systems, making it a more accessible option for businesses seeking to enhance their computing capabilities. In this episode of the Inspired Insider Podcast, Dr. Jeremy Weisz interviews Craig Dunham, CEO at Voltron Data, about building high-performance data systems.
Craig delves into the challenges and solutions in today's data-driven business landscape, how Voltron Data's innovative solutions are revolutionizing data analytics, and the advantages of using GPU over CPU for data processing. He also shares valuable lessons on leading high-performing teams and adapting to market demands.
About the show: Dou Hao Shuo (都好说) is a live comedy podcast recorded in front of an audience, launched by stand-up comedians Luo Bin (洛宾), Sanqi (三七), and Dashui (大水), with a riotous open mic every Saturday at 21:00 at the Zhuying Haoshuo Theater in Kecun, Guangzhou. This episode's theme takes aim at baffling modern romance: "The Simp's Self-Cultivation" (舔狗の自我修养). Four hosts and several audience members take turns spilling stories, from pure campus crushes to domineering-CEO-style groveling, from long-distance-relationship PTSD to serving as a gold digger's ATM. Every story will have you slapping your leg: "People simp for THAT?!"

How to join a recording: add the assistant on WeChat: robincomedy (note "都好说" to sign up for the group). The next episode's stage of glory or public embarrassment could be yours!

Episode highlights (with "simp level" ratings; feel free to see yourself in them):
02:30 The World of Warcraft player's doomsday simping. "Where's my chain?!" Northeastern comic Chu Dan (初丹) trekked through 30 McDonald's across Shenyang for a "loli" born in '93, hunting a discontinued Kuromi toy, only to be dumped with: "You never completed the set?" (Simp level: ⭐️⭐️⭐️⭐️⭐️)
09:30 The campus hero's fatal Waterloo. "Even my mom never served me food!" A self-styled campus tough guy got played ("CPU'd", internet slang for being psychologically manipulated) by half-South-American beauty Xiao Mei with a plate of pickled-cabbage fish and a head tilt, ending up as material for her social media essays. (Simp level: ⭐️⭐️⭐️⭐️⭐️⭐️, off the chart!)
23:00 The ATM awakens. "I paid the rent, hauled the moving boxes, covered her grad-exam fees!" Dashui bankrolled his girlfriend from an "unhappy family" full-time, until he discovered she had 40,000 in savings... (Sanqi's verdict: a worse deal than doing business with me! Simp level: ⭐️⭐️⭐️⭐️)
31:00 The domineering-CEO simp's ultimate technique. "Your part-time job pays 150 a day? I'll just wire you the money!" Sanqi fought cold-shoulder treatment with spending power, setting pre-dawn alarms to check on his girlfriend's bar shifts, and finally realized: "I threw money at it because I was starved for love." (Audience, teary-eyed: the most soap-opera segment of the episode!)
37:00 Listener submissions: the simp's thousand-layer playbook. Shiyi's friend: willingly cheated on and still sending money, "I just want her to be happy." QQ's cross-border ICU romance: four WeChat accounts plus snow-mountain posts. A stingy high-school boyfriend: borrowed 200, gifted 100-yuan Li-Ning shoes, and skimmed a cut (Sanqi: that's a business prodigy!). A northeastern girl's yandere complaint: "Everyone I fall for either slits their wrists or is under government surveillance!"

Listening guide: new episodes every Wednesday (the Dragon Boat Festival "simp special" is already out!). Share your own simping (or being-simped-for) stories in the comments; the best will be read aloud by the hosts next episode and win tickets to a Haoshuo comedy open mic. Gut-punch quotes: "I knew she was lying to me, but even if it was fake, I wanted the breakup to come a little slower." (Sanqi) "The moment you start counting how many days he hasn't replied, the relationship is already dead." (listener QQ)

Why listen? A live show funnier and more real than Rock & Roast; a crash course in "simp psychology" and the family-of-origin wounds behind the groveling; a practical anti-simp guide (or... a place to find your own kind?). Listen now on the Xiaoyuzhou app or Apple Podcasts. Next on 都好说: "Breaking Up? What's the Big Deal? Who Hasn't Been Through It!"
Timestamps: 0:00 Nintendo not a linus fan i guess 1:22 RTX 9060 XT 16GB Reviews 2:22 Meta, Yandex de-anonymizing users 3:31 Hoverpen Interstellar! 4:39 QUICK BITS INTRO 4:45 Witcher 4 Unreal Engine footage 5:23 Crocodilus Android malware 5:57 CPU cooler on a GTX 960 6:25 Milky Way and Andromeda might miss NEWS SOURCES: https://lmg.gg/Z0y6E Learn more about your ad choices. Visit megaphone.fm/adchoices
Akanksha Bilani of Intel shares how businesses can successfully adopt generative AI with significant performance gains while saving on costs.

Topics include:
- Akanksha runs the go-to-market team for Amazon at Intel
- Personal and business devices transformed how we communicate
- Forrester predicts 500 billion connected devices by 2026
- 5,000 billion sensors will be smartly connected online
- 40% of machines will communicate machine-to-machine
- We're living in a world of data deluge
- AI and Gen AI help make data effective
- The goal is making businesses more profitable and effective
- Various industries need Gen AI and data transformation
- Intel advises companies as a partner with AWS
- Three factors determine which Gen AI use cases to adopt
- Factor one: availability and ease of use cases. How unique and important are they for the business? Is there enough data for the right analytics?
- Factor two: purchasing power for Gen AI adoption. 70% of companies target Gen AI but lack clarity; leaders must ensure capability and purchasing power exist
- Factor three: the necessary skill sets for implementation. Companies need access to the right partnerships if they lack skills
- Intel and AWS have partnered for 18 years, since inception
- Intel provides the latest silicon customized for Amazon services
- Engineer-to-engineer collaboration on each processor generation
- 92% of EC2 runs on Intel processors
- Intel powers compute capability for EC2-based services
- Intel ensures access to the skill sets that make the cloud come alive
- AWS services include Bedrock, SageMaker, DLAMIs, Kinesis
- Performance is among the top three priorities for success
- Not every use case requires expensive GPU accelerators
- CPUs can power AI inference and training effectively
- Every GPU has a CPU head node component

Participants: Akanksha Bilani, Global Sales Director, Intel

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
Installing Oracle GoldenGate 23ai is more than just running a setup file—it's about preparing your system for efficient, reliable data replication. In this episode, Lois Houston and Nikita welcome back Nick Wagner to break down system requirements, storage considerations, and best practices for installing GoldenGate. You'll learn how to optimize disk space, manage trail files, and configure network settings to ensure a smooth installation. Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Hello and welcome to Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and I'm joined by Lois Houston, Director of Innovation Programs. Lois: Hi there! Last week, we took a close look at the security strategies of Oracle GoldenGate 23ai. In this episode, we'll discuss all aspects of installing GoldenGate. 00:48 Nikita: That's right, Lois. And back with us today is Nick Wagner, Senior Director of Product Management for GoldenGate at Oracle. Hi Nick! I'm going to get straight into it. What are the system requirements for a typical GoldenGate installation? Nick: As far as system requirements, we're going to split that into two sections. We've got an operating system requirements and a storage requirements. 
So with memory and disk, and I know that this isn't the answer you want, but the answer is that it varies. With GoldenGate, the amount of CPU usage that is required depends on the number of extracts and replicats. It also depends on the number of threads that you're going to be using for those replicats. Same thing with RAM and disk usage. That's going to vary on the transaction sizes and the number of long running transactions. 01:35 Lois: And how does the recovery process in GoldenGate impact system resources? Nick: You've got two things that help the extract recovery. You've got the bounded recovery that will store transactions over a certain length of time to disk. It also has a cache manager setting that determines what gets written to disk as part of open transactions. It's not just the simple answer as, oh, it needs this much space. GoldenGate also needs to store trail files for the data that it's moving across. So if there's network latency, or if you expect a certain network outage, or you have certain SLAs for the target database that may not be met, you need to make sure that GoldenGate has enough room to store its trail files as it's writing them. The good news about all this is that you can track it. You can use parameters to set them. And we do have some metrics that we'll provide to you on how to size these environments. So a couple of things on the disk usage. The actual installation of GoldenGate is about 1 to 1.5 gig in size, depending on which version of GoldenGate you're going to be using and what database. The trail files themselves, they default to 500 megabytes apiece. A lot of customers keep them on disk longer than they're necessary, and so there's all sorts of purging options available in GoldenGate. But you can set up purge rules to say, hey, I want to get rid of my trail files as soon as they're not needed anymore. But you can also say, you know what? I want to keep my trail files around for x number of days, even if they're not needed. 
That way they can be rebuilt. I can restore from any previous point in time. 03:15 Nikita: Let's talk a bit more about trail files. How do these files grow and what settings can users adjust to manage their storage efficiently? Nick: The trail files grow at about 30% to 35% of the generated redo log data. So if I'm generating 100 gigabytes of redo an hour, then you can expect the trail files to be anywhere from 30 to 35 gigabytes an hour of generated data. And this is if you're replicating everything. Again, GoldenGate's got so many different options. There's so many different ways to use it. In some cases, if you're going to a distributed applications and analytics environment, like a Databricks or a Snowflake, you might want to write more information to the trail file than what's necessary. Maybe I want additional information, such as when this change happened, who the user was that made that change. I can add specific token data. You can also tell GoldenGate to log additional records or additional columns to the trail file that may not have been changed. So I can always say, hey, GoldenGate, replicate and store the entire before and after image of every single row change to your trail file, even if those columns didn't change. And so there's a lot of different ways to do it there. But generally speaking, the default settings, you're looking at about 30% to 35% of the generated redo log value. System swap can fill up quickly. You do want this as a dedicated disk as well. System swap is often used for just handling of the changes, as GoldenGate flushes data from memory down to disk. These are controlled by a couple of parameters. So because GoldenGate is only writing committed activity to the trail file, the log reader inside the database is actually giving GoldenGate not only committed activity but uncommitted activity, too. And this is so it can stay very high speed and very low latency. 05:17 Lois: So, what are the parameters? 
Nick: There's a cache manager overall feature, and there's a cache directory. That directory controls where that data is actually stored, so you can specify the location of the uncommitted transactions. You can also specify the cache size. And there's not only memory settings here, but there's also disk settings. So you can say, hey, once a cache size exceeds a certain memory usage, then start flushing to disk, which is going to be slower. This is for systems that maybe have less memory but more high-speed disk. You can optimize these parameters as necessary. 05:53 Nikita: And how does GoldenGate adjust these parameters? Nick: For most environments, you're just going to leave them alone. They're automatically configured to look at the system memory available on that system and not use it all. And then as soon as necessary, it'll overflow to disk. There's also intelligent settings built within these parameters and within the cache manager itself that if it starts seeing a lull in activity or your traditional OLTP type responses to actually free up the memory that it has allocated. Or if it starts seeing more activity around data warehousing type things where you're doing large transactions, it'll actually hold on to memory a little bit longer. So it kinda learns as it goes through your environment and starts replicating data. 06:37 Lois: Is there anything else you think we should talk about before we move on to installing GoldenGate? Nick: There's a couple additional things you need to think of with the network as well. So when you're deploying GoldenGate, you definitely want it to use the fastest network. GoldenGate can also use a reverse proxy, especially important with microservices. Reverse proxy, typically we recommend Nginx. And it allows you to access any of the GoldenGate microservices using a single port. GoldenGate also needs either host names or IP addresses to do its communication and to ensure the system is available. 
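The cache settings Nick describes map to GoldenGate's CACHEMGR parameter. A hedged sketch of what tuning it might look like in a parameter file; the sizes and directory path are placeholder values, and as he notes, the defaults are usually best left alone:

```
-- Sketch only (placeholder values): cap the uncommitted-transaction cache
-- at 8 GB of memory, then overflow to a dedicated high-speed disk location.
CACHEMGR CACHESIZE 8GB, CACHEDIRECTORY /ogg/dirtmp 100GB
```

The directory given here should sit on its own fast disk, per the episode's advice about system swap and overflow behavior.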
It does a lot of communication through TCP and IP as well as WSS. And then it also handles firewalls. So you want to make sure that the firewalls are open for ingress and egress for GoldenGate, too. There's a couple of different privileges that GoldenGate needs when you go to install it. You'll want to make sure that GoldenGate has the ability to write to the home where you're installing it. That's kind of obvious, but we need to say it anyways. There's a utility called oggca.sh. That's the GoldenGate Configuration Assistant that allows you to set up your first deployments and manage additional deployments. That needs permissions to write to the directories where you're going to be creating the new deployments. The extract process needs connection and permissions to read the transaction logs or backups. This is not important for Oracle, but for non-Oracle it is. And then we also recommend a dedicated database user for the extract and replicat connections. 08:15 Are you keen to stay ahead in today's fast-paced world? We've got your back! Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications. And to make sure you're always in the know, we offer New Features courses that give you an insider's look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started. 08:41 Nikita: Welcome back! So Nick, how do we get started with the installation? Nick: So when we go to the install, the first thing you're going to do is go ahead and go to Oracle's website and download the software. Because of the way that GoldenGate works, there's only a couple moving parts. You saw the microservices. There's five or six of them. You have your extract, your replicat, your distribution service, trail files. There's not a lot of moving components. So if something does go wrong, usually it affects multiple customers. And so it's very important that when you go to install GoldenGate, you're using the most recent bundle patch. 
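The single-port reverse-proxy idea can be pictured with a minimal Nginx sketch. GoldenGate provides tooling to generate a proper reverse-proxy configuration, so treat this as an illustration only; the hostname, internal port, and paths are hypothetical:

```nginx
# Illustration: expose GoldenGate microservices behind one public port,
# routing by URL path to a service's internal port (port is an example).
server {
    listen 443 ssl;
    server_name ogg.example.com;

    location / {
        proxy_pass       https://127.0.0.1:9001;  # e.g. Service Manager
        proxy_set_header Host $host;
    }
}
```

A real deployment would add TLS certificates and one location block per microservice, as produced by the generated configuration.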
And you can find this within My Oracle Support. It's not always available directly from OTN or from the Oracle e-delivery website. You can still get them there, but we really want people going to My Oracle Support to download the most recent version. There's a couple of environment variables and certificates that you'll set up as well. And then you'll run the Configuration Assistant to create your deployments. 09:44 Lois: Thanks, Nick, for taking us through the installation of GoldenGate. Because these are highly UI-driven topics, we recommend that you take a deep dive into the GoldenGate 23ai Fundamentals course, available on mylearn.oracle.com. Nikita: In our next episode, we'll talk about the Extract process. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 10:08 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
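The disk-sizing rule of thumb from this episode (trail files at roughly 30% to 35% of generated redo) lends itself to a quick back-of-the-envelope calculation. A minimal sketch; the redo rate and retention window below are hypothetical examples, not recommendations:

```python
# Estimate disk needed for GoldenGate trail files from the redo generation
# rate, using the episode's rule of thumb: trail data ~ 30-35% of redo.

def trail_disk_gb(redo_gb_per_hour: float, retention_hours: float,
                  ratio: float = 0.35) -> float:
    """Upper-bound estimate of trail-file storage for a retention window."""
    return redo_gb_per_hour * ratio * retention_hours

# Example: 100 GB/hour of redo, keeping trail files for a 24-hour window.
need = trail_disk_gb(100, 24)
print(f"Provision at least {need:.0f} GB for trail files")  # 840 GB
```

Remember to add the installation footprint (about 1 to 1.5 GB) and headroom for the cache manager's disk overflow on top of this estimate.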
As enterprises roll out production applications using AI model inferencing, they are finding that they are limited by the amount of memory that can be addressed by a GPU. This episode of Utilizing Tech features Steen Graham, founder of Metrum AI, discussing modern RAG and agentic AI applications with Ace Stryker and Stephen Foskett. Achieving the promise of AI requires access to data, and the memory required to deliver this is increasingly a focus of AI infrastructure providers. Technologies like DiskANN allow workloads to be offloaded to solid-state drives rather than system memory, and this surprisingly results in better performance. Another idea is to offload a large AI model to SSDs and deploy larger models on lower-cost GPUs, and this is showing a great deal of promise. Agentic AI workloads in particular can be run in an asynchronous model, enabling them to take advantage of lower-spec hardware including older GPUs and accelerators, reduced RAM capacity and performance, and even all-CPU infrastructure. All of this suggests that AI can be run with fewer financial and power resources than generally assumed.

Guest: Steen Graham is the Founder and CEO of Metrum AI. You can connect with Steen on LinkedIn and learn more about Metrum AI on their website.

Guest Host: Ace Stryker is the Director of Product Marketing at Solidigm. You can connect with Ace on LinkedIn and learn more about Solidigm and their AI efforts on their dedicated AI landing page or watch their AI Field Day presentations from the recent event.

Hosts:
Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series
Jeniece Wnorowski, Head of Influencer Marketing at Solidigm
Scott Shadley, Leadership Narrative Director and Evangelist at Solidigm

Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon. Visit the Tech Field Day website for more information on upcoming events. 
For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.
Charlie and Colin reveal the shocking truth about Bitcoin Pizza Day that mainstream media got wrong. Laszlo didn't just spend 10,000 Bitcoin on pizza - he spent nearly 80,000 Bitcoin throughout 2010! We dive deep into how his GPU mining discovery revolutionized Bitcoin, why Satoshi sent him a concerned email, and how this "penance" may have actually saved Bitcoin's decentralization in its early days.

**Notes:**
• Laszlo spent ~80,000 Bitcoin total on pizza in 2010
• GPU mining was 10x more powerful than CPU mining
• Bitcoin hash rate increased 130,000% by end of 2010
• Laszlo had 1-1.5% of entire Bitcoin supply 2009-2010
• His wallet peaked at 43,854 Bitcoin
• Total wallet flows were 81,432 Bitcoin

Timestamps:
00:00 Start
00:28 Lies, damn lies.. and pizza
02:21 What actually happened
05:46 It's actually WAY MORE than you think
11:15 Arch Network
11:47 Laszlo "saved" Bitcoin
19:12 Pizza or penance?
The real Bitcoin Pizza Day story: Laszlo spent nearly 80,000 Bitcoin on pizza in 2010, not just 10,000. Plus how his GPU mining discovery changed Bitcoin forever and why Satoshi wasn't happy about it. You're listening to Bitcoin Season 2. Subscribe to the newsletter, trusted by over 12,000 Bitcoiners: https://newsletter.blockspacemedia.com
Asus Zenbook A14, reviewed by Yohann Lemore

Key specs:
► OS: Windows 11
► CPU: Qualcomm® Snapdragon® X X1 26 100 @ 2.97GHz, 8 cores
► GPU: Qualcomm® Adreno™ GPU
► RAM: 16 GB LPDDR5X
► Storage: 500GB SSD
► Display: 14'' OLED HDR
► Price: €999.99

Audio credit:
Hackers by Karl Casey, White Bat Audio
► https://www.youtube.com/watch?v=NZ4Of3lID84
Register for Free, Live webcasts & summits: https://poweredbybhis.com

00:00 - PreShow Banter™ — Twiddle Me This
02:04 - WORLDS FIRST CPU Ransomware! - Talkin' Bout [infosec] News 2025-05-19
03:10 - Story # 1: Coinbase - Standing Up to Extortionists
11:26 - Story # 2: World's first CPU-level ransomware
15:09 - Story # 3: New Intel CPU flaws leak sensitive data from privileged memory
19:04 - Story # 4: After latest kidnap attempt, crypto types tell crime bosses: Transfers are traceable
21:39 - Story # 5: Chinese ‘kill switches' found hidden in US solar farms
27:52 - Story # 6: Congress proposes 10-year ban on state AI regulations
31:41 - Story # 7: Hackers Abuse Copilot AI in SharePoint to Steal Passwords and Sensitive Data
36:02 - Story # 8: European Vulnerability Database Launches Amid US CVE Chaos
37:32 - Story # 9: 89 million Steam accounts reportedly leaked. Change your password now.
40:06 - Story # 10: Hackers Now Targeting US Retailers After UK Attacks, Google
41:11 - Story # 11: How the Signal Knockoff App TeleMessage Got Hacked in 20 Minutes
43:08 - Story # 11b: DDoSecrets publishes 410 GB of heap dumps, hacked from TeleMessage's archive server
47:12 - ChickenSec: ‘A Minecraft Movie' Viral TikTok Trend Wreaks Havoc In Theaters
51:20 - Story # 12: Education giant Pearson hit by cyberattack exposing customer data
Discover how Rackspace Spot is democratizing cloud infrastructure with an open-market, transparent option for cloud servers. Kevin Carter, Product Director at Rackspace Technology, discusses Rackspace Spot's hypothesis and the impact of an open marketplace for cloud resources. Discover how this novel approach is transforming the industry.

TIMESTAMPS
[00:00:00] – Introduction & Kevin Carter's Background
[00:02:00] – Journey to Rackspace and Open Source
[00:04:00] – Engineering Culture and Pushing Boundaries
[00:06:00] – Rackspace Spot and Market-Based Compute
[00:08:00] – Cognitive vs. Technical Barriers in Cloud Adoption
[00:10:00] – Tying Spot to OpenStack and Resource Scheduling
[00:12:00] – Product Roadmap and Expansion of Spot
[00:16:00] – Hardware Constraints and Power Consumption
[00:18:00] – Scrappy Startups and Emerging Hardware Solutions
[00:20:00] – Programming Languages for Accelerators (e.g., Mojo)
[00:22:00] – Evolving Role of Software Engineers
[00:24:00] – Importance of Collaboration and Communication
[00:28:00] – Building Personal Networks Through Open Source
[00:30:00] – The Power of Asking and Offering Help
[00:34:00] – A Question No One Asks: Mentors
[00:38:00] – The Power of Educators and Mentorship
[00:40:00] – Rackspace's OpenStack and Spot Ecosystem Strategy
[00:42:00] – Open Source Communities to Join
[00:44:00] – Simplifying Complex Systems
[00:46:00] – Getting Started with Rackspace Spot and GitHub
[00:48:00] – Human Skills in the Age of GenAI - Post Interview Conversation
[00:54:00] – Processing Feedback with Emotional Intelligence
[00:56:00] – Encouraging Inclusive and Clear Collaboration

QUOTES
CHARNA PARKEY
"If you can't engage with this infrastructure in a way that's going to help you, then I guarantee you it's not up to par for the direction that we're going. [...] This democratization — if you don't know how to use it — it's not doing its job."
KEVIN CARTER
"Those scrappy startups are going to be the ones that solve it. 
They're going to figure out new and interesting ways to leverage instructions. [...] You're going to see a push from them into the hardware manufacturers to enhance workloads on FPGAs, leveraging AVX 512 instruction sets that are historically on CPU silicon, not on a GPU.”
Canonical is giving back through thanks.dev, AMD is hiring for Ryzen Linux work, and Rust celebrates 10 years! Then there's the End of Ten project, a Flatpak update, and AMD really hitting it out of the park with laptop processors. Elementary OS shines, KDE does better HDR, and Live Upgrade Orchestrator is poised to be a whole new way to update your kernel. For tips we have vipe for editing piped data, pw-cli for managing remote clients, taskset for managing which CPU core a process runs on, and a quick primer on capabilities for using privileged ports. You can find the show notes at https://bit.ly/433AdOk and see you next week! Host: Jonathan Bennett Co-Hosts: Ken McDonald, Rob Campbell, and Jeff Massie Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
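The taskset tip mentioned above has a standard-library equivalent on Linux, which is handy when you want to pin from inside a program rather than wrapping the command line. A minimal sketch (the core number is an example):

```python
# Linux-only: pin the current process to CPU core 0, the programmatic
# equivalent of running `taskset -c 0 <command>` from the shell.
import os

os.sched_setaffinity(0, {0})            # PID 0 means "this process"
print(sorted(os.sched_getaffinity(0)))  # -> [0]
```

From the shell, `taskset -c 0,1 mycommand` does the same for a child process, and `taskset -pc <pid>` inspects or changes a running one.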
Denuvo DRM strikes again! Locked out of DOOM: TGA for 24-hours, fan curves for LACT, CPU stress testing with OOTC, SteamOS compatibility ratings, and giving up on blinky RGB.
Martin Mao is the co-founder and CEO of Chronosphere, an observability platform built for the modern containerized world. Prior to Chronosphere, Martin led the observability team at Uber, tackling the unique challenges of large-scale distributed systems. With a background as a technical lead at AWS, Martin brings unique experience in building scalable and reliable infrastructure. In this episode, he shares the story behind Chronosphere, its approach to cost-efficient observability, and the future of monitoring in the age of AI.

What you'll learn:
The specific observability challenges that arise when transitioning to containerized environments and microservices architectures, including increased data volume and new problem sources.
How Chronosphere addresses the issue of wasteful data storage by providing features that identify and optimize useful data, ensuring customers only pay for valuable insights.
Chronosphere's strategy for competing with observability solutions offered by major cloud providers like AWS, Azure, and Google Cloud, focusing on a specialized end-to-end product.
The innovative ways in which Chronosphere's products, including their observability platform and telemetry pipeline, improve the process of detecting and resolving problems.
How Chronosphere is leveraging AI and knowledge graphs to normalize unstructured data, enhance its analytics engine, and provide more effective insights to customers.
Why targeting early adopters and tech-forward companies is beneficial for product innovation, providing valuable feedback for further improvements and new features. 
How observability requirements are changing with the rise of AI and LLM-based applications, and the unique data collection and evaluation criteria needed for GPUs.

Takeaways:
Chronosphere originated from the observability challenges faced at Uber, where existing solutions couldn't handle the scale and complexity of a containerized environment.
Cost efficiency is a major differentiator for Chronosphere, offering significantly better cost-benefit ratios compared to other solutions, making it attractive for companies operating at scale.
The company's telemetry pipeline product can be used with existing observability solutions like Splunk and Elastic to reduce costs without requiring a full platform migration.
Chronosphere's architecture is purposely single-tenanted to minimize coupled infrastructures, ensuring reliability and continuous monitoring even when core components go down.
AI-driven insights for observability may not benefit from LLMs that are trained on private business data, which can be diverse and may cause models to overfit to a specific case.
Many tech-forward companies are using the platform to monitor model training which involves GPU clusters and a new evaluation criterion that is unlike general CPU workload.
The company found a huge potential by scrubbing the diverse data and building knowledge graphs to be used as a source of useful information when problems are recognized.

Subscribe to Startup Project for more engaging conversations with leading entrepreneurs!
→ Email updates: https://startupproject.substack.com/

#StartupProject #Chronosphere #Observability #Containers #Microservices #Uber #AWS #Monitoring #CloudNative #CostOptimization #AI #ArtificialIntelligence #LLM #MLOps #Entrepreneurship #Podcast #YouTube #Tech #Innovation
In this episode of Insights into Technology, we explore the latest cybersecurity threats and solutions shaping the tech world. Discover the implications of a groundbreaking proof of concept that shows how ransomware could potentially hide in your CPU's microcode, evading traditional security measures. We also delve into the UK's initiative for secure software development and discuss the new vulnerabilities uncovered in Intel processors that threaten data privacy. Join us as we address pressing concerns about IoT devices being turned into malicious proxies, and find out how Microsoft is introducing new features to enhance security during Teams meetings. Stay informed and prepare to safeguard your tech environment against emerging cyber threats.
In this episode of The Jerich Show, join your favorite cybersecurity duo, Erich Kron and Javvad Malik, as they dive into some truly wild cybercrime stories making headlines around the globe. Hackers who've been terrorizing UK retailers have hopped the pond to target US companies, while Japan's bold plan to double its cybersecurity workforce might mean saying sayonara to tough certifications. Meanwhile, the EU arms defenders with a shiny new vulnerability database, and the discovery of rogue communication devices lurking in Chinese-made solar inverters sparks fresh paranoia. Plus, could your CPU itself soon be held hostage by ransomware? Tune in for laughs, insights, and a healthy dose of cyber skepticism! Stories from the show: Hackers behind UK retail attacks now targeting US companies https://www.bleepingcomputer.com/news/security/google-scattered-spider-switches-targets-to-us-retail-chains/ Japan aims to double cybersecurity specialists by 2030, relax certification requirements https://asianews.network/japan-aims-to-double-cybersecurity-specialists-by-2030-relax-certification-requirements/ EU launches vulnerability database to tackle cybersecurity threats https://therecord.media/eu-launches-vulnerability-database CPU microcode hack could infect processors with ransomware directly https://www.techradar.com/pro/security/cpu-microcode-hack-could-infect-processors-with-ransomware-directly ‘Rogue' communication devices found on Chinese-made solar power inverters https://www.utilitydive.com/news/rogue-communication-devices-found-on-chinese-made-solar-power-inverters/748242/
In this episode of Your Drone Questions. Answered, Chris from Drone Launch Academy breaks down the key PC specifications you should consider for handling data-intensive photogrammetry and LiDAR processing tasks.

Whether you're using software like Pix4D, DJI Terra, Trimble Business Center, or LP360, this video highlights the general hardware requirements most of these programs share, including:
RAM recommendations (32GB minimum, 64GB+ ideal for heavy workloads)
CPU considerations (newer Intel i7/i9 or equivalent)
GPU guidance (NVIDIA preferred for many platforms, but not always required)
Desktop vs. laptop performance
Windows vs. macOS compatibility
When cloud processing might be a better choice

Chris also explains why chasing specs can sometimes lead to overspending, and what actually matters when you're setting up or upgrading your workstation.

Topics covered:
Minimum vs. ideal system requirements
Common pitfalls in hardware selection
Cloud processing options like DroneDeploy and Pix4D Cloud
How to avoid bottlenecks when building your setup
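The minimum-vs-ideal distinction from this episode can be captured in a small checklist function. A sketch only: the thresholds mirror the episode's rough guidance (32 GB RAM minimum, 64 GB+ ideal), while the 8-core cutoff and the verdict strings are illustrative assumptions, not official software requirements:

```python
# Rate a workstation against rough photogrammetry/LiDAR processing tiers:
# 32 GB RAM minimum, 64 GB+ ideal, with CPU/GPU checks as illustrative extras.

def rate_workstation(ram_gb: int, cpu_cores: int, has_nvidia_gpu: bool) -> str:
    """Return a coarse verdict; thresholds are examples, not vendor specs."""
    if ram_gb < 32:
        return "below minimum - consider cloud processing instead"
    if ram_gb >= 64 and cpu_cores >= 8 and has_nvidia_gpu:
        return "ideal for heavy workloads"
    return "meets minimum - fine for lighter projects"

print(rate_workstation(ram_gb=64, cpu_cores=12, has_nvidia_gpu=True))
# -> ideal for heavy workloads
```

As the episode stresses, a machine landing in the middle tier is often the sensible buy; chasing the top tier is where overspending starts.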
On this week's show Patrick Gray and Adam Boileau discuss the week's cybersecurity news:

Struggling to find that pesky passwords.xlsx in Sharepoint? Copilot has your back!
The ransomware ecosystem is finding life a bit tough lately
SAP Netweaver bug being used by Chinese APT crew
Academics just keep finding CPU side-channel attacks
And of course… bugs! Asus, Ivanti, Fortinet… and a Nissan LEAF?

This week's episode is sponsored by Resourcely, who will soothe your Terraform pains. Founder and CEO Travis McPeak joins to talk about how to get from a very red dashboard full of cloud problems to a workable future. This episode is also available on Youtube.

Show notes
Exploiting Copilot AI for SharePoint | Pen Test Partners
MrBruh's Epic Blog
Ransomware group Lockbit appears to have been hacked, analysts say | Reuters
"CONTI LEAK: Video they tried to bury! 6+ Conti members on a private jet. TARGET's birthday — $10M bounty on his head. Filmed by TARGET himself. Original erased — we kept a copy."
Mysterious hackers who targeted Marks and Spencer's computer systems hint at political allegiance as they warn other tech criminals not to attack former Soviet states
The organizational structure of ransomware groups is evolving rapidly. 
SAP NetWeaver exploitation enters second wave of threat activity
China-Nexus Nation State Actors Exploit SAP NetWeaver (CVE-2025-31324) to Target Critical Infrastructures
DOGE software engineer's computer infected by info-stealing malware
Hackers hijack Japanese financial accounts to conduct nearly $2 billion in trades
FBI and Dutch police seize and shut down botnet of hacked routers
Poland arrests four in global DDoS-for-hire takedown
School districts hit with extortion attempts after PowerSchool breach
EU launches vulnerability database to tackle cybersecurity threats
Training Solo - vusec
Branch Privilege Injection: Exploiting Branch Predictor Race Conditions – Computer Security Group
Remote Exploitation of Nissan Leaf: Controlling Critical Body Elements from the Internet
PSIRT | FortiGuard Labs
EPMM Security Update | Ivanti
In this episode of Cybersecurity Today, host Jim Love covers recent cybersecurity incidents including a data breach at Marks and Spencer, the FBI's alert on outdated routers being exploited, and critical Fortinet vulnerabilities actively used in attacks. Additionally, the episode discusses a researcher's proof of concept showing how ransomware can be embedded directly into a CPU, bypassing traditional security measures. Listeners are urged to stay vigilant and implement necessary security patches and updates.

00:00 Breaking News: Marks and Spencer Data Breach
01:37 FBI Alert: Outdated Routers at Risk
03:43 Fortinet Zero-Day Vulnerability
05:46 Ransomware Embedded in CPUs: A New Threat
08:13 Conclusion and Contact Information
Software Engineering Radio - The Podcast for Professional Software Developers
Steve Summers speaks with SE Radio host Sam Taggart about securing test and measurement equipment. They start by differentiating between IT and OT (Operational Technology) and then discuss the threat model and how security has evolved in the OT space, including a look at some of the key drivers. They then examine security challenges associated with a specific device called a CompactRIO, which combines a Linux real-time CPU with a field programmable gate array (FPGA) and some analog hardware for capturing signals and interacting with real-world devices. Brought to you by IEEE Computer Society and IEEE Software magazine.
The PC has been one of the most important personal devices of our lifetime. From the use of standalone PCs for word processing in the 1980s to the emergence of the World Wide Web and powerful processors in the 1990s to the rise of laptops in the 2000s and the era of 2-in-1s in the 2010s, the PC has continually evolved. Now AI is transforming the PC as we know it. AI-powered devices are helping to automate repetitive tasks, summarise documents and meetings, adapt to user behaviours and enable everyone to become content creators. New AI devices have updated components inside - the NPU, CPU, and GPU - which means they can handle workloads far more efficiently, allowing the user to have a smooth, engaging, collaborative experience and be productive at the same time. Ronan recently caught up with Tara Gale, Client Solutions Country Lead at Dell Technologies Ireland, to find out more about how AI will redefine the PC and the personal devices that we all now rely upon. Tara talks about her background, PC changes, the AI NPU and more. More about Tara Gale: Tara is the company's lead voice on devices in Ireland, is a clear communicator and a really good and enthusiastic conversationalist. For over ten years, she has led the devices side of the business at Dell Technologies Ireland and is the lead expert on AI PCs. Moreover, Dell has been at the forefront of PCs and other personal devices over the past four decades. In January, Dell unveiled a new portfolio of AI PCs. See more podcasts here. More about Irish Tech News Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No.1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming Podcast email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. 
Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.
Ever felt like someone pulled your battery out mid-sentence? One minute you're go-go-go, and the next—you're a human puddle on the couch, brain offline, soul buffering. That, my friend, is an ADHD energy shutdown—a deeply misunderstood, very real experience where our nervous system essentially throws up the "Closed for Business" sign.
If you are bored with the contemporary topics of AI and need a breather, I invite you to join me in exploring a mundane, fundamental and earthy topic: the CPU. A reading of my Substack article https://hnasr.substack.com/p/the-beauty-of-the-cpu
This week's EYE ON NPI is looking at itsy-bitsy-teensy-weensy little rechargeable batteries that can keep your micro-power electronics running for many years without needing any maintenance from your users: it's Panasonic's VL/ML Series Lithium Rechargeable Coin Batteries (https://www.digikey.com/en/product-highlight/p/panasonic/lithium-rechargeable-coin-batteries). These are small, solderable batteries much like the coin cell batteries you're used to replacing in watches, toys, and other gadgets... but this time they can be recharged in-circuit to make maintenance effortless. We've covered lots of batteries and battery holders on EYE ON NPI - from enormous lead acid batteries to tiny coin cells (https://www.digikey.com/en/products/filter/batteries-non-rechargeable-primary/). These are an essential part of the engineer's repertoire, as so many products need to work when not plugged into the wall. We use coin cells a lot in our design work: there's nothing as compact, and they have great energy density. But they're typically 'primary cells' - not rechargeable. That might be fine if you're using them for something like a remote control (https://support.apple.com/en-us/101261) or a small toy. But they do eventually need to be replaced, which can be a user frustration. For that reason, many products that used to have primary cells like AAs or coin batteries have slowly transitioned to embedding lithium polymer pouches. You can get rechargeable lipoly batteries in 100 mAh or less! (https://www.digikey.com/short/7njnd057) However, there may be some cases where you still want something really, really tiny - say 9mm diameter and under 1mm thick, a size only achievable with a coin cell - but without dealing with removing and replacing a battery every few weeks or months. Especially if we're talking about something that is going to be plugged in once in a while, or where the coin cell is a fallback. 
This comes up often with devices that have real time clocks (RTCs) - such as clocks, computers, laptops, tablets, watches, GPS units (https://www.adafruit.com/product/5440), etc. They might have a main battery or power system that can run a microcontroller/CPU and display plus peripherals, but you still want to keep time and maybe an alarm setting when the main power cuts off. Historically, folks have just used coin batteries - ideally replaceable ones - but sometimes not, as in the case of the DS1287 (https://theretroweb.com/chip/documentation/ds1287-647b3602989d3299594321.pdf), which had the coin battery sealed inside! If you're designing a product today that needs an RTC battery, we'd say take a good look at Panasonic's VL/ML Series Lithium Rechargeable Coin Batteries (https://www.digikey.com/en/product-highlight/p/panasonic/lithium-rechargeable-coin-batteries). No holder required: they come with tabs to solder directly onto a PCB in vertical or horizontal orientations. And they come in a variety of sizes from 20mm / 45mAh to 12mm / 7mAh and even smaller. Note that, as expected, you're going to get less capacity than a primary cell, so these are good when you expect the battery to get recharged every few days or weeks. Don't forget: you will absolutely need a proper lithium charger to recharge these batteries. We've got plenty of charger breakouts you can use; we particularly like the bq25185 (https://www.digikey.com/en/products/detail/adafruit-industries-llc/6091/25805553), which you can easily configure for the 1C charge current rate to match your Panasonic Lithium Rechargeable Coin Batteries (https://www.digikey.com/en/product-highlight/p/panasonic/lithium-rechargeable-coin-batteries). They can all handle 1000+ cycles; we like the bq in particular because it has power-path, which will help avoid unnecessary discharging/cycling. Ready for a tiny burst of power to keep your clocks a-tickin'? 
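The capacity-versus-draw tradeoff is easy to sanity-check with a little arithmetic. A minimal Python sketch, using the 45 mAh capacity figure from above; the 2 µA RTC current draw is a hypothetical value chosen purely for illustration:

```python
# Rough backup-runtime and 1C charge-rate estimates for a small coin cell.
# The 2 uA RTC draw below is an assumed figure, not from any datasheet.

def runtime_hours(capacity_mah: float, draw_ua: float) -> float:
    """Hours of backup time: capacity (mAh converted to uAh) / draw (uA)."""
    return (capacity_mah * 1000.0) / draw_ua

def one_c_charge_ma(capacity_mah: float) -> float:
    """A 1C charge rate is numerically equal to the cell capacity, in mA."""
    return capacity_mah

# A 45 mAh cell backing an RTC drawing ~2 uA lasts years between charges:
hours = runtime_hours(45, 2)
print(f"{hours:.0f} hours, about {hours / (24 * 365):.1f} years")
print(f"1C charge current: {one_c_charge_ma(45):.0f} mA")
```

Run with different draw figures to see why even the tiny 7 mAh cells are plenty for a low-power RTC that tops up whenever main power returns.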
You can pick up a wide selection of Panasonic's VL/ML Series Lithium Rechargeable Coin Batteries (https://www.digikey.com/en/product-highlight/p/panasonic/lithium-rechargeable-coin-batteries) at DigiKey right now! They're in stock in a range of sizes and configurations for immediate shipment - but don't forget that, like all lithium batteries, they may need to ship ground, so order now and you'll get those bite-sized batteries shipped within the hour for immediate integration. See it on DigiKey https://www.digikey.com/short/wb83dfff
Ubuntu and Fedora are out! And Git turns 20! Cosmic is showing up everywhere, Framework has an impressive AMD-powered 13-inch laptop, and Thunderbird is rolling out the Thundermail service! For tips we have vidir for renaming multiple files at once, pw-mon for monitoring pipewire, g as a go replacement for ls, and todist-rs for a TUI take on todoist. It's a great show, and the notes are at https://bit.ly/4lzTAWt thanks for coming! Host: Jonathan Bennett Co-Hosts: Jeff Massie, Ken McDonald, and Rob Campbell Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
It's been 16 frigid months since our last all-intro episode, but now we're pulling the ice tray out of the freezer and offering you another cube of cold opens, covering everything from surge protector safety to thermal paste application methods, stacking storage bins without crushing them, the crazed monitor murderer who's struck again, artifacts of our very early careers, an intensive Weird Al lyrical breakdown, a little paean for Zachtronics, and how not to forget about obligations that might get you arrested. Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, a monthly bonus episode, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod
In this episode of the Arm Viewpoints podcast, host Brian Fuller speaks with Julien Simon, Chief Evangelist at Arcee AI, about the evolution of small language models and the significance of CPU-based AI inference. They discuss Arcee AI's journey, the advantages of small models over large ones, the importance of inference, and innovative techniques like quantization that enable efficient performance. Julien emphasizes the need for businesses to focus on cost performance and the future of AI as a collection of microservices that can be tailored to specific needs.
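Quantization, mentioned above as a key enabler of CPU inference, shrinks model weights from 32-bit floats into small integers. A toy sketch of symmetric int8 quantization in Python (an illustration of the general technique, not Arcee AI's actual pipeline):

```python
# Symmetric int8 quantization: map floats to [-127, 127] with one scale
# factor, so weights take 1 byte instead of 4 and integer math runs fast
# on ordinary CPUs. Toy example only.

def quantize_int8(values: list[float]) -> tuple[list[int], float]:
    """Return int8-range codes plus the scale needed to recover the floats."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(codes: list[int], scale: float) -> list[float]:
    """Approximate reconstruction of the original floats."""
    return [c * scale for c in codes]

weights = [0.12, -0.5, 0.33, 1.0]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
# Each restored weight is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

Real inference stacks quantize per-tensor or per-channel and fuse the dequantize step into the matrix multiplies, but the accuracy-for-memory trade is the same idea.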
Brett is back, so everyone can stop calling in and writing letters. He will try to stay out of that Turkish prison, just for you. Also in this episode, some actual technical news items like a new satellite internet service, USB-C splitting, and we even sneak in a review of the Sapphire Pulse RX 9070 XT. Once again we are very pleased to welcome back Incogni as a sponsor! Your information is being exposed by data brokers to possible identity theft, scams, online harassment, stalking or even unwanted marketing. Take your personal data back with Incogni! Use code PCPERSPECTIVE at the link below and get 60% off an annual plan: https://incogni.com/pcperspective Timestamps: 00:00 Intro / 01:28 Food with Josh / 03:03 RTX 5060 series MSRPs reportedly lower than last gen / 05:38 RX 9070 can be BIOS modded with XT firmware / 09:12 BIOS updates for AMD 9800X3D issues on ASRock boards / 11:11 AMD dominates CPU sales / 15:22 AMD announces Ryzen 8000HX mobile CPUs / 17:10 Amazon's Starlink rival now launching / 19:36 New month, new AIDA64 / 21:30 Beware the USB-C splitter / 22:43 China's new digital interface has all the Gbps / 25:00 Sponsor break / 26:25 (in)Security Corner / 39:49 Gaming Quick Hits / 46:59 Sapphire PULSE Radeon RX 9070 XT review / 52:19 Picks of the Week / 1:03:54 Outro (or is it??) / 1:05:49 Reminder to watch the live version ★ Support this podcast on Patreon ★
Dan is joined by Marc Evans, director of business development and technology at Andes. Marc has over twenty years of experience in the use of CPU, DSP, and Specialized IP in SoCs from his prior positions at Lattice Semiconductor, Ceva, and Tensilica. During his early career, Marc was a processor architect, making significant contributions…
Rumors point to an interface redesign inspired by visionOS across all of Apple's operating systems. Will skeuomorphism make a comeback? We analyze what it could mean for users and developers, and how it might affect usability. Why does the Mac Studio have more power than some Mac Pro models? We talk about unified memory, AI, and the future of the Mac Pro after the release of the M3 Ultra. As for this year's WWDC, we wonder whether Tim Cook will bring back the in-person format. Ángel, who has attended almost every edition since 2009, reveals what the experience is really like for the press. WWDC25 - Apple Developer | La Conferencia Mundial de Desarrolladores de Apple regresa la semana del 9 de junio - Apple (ES) | Apple (AAPL) Readies Dramatic Design Overhauls for iOS 19, iPadOS 19, macOS 16 - Bloomberg | Apple aplaza la llegada de Siri potenciada con IA: el retraso afecta también al nuevo HomePod previsto para 2025 | Apple M5 Pro, Max, and Ultra could ditch much vaunted unified memory architecture for split CPU and GPU designs fabbed on TSMC N3E - NotebookCheck.net News | When Will Apple Intelligence Be Released? When Is Apple Releasing M4 Macs, iPad? - Bloomberg
Get More LVWITHLOVE Content Guests: Kostas Hatalis Ph.D – Co-Founder, Alexander Labs; Developer of GoCharlie Paul Gosselin – Co-Founder, Alexander Labs In this episode of the Lehigh Valley with Love Podcast, host George Wacker sits down with Paul Gosselin and Kostas Hatalis of Alexander Labs, an AI incubator based in Allentown, Pennsylvania, that's putting the Lehigh Valley back on the innovation map. You'll hear how this growing team is not only building on-premise AI systems and custom large language models, but doing it all from the region that produced the world's first commercial transistors — long before the rise of Silicon Valley. From community collaboration and tech talent to data privacy and enterprise-level solutions, Alexander Labs is working to make AI accessible, secure, and proudly local. To learn more or get in touch, visit: https://alexanderlabs.ai Watch Episode https://youtu.be/onsjG6-KPBQ www.lvwithlove.com Thank you to our Partners! WDIY Lehigh Valley Health Network Wind Creek Event Center Michael Bernadyn of RE/MAX Real Estate Molly’s Irish Grille & Sports Pub Banko Beverage Company Episode Recap Alexander Labs and the Future of AI in the Lehigh Valley “We built the first transistor here.” Kostas Hatalis opens with a bold statement: the Lehigh Valley isn't just a place with warehouses — it was the original Silicon Valley. The region manufactured the world's first commercial transistors in the 1940s, and yet few in tech today give it credit. That legacy is exactly what inspired Alexander Labs, an AI incubator based in Allentown that's trying to bring innovation back home. From Empty Space to AI Lab Paul Gosselin, co-founder of Alexander Labs, walks us through how the project started — with an empty room, a few whiteboards, and a wild idea. Paul had been running software companies when he met Kostas, a Lehigh Ph.D. who had already built his own large language model. They talked, they brainstormed, and soon they realized: “We need to create a lab. 
One that brings the community in and builds something real.” That idea became Alexander Labs — an incubator not just for AI startups, but for a smarter, more connected Lehigh Valley tech ecosystem. Meet GoCharlie: AI Built in Allentown One of Alexander Labs’ most exciting projects is GoCharlie, a next-generation AI assistant that can write, analyze, and support business operations. But what makes it special isn't just what it does — it's where and how it was built. GoCharlie's models are developed right in Allentown using an approach Kostas calls “small language models.” Instead of billion-dollar infrastructure, these models run on a single GPU or CPU, making them affordable, fast, and customizable for real businesses. In a world where companies send sensitive data into massive black-box systems like ChatGPT, GoCharlie offers something better: AI you can understand, control, and host locally. On-Prem AI and the Power of Local Alexander Labs is also building its own on-prem data center — allowing businesses to plug in directly and run GoCharlie inside their own walls. Paul calls it GoPrem — and it's already helping companies avoid relying on Silicon Valley giants for AI tools. “We're really focused on enterprise,” he says, “but it's more than that — we're helping companies get their data in order, whether structured or unstructured, so they can actually use AI meaningfully.” Cost, Control, and Trust Throughout the conversation, privacy and autonomy come up again and again. Alexander Labs isn't just building AI — they're building trust. Kostas explains how big tech companies are scraping public data to train their models, while businesses are becoming increasingly wary of handing over proprietary information. That's why localized, on-premise AI is such a key differentiator. “You can't just trust Microsoft or Google with everything,” Paul says. 
“We can offer the same kind of power — but right here in the Valley, with local partnerships and control.” Making the Lehigh Valley a New Kind of Tech Hub Despite the region's rich tech history, the Lehigh Valley hasn't been part of the AI conversation — until now. Alexander Labs is trying to change that. With support from Ben Franklin Technology Partners and other local initiatives, they're helping seed the next wave of innovation. “We're trying to build culture here,” Paul says. “It's not easy. But if we can keep talented students and young founders in the Valley, we can build something real.” What's Next? Looking ahead, the conversation shifts to the future of AI. Kostas explains that the next wave isn't just smarter chatbots — it's AI agents: autonomous systems that can do real work across tools like Slack, Zoom, and WordPress. “If it's a task that can be done remotely, AI is coming for it,” he says. But for Alexander Labs, the goal isn't replacing people — it's empowering them. By creating scalable, local-first AI, they're making sure small businesses can keep up — and even outpace the giants. Final Takeaway This episode isn't just about AI — it's about ownership. It's about building technology that serves local communities, respects data privacy, and creates opportunity where it's needed most. Alexander Labs is betting that the next big thing in tech doesn't have to come from California — it can start in Allentown. And if they're right, the Lehigh Valley may just become the first Silicon Valley all over again.
This week we're talking Rust Coreutils in Ubuntu, Intel's new CEO, and the Linux performance of AMD's newest X3D powerhouse CPU. Then CrossOver releases version 25, and ReactOS and Free95 battle for Windows reimplementation supremacy. There's the Zed Editor, Audacity updates and news from KDE! For tips we have the Pipewire pw-profiler, ifdata for network interface quick reference, and exch for atomically swapping two files. You can find the show notes at https://bit.ly/4hgT1xo and enjoy! Host: Jonathan Bennett Co-Hosts: Ken McDonald and Rob Campbell Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
SXSW is in full swing, and Dave is multitasking like a pro—grabbing his SXXpress passes mid-recording while keeping the geeky goodness rolling. Quick Tips flood in, from clicking your scroll wheel to copy text in Terminal to checking CPU usage per Safari tab in Activity Monitor. Need hands-free Siri? Just […]
The PC hardware market has finally settled down with the release of AMD's new Radeon 9000 series and no more major CPU or GPU product launches later this year. So we assess the state of the PC union a bit this week, with a focus on the new AMD cards and their dramatically improved upscaling, ray-tracing, video encoding, and perhaps most of all, price. Plus, some updates on Intel's low-end Battlemage, Nvidia's mounting 50-series woes, the possible delay of Intel's next-gen Panther Lake CPU to 2026, new rumored low-power CPUs for Brad to get excited about running a Linux router on, and more. Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, a monthly bonus episode, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod
Wes and CJ break down everything Cloudflare—from Workers and R2 Storage to Hyperdrive and AI Gateway. Get the scoop on what makes Cloudflare tick, the quirks of their ecosystem, and whether vendor lock-in is a real concern. Show Notes 00:00 Welcome to Syntax! 01:40 Brought to you by Sentry.io. 01:58 What we're talking about today. 02:48 Cloudflare Workers. 03:06 How Cloudflare Workers… work. 04:39 How Cloudflare Workers run. 06:05 Workers size limitations in JavaScript. 07:37 Cloudflare has their own way. 08:13 Potential vendor lock-in. 08:51 You pay based on CPU time, not wall time. 10:26 Cloudflare Pages. Compatibility Matrix 12:07 Durable Objects. Zeb X Post. PartyKit.io, tldraw. 16:41 Cloudflare Workflows. 19:52 How we do something similar on Syntax.fm. 20:52 Cloudflare Queues. 25:26 Files. 26:15 R2 Storage. Ep 780: Cloud Storage: Bandwidth, Storage and BIG ZIPS. 28:00 The Open Bandwidth Alliance. 28:39 Image Pipelines. 33:24 Cloudflare Stream. Streaming Video in 2025. 34:24 Data. 36:37 Key Value. 40:16 Time To Live. 41:13 Hyperdrive. How It Works. Query caching. 44:01 Vectorize Data. 45:41 AI Gateway. 47:49 Automated Rate-Limiting. 48:50 Frameworks. Orange.js. 52:13 Analytics Engine. Counterscale. Ep 761: Cloudflare Analytics Engine, Workers + more with Ben Vinegar. 52:52 WebRTC Engine. 53:01 Puppeteer API. 54:09 Sick Picks + Shameless Plugs. Sick Picks CJ: Flush MicroSD Adapter for Macbook Wes: Synology. Shameless Plugs Wes: Syntax on YouTube. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
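The "CPU time, not wall time" billing point from the episode is worth a concrete demo. This Python sketch (an analogy you can run locally, not Cloudflare's actual runtime) shows why the distinction matters: time spent waiting on I/O advances the wall clock but barely touches the CPU clock, which is why a Worker that mostly awaits a backend is cheap to run:

```python
# Wall time vs CPU time: sleeping (a stand-in for awaiting a backend)
# costs wall-clock time but essentially zero CPU time, while actual
# computation burns both.
import time

wall_start = time.perf_counter()
cpu_start = time.process_time()

time.sleep(0.2)                      # "waiting on a backend": no CPU burned
sum(i * i for i in range(200_000))   # real computation: CPU time accrues

wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start
print(f"wall: {wall:.3f}s  cpu: {cpu:.3f}s")
assert cpu < wall  # the sleep is "free" on the CPU clock
```

Under CPU-time billing, the sleep in the middle costs nothing, which is exactly the property that makes I/O-heavy edge functions inexpensive.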