In this episode of Galaxy Brains, Alex Thorn speaks with Zack Pokorny, Galaxy Research Associate, about MESA, Galaxy Research's first-ever governance proposal. Designed to reduce Solana inflation through a more democratic voting framework, the proposal introduces multiple voting options rather than a binary outcome. Zack unpacks the motivation behind the proposal, compares it to SIMD-228, and explores how token emissions shape lending, staking, and yield dynamics across DeFi. Plus, Beimnet Abebe (Galaxy Trading) joins to discuss volatile market action amid tariff uncertainty, shifting global sentiment, and signs of policy indecision. This episode was recorded on Wednesday, April 23, 2025. ++ Follow us on Twitter, @glxyresearch, and read our research at www.galaxy.com/research/ to learn more! This podcast, and the information contained herein, has been provided to you by Galaxy Digital Holdings LP and its affiliates (“Galaxy Digital”) solely for informational purposes. View the full disclaimer at www.galaxy.com/disclaimer-galaxy-brains-podcast/
In this episode I hosted Chaofan from Solayer - I learned something new about Solana validators in this episode! Dive in and learn about how Solayer scales, how it works on sandwiching, and how they view talent.

Show Notes:
Your background and journey to Solayer
Is it possible to run the same thing on Ethereum with Solayer's approach?
How does the technical approach of Eigenlayer compare with Solayer's?
In your 2025 roadmap, why did you decide to expand from a restaking protocol to a hardware-accelerated SVM chain? What does the end user experience look like?
What other blockchains, from a technical standpoint, would you compare and contrast yourself with? Do you consider yourself an L1 or L2, or does this differentiation matter?
How does Solayer push 1M TPS state sync efficiently to thousands of nodes?
It is mentioned that Solayer made use of state-of-the-art AI technologies in accelerating the chain. Can you tell us more about it?
Users are sandwiched while trading on Solana, with some losing millions of dollars. What's your view on this, and how does Solayer plan to address this problem?
How does this roadmap tie to the issuance and circulation of sUSD and sSOL?
What's your personal take on SIMD-228? Why did you guys decide to sell 10% of your SIMD-0228 vote on Meteora?
Why do you guys do quite a lot of "public good" work that may not necessarily directly benefit Solayer? For example, trying to recover the funds after 1inch's MM got hacked, and in other cases publishing the details of security incidents.
I saw that you are hiring a security researcher for 1MM a year. Is that real?

If you like this episode, you're welcome to tip with Ethereum / Solana / Bitcoin:
ETH: 0x83Fe9765a57C9bA36700b983Af33FD3c9920Ef20
SOL: AaCeeEX5xBH6QchuRaUj3CEHED8vv5bUizxUpMsr1Kyt
BTC: 3ACPRhHVbh3cu8zqtqSPpzNnNULbZwaNqG

Important Disclaimer: All opinions expressed by Mable Jiang, or other podcast guests, are solely their opinion.
This podcast is for informational purposes only and should not be construed as investment advice. Mable Jiang may hold positions in some of the projects discussed on this show.
In this episode, Jito Labs CEO Lucas Bruder joined us at DAS to discuss crypto's growing dialogue with regulators, and the potential for staked ETFs. We also dive into Solana governance, SIMD-228 and SIMD-123, and the evolving validator landscape. Thanks for tuning in! As always, remember this podcast is for informational purposes only, and any views expressed by anyone on the show are solely their opinions, not financial advice. -- Resources JitoSOL Securities Classification Report: https://x.com/buffalu__/status/1902008044566839601 -- Special thanks to our sponsor, dYdX! Stay up-to-date with DeFi's Pro Trading Platform by following dYdX on X: https://x.com/dYdX -- Missed DAS? Join us from June 24th-June 26th at Permissionless IV! Tickets: https://blockworks.co/event/permissionless-iv -- Follow Lucas: https://x.com/buffalu__ Follow Dan: https://x.com/smyyguy Follow Boccaccio: https://x.com/salveboccaccio Follow Blockworks Research: https://x.com/blockworksres Subscribe on YouTube: https://bit.ly/3foDS38 Subscribe on Apple: https://apple.co/3SNhUEt Subscribe on Spotify: https://spoti.fi/3NlP1hA Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ Join the 0xResearch Telegram group: https://t.me/+z0H6y2bS-dllODVh -- Timestamps: (0:00) Introduction (1:09) JitoSOL Securities Classification Report (5:00) Staked ETFs (6:31) Thoughts on SIMD-228 & SIMD-123 (13:15) dYdX Snippet (18:01) The End State For Solana Validators (22:34) Solana Governance (25:36) Jito Foundation's Tokenomics Proposal (28:58) Are There Any Efficient DAOs? (31:25) The Future of Jito Tips -- Check out Blockworks Research today! Research, data, governance, tokenomics, and models – now, all in one place Blockworks Research: https://www.blockworksresearch.com/ Free Daily Newsletter: https://blockworks.co/newsletter -- Disclaimer: Nothing said on 0xResearch is a recommendation to buy or sell securities or tokens. 
This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. Boccaccio, Danny, and our guests may hold positions in the companies, funds, or projects discussed.
In this episode, Danny and Westie discuss their current outlook for the crypto market, Solana's controversial ad, and whether or not memes are truly dead. They analyze onchain data, explore governance dynamics around SIMD-228, and debate whether TradFi firms like Robinhood are poised to dominate established players in DeFi. The duo also reflects on the broader crypto landscape, offering insights on market sentiment, speculative cycles, and the importance of stepping back during uncertain times. Thanks for tuning in! As always, remember this podcast is for informational purposes only, and any views expressed by anyone on the show are solely their opinions, not financial advice. -- Resources BWR: Chain Comparison Dashboard: https://app.blockworksresearch.com/analytics/l1 SIMD-228 Voting Status: https://dune.com/kagren0/simd-0228-voting-status -- Ledger, the global leader in digital asset security, proudly sponsors 0xResearch! As Ledger celebrates 10 years of securing 20% of global crypto, it remains the top choice for securing your assets. Buy a LEDGER™ device now and build confidently, knowing your precious tokens are safe. Buy now on https://shop.ledger.com/?r=1da180a5de00. -- Doing your crypto taxes doesn't have to suck. Say goodbye to tax season stress with Crypto Tax Calculator - built for degens like you: 3,000+ integrations to support all your wallets, exchanges, and on-chain activity with deep integrations into dApps & smart contracts. A custom pricing oracle to handle even the most chaotic portfolios. Full support for IRS rules and accurate, CPA-endorsed tax reports. Generate reports your accountant will love or file directly yourself. Create an account. Import your wallets and exchanges. Review and download your tax report. Get started today and make tax season easy! You can use our exclusive discount code BW2025 to enjoy 30% off all paid CTC plans until April 15th, 2025. 
Follow this link to get started: https://cryptotaxcalculator.io/us/?coupon=BW2025&utm_source=blockworks&utm_medium=podcast&utm_campaign=0x -- Follow Westie: https://x.com/WestieCapital Follow Danny: https://x.com/defi_kay_ Follow Blockworks Research: https://x.com/blockworksres Subscribe on YouTube: https://bit.ly/3foDS38 Subscribe on Apple: https://apple.co/3SNhUEt Subscribe on Spotify: https://spoti.fi/3NlP1hA Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ Join the 0xResearch Telegram group: https://t.me/+z0H6y2bS-dllODVh -- Timestamps: (0:00) Introduction (1:59) Market Outlook (6:45) Solana's Advertisement and Subsequent Fallout (12:48) Ledger Ad (13:04) Are Memes Dead? (28:48) Ledger Ad (29:22) Base's Increased Memecoin Activity (35:22) SIMD-228 Fails to Pass (49:43) CryptoTaxCalculator Ad (50:14) Is TradFi going to Eat Our Lunch? (1:03:55) Closing Comments -- Check out Blockworks Research today! Research, data, governance, tokenomics, and models – now, all in one place Blockworks Research: https://www.blockworksresearch.com/ Free Daily Newsletter: https://blockworks.co/newsletter -- Disclaimer: Nothing said on 0xResearch is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. Boccaccio, Danny, and our guests may hold positions in the companies, funds, or projects discussed.
Gm! This week we're back with another weekly roundup to discuss why SIMD 228 didn't pass. We deep dive into Solana's governance problem, L1 vs L2 value accrual, Ethereum's scalability challenges, Solana's ultimate vision & more. Enjoy! -- Follow Mert: https://x.com/0xMert_ Follow Jack: https://x.com/whosknave Follow Lightspeed: https://twitter.com/Lightspeedpodhq Subscribe to the Lightspeed Newsletter: https://blockworks.co/newsletter/lightspeed Join the Lightspeed Manlets Collective Telegram: https://t.me/+QUl_ZOj2nMJlZTEx -- Accurate Crypto Taxes. No Guesswork. Say goodbye to tax season headaches with Crypto Tax Calculator: https://cryptotaxcalculator.io/us/?coupon=BW2025&utm_source=blockworks&utm_medium=referral+&utm_campaign=lightspeedpodcast Generate accurate, CPA-endorsed tax reports fully compliant with IRS rules. Seamlessly integrate with 3000+ wallets, exchanges, and on-chain platforms. Import reports directly into TurboTax or H&R Block, or securely share them with your accountant. Exclusive Offer: Use the code BW2025 to enjoy 30% off all paid plans. Don't miss out - offer expires 15 April 2025! -- Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ -- (00:00) Introduction (00:58) Why SIMD 228 Didn't Pass (09:36) Crypto Tax Calculator Ad (10:07) Validator Voting On SIMD 228 (20:37) Solana Governance (22:06) Should Solana End Its Delegation Program? (28:47) Solana's Inflation Problem (30:20) Crypto Tax Calculator Ad (30:53) L1 vs L2 Value Accrual (39:23) Ethereum Scalability (41:57) Solana's Ultimate Vision -- Disclaimers: Lightspeed was kickstarted by a grant from the Solana Foundation. Nothing said on Lightspeed is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. 
Mert, Jack, and our guests may hold positions in the companies, funds, or projects discussed.
In this episode, we dive into L1 governance! We discuss the implications of SIMD-228, validator voting structures, and comparisons with Ethereum, Cosmos, and Bitcoin governance. We debate whether decentralization inherently leads to better decision-making, and analyze the role of foundations, token holders, and validators in shaping networks. Finally, we ask ourselves - why should anyone hold L1 tokens? Thanks for tuning in! -- Resources 0xResearch: What Does SIMD-228 Mean For Solana?: https://apple.co/43Ngi6T Lightspeed: Why Solana Should Change Its Inflation Rate: https://apple.co/3FKFzVe SIMD-228 Deep Dive: https://x.com/0xcarlosg/status/1899570952943387023 -- Namada is the shielded asset hub rewarding you to protect the multichain. Enabling data protection for any existing asset, app, or chain, Namada introduces shielded cross-chain actions and rewards for shielding your assets, which strengthens data protection guarantees for everyone. Namada is currently entering phase 3 of its mainnet launch — follow along on https://namada.net/ -- Ledger, the global leader in digital asset security, proudly sponsors Bell Curve! As Ledger celebrates 10 years of securing 20% of global crypto, it remains the top choice for securing your assets. Buy a LEDGER™ device now, and build confidently, knowing your BTC, ETH, SOL, and more are safe. Buy now on https://shop.ledger.com/?r=1da180a5de00. -- Traders, AI enthusiasts, and degens—listen up! The game is changing, and DESK is leading the charge. Welcome to the next generation of perpetual trading—where cutting-edge technology meets AI-driven automation. Trade today at https://desk.exchange/?register-channel=BWpodcast and follow @TradeOnDESK on X! -- Morpho is a permissionless lending platform that allows anyone to earn yield and borrow assets on your terms. Its flexible, trustless infrastructure also empowers developers and businesses to build and tailor their own financial products. 
Whether you're an individual, fund, fintech, or institution, Morpho is for you. Try Morpho today: https://app.morpho.org/?network=mainnet&spdl=99nsk9 -- Follow Cooper: https://x.com/cooper_kunz Follow Jim: https://x.com/VelvetMilkman Follow Myles: https://x.com/MylesOneil Follow Mike: https://x.com/MikeIppolito_ Subscribe on YouTube: https://bit.ly/3R1D1D9 Subscribe on Apple: https://apple.co/3pQTfmD Subscribe on Spotify: https://spoti.fi/3cpKZXH Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ Join the Bell Curve Telegram group: https://t.me/+nzyxAvQ0Xxc3YTEx -- Timestamps: (0:00) Introduction (1:57) L1 Governance Today (14:14) Ads (Namada & Ledger) (14:48) Governance Models and the Role of Foundations (28:54) Ads (Namada & Ledger) (30:04) What Governance Models Work? (35:47) Was SIMD-228 Rushed? (44:27) Ads (Desk & Morpho) (45:22) Ossification in L1 Governance (52:18) Bitcoin Governance (57:30) Why Hold L1 Tokens? -- Disclaimer: Nothing said on Bell Curve is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. Mike, Jason, Michael, Vance and our guests may hold positions in the companies, funds, or projects discussed.
Epicenter - Learn about Blockchain, Ethereum, Bitcoin and Distributed Technologies
Solana needs no introduction. Ever since its inception, it has pushed throughput scaling on a single chain, without the need for sharding or rollups. Despite ups and downs that culminated at the bottom of the bear market after the FTX crash, it managed not only to survive, but to build a vibrant community around crypto's (arguably) most prominent PMF (thus far). Join us for a fascinating discussion and learn about Anatoly's take on controversial topics such as MEV, concurrent block leaders (the equivalent of Ethereum's PBS proposal), L2 rollups, Solana economics, how to tackle potential exploits, and more.

Topics covered in this episode:
How the original Solana vision turned out
What makes blockchains valuable
MEV & program writable accounts
Concurrent block proposers
Current bottlenecks for scaling Solana
Mainnet vs. L2 rollups
Firedancer upgrade
Halting the network vs. rollbacks
Solana's scaling roadmap
DoubleZero
Worst hacks on Solana
UI exploits, Bybit hack and smart contract security
Solana economics and the SIMD-0228 proposal
Future improvements
Use cases for blockchains
Solana mobile

Episode links:
Anatoly Yakovenko on X
Solana on X
Solana Mobile on X

Sponsors:
Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
Chorus One: one of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at chorus.one

This episode is hosted by Brian Fabian Crain & Martin Köppelmann.
The Solana ecosystem just completed a critical governance vote. SIMD-228, a proposal to tie Solana's inflation rate to its staking participation rate, was put forward by Multicoin Capital and Anza, but despite a majority voting in favor, it failed to meet the required supermajority to pass. Tushar Jain, co-founder and managing partner at Multicoin Capital, who co-authored the proposal, joins the show to discuss:
Why he believes the proposal was necessary
Whether inflation is too high for Solana's long-term health
If some validators voted against their own interests
The silver lining of the governance process
Why a smaller proposal focused on fee sharing did pass
Whether Multicoin Capital will resubmit a revised proposal

Visit our website for breaking news, analysis, op-eds, articles to learn about crypto, and much more: unchainedcrypto.com

Thank you to our sponsors!
BitKey: Use code UNCHAINED for 20% off
Mantle

Guest: Tushar Jain, Co-founder and Managing Partner at Multicoin Capital

Links:
Unchained: Proposal to Cut SOL Inflation by 80% Fails After ‘Close Call'
SIMD Vote Status
Kayuza's tweet

Timestamps:
In this week's Roundup, the crew dives into the growing intersection of AI and crypto, and the debate over AI agents transacting onchain. They explore the viability of zkAI, VC interest in crypto amid an AI-driven funding landscape, and the growing divergence between the OP Stack and Arbitrum. Finally, they tackle on-chain governance struggles, the fixation on block times, and why, despite bearish sentiment, the long-term future of crypto has never looked brighter. Enjoy! Resources OP Stack & Arbitrum Stack diverging: https://x.com/pumatheuma/status/1897746210351726883 Funding Rounds in Crypto: https://defillama.com/raises Mert's SIMD-228 Post: https://x.com/0xMert_/status/1900172113287565754 – L2s and L3s are history. Supra Containers give you dedicated, customizable AppSpace on Supra's Layer-1 to rescue you from the costs, complexities, and fragmentation of L2s and L3s. Containers help you build with better customization and control than appchains at a fraction of the cost. Use your own token as the gas token, create local fee markets with custom gas amounts or just go gasless, and scale on demand whenever you need to. Supra Containers are secured by Supra's L1 nodes and get access to Supra's 500k TPS throughput, sub-second consensus latency, and all their built-in services like oracle price feeds and onchain randomness without any overhead. Supra is also MultiVM compatible so you can easily deploy your EVM, Move, and SVM smart contracts here. Get all the freedom, control, and tools you need to build super dApps and bring the world onchain. 
To learn more, visit www.supra.com/blockworks - - Follow Jill: https://x.com/jillrgunter Follow Nick: https://x.com/nickwh8te Follow Uma: https://x.com/pumatheuma Follow Mike: https://x.com/MikeIppolito_ Follow Expansion: https://x.com/ExpansionPod_ Subscribe on YouTube: https://bit.ly/3QLwfTs Subscribe on Apple: http://apple.co/4bGKYYM Subscribe on Spotify: http://spoti.fi/3Vaubq1 Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ -- Timestamps: (00:00) Introduction (02:57) Why Would AI Agents Transact Onchain? (07:17) Is zkAI Viable? (10:45) Supra Pre-Roll (11:10) Does Grok Suck? (12:59) VC Interest in Crypto (15:30) OP Stack and Arbitrum Orbit Stack Diverging (27:02) Are Regulatory Pressures Impacting L2 Models? (34:16) Supra Mid-Roll (35:00) The Fixation on Block Times (40:11) Onchain Governance Struggles (55:57) Optimism For Crypto's Future - - Disclaimer Expansion was kickstarted by a grant from the Celestia Foundation. Nothing said on Expansion is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. Michael, Nick, and our guests may hold positions in the companies, funds, or projects discussed.
In this episode, our Blockworks Research analysts dive into key market updates, including the U.S. Strategic Bitcoin Reserve announcement, and Bitcoin's outlook going forward. They also explore Solana's SIMD-228 proposal, and its implications for validators and staking rewards. Finally, they discuss the future of real-world assets onchain. Thanks for tuning in! As always, remember this podcast is for informational purposes only, and any views expressed by anyone on the show are solely their opinions, not financial advice. -- Resources Bitcoin February 2025 Update: https://app.blockworksresearch.com/flashnotes/bitcoin-february-2025-update SIMD-228 Deep Dive: https://x.com/0xcarlosg/status/1899570952943387023 The Effects of SIMD-96: https://x.com/0xcarlosg/status/1892181938770694418 -- Ledger, the global leader in digital asset security, proudly sponsors 0xResearch! As Ledger celebrates 10 years of securing 20% of global crypto, it remains the top choice for securing your assets. Buy a LEDGER™ device now and build confidently, knowing your precious tokens are safe. Buy now on https://shop.ledger.com/?r=1da180a5de00. -- Doing your crypto taxes doesn't have to suck. Say goodbye to tax season stress with Crypto Tax Calculator - built for degens like you: 3,000+ integrations to support all your wallets, exchanges, and on-chain activity with deep integrations into dApps & smart contracts. A custom pricing oracle to handle even the most chaotic portfolios. Full support for IRS rules and accurate, CPA-endorsed tax reports. Generate reports your accountant will love or file directly yourself. Create an account. Import your wallets and exchanges. Review and download your tax report. Get started today and make tax season easy! You can use our exclusive discount code BW2025 to enjoy 30% off all paid CTC plans until April 15th, 2025. 
Follow this link to get started: https://cryptotaxcalculator.io/us/?coupon=BW2025&utm_source=blockworks&utm_medium=podcast&utm_campaign=0x -- Join us at DAS NYC 2025! Use code 0x10 for a 10% discount: https://blockworks.co/event/digital-asset-summit-2025-new-york -- 0xResearch needs your help! We're conducting an audience survey to help us get a better picture of who our listeners are, and what you want to see from the show. What do you like about the show? What can we improve on? To contribute, follow this link: https://blockworks-research.beehiiv.com/forms/a97db4d7-5ff3-4a02-9089-d521bc64babd -- Follow Carlos: https://x.com/0xcarlosg Follow Marc: https://x.com/marcarjoon Follow Danny: https://x.com/defi_kay_ Follow Blockworks Research: https://x.com/blockworksres Subscribe on YouTube: https://bit.ly/3foDS38 Subscribe on Apple: https://apple.co/3SNhUEt Subscribe on Spotify: https://spoti.fi/3NlP1hA Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ Join the 0xResearch Telegram group: https://t.me/+z0H6y2bS-dllODVh -- Timestamps: (0:00) Introduction (1:06) Reactions to the Strategic Bitcoin Reserve (5:00) Bitcoin's Outlook Going Forward (15:37) Ledger Ad (15:53) TradFi's Plans to Enable 24-Hour Trading (23:48) Ledger Ad (24:21) The SIMD-228 Debate (37:03) Future Proposals After SIMD-228 (44:09) CryptoTaxCalculator Ad (44:41) Expectations For RWAs (57:56) DAS NYC 2025 -- Check out Blockworks Research today! Research, data, governance, tokenomics, and models – now, all in one place Blockworks Research: https://www.blockworksresearch.com/ Free Daily Newsletter: https://blockworks.co/newsletter -- Disclaimer: Nothing said on 0xResearch is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. 
Boccaccio, Danny, and our guests may hold positions in the companies, funds, or projects discussed.
Gm! This week we're joined by Carlos Gonzalez Campo to discuss the current state of the Solana network. We deep dive into SIMD 96 & 228, why Pump.fun is launching its own AMM, REV declines in February, stablecoin growth & more. Enjoy! -- Follow Carlos: https://x.com/0xcarlosg Follow Jack: https://x.com/whosknave Follow Lightspeed: https://twitter.com/Lightspeedpodhq Subscribe to the Lightspeed Newsletter: https://blockworks.co/newsletter/lightspeed Join the Lightspeed Manlets Collective Telegram: https://t.me/+QUl_ZOj2nMJlZTEx -- Use Code LIGHTSPEED10 for 10% off tickets to Digital Asset Summit 2025: https://blockworks.co/event/digital-asset-summit-2025-new-york -- Ledger, the global leader in digital asset security, proudly sponsors the Lightspeed podcast. As Ledger celebrates 10 years of securing 20% of global crypto, it remains the top choice for securing your Solana assets. Buy a LEDGER™ device now and build confidently, knowing your SOL are safe. Buy now on https://shop.ledger.com/?r=1da180a5de00. -- Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ -- (00:00) Introduction (01:23) Takeaways From SIMD 96 (06:47) Staking On Solana (10:04) Solana Data Deep Dive (13:05) Ledger Ad (13:55) REV On Solana (16:56) Why Pump.fun Is Launching An AMM (21:13) Solana's DEX War (26:23) REV Declines Post LIBRA (30:11) Ledger Ad (30:59) Stablecoin Growth On Solana (36:40) SIMD 228 (42:06) Nominal vs Real Yields -- Disclaimers: Lightspeed was kickstarted by a grant from the Solana Foundation. Nothing said on Lightspeed is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. Mert, Jack, and our guests may hold positions in the companies, funds, or projects discussed.
Vector Databases for Recommendation Engines: Episode Notes

Introduction
- Vector databases power modern recommendation systems by finding relationships between entities in high-dimensional space
- Unlike traditional databases that rely on exact matching, vector DBs excel at finding similar items
- Core application: discovering hidden relationships between products, content, or users to drive engagement

Key Technical Concepts

Vector/Embedding: numerical array that represents an entity in n-dimensional space
- Example: [0.2, 0.5, -0.1, 0.8], where each dimension represents a feature
- Similar entities have vectors that are close to each other mathematically

Similarity Metrics
- Cosine similarity: measures the angle between vectors (-1 to 1)
- Efficient computation: dot_product / (magnitude_a * magnitude_b)
- Intuitively: measures alignment regardless of vector magnitude

Search Algorithms
- Exact nearest neighbor: find the K closest vectors (computationally expensive)
- Approximate nearest neighbor (ANN): trades perfect accuracy for speed
- Computational complexity reduction: O(n) → O(log n) with specialized indexing

The "Five Whys" of Vector Databases
1. Traditional databases can't find "similar" items
   - Relational DBs excel at WHERE category = 'shoes'
   - They can't efficiently answer "What's similar to this product?"
   - Vector similarity enables fuzzy matching beyond exact attributes
2. Modern ML represents meaning as vectors
   - Language models encode semantics in vector space
   - Mathematical operations on vectors reveal hidden relationships
   - Domain-specific features emerge from high-dimensional representations
3. Computation costs explode at scale
   - Computing similarity across millions of products is compute-intensive
   - Specialized indexing structures dramatically reduce computational complexity
   - Vector DBs optimize specifically for high-dimensional similarity operations
4. Better recommendations drive business metrics
   - Major e-commerce platforms attribute ~35% of revenue to recommendation engines
   - Media platforms: 75%+ of content consumption comes from recommendations
   - Small improvements in relevance directly impact the bottom line
5. Continuous learning creates a compounding advantage
   - Each customer interaction refines the recommendation model
   - Vector-based systems adapt without complete retraining
   - Data advantages compound over time

Recommendation Patterns

Content-based recommendations
- "Similar to what you're viewing now"
- Based purely on item feature vectors
- Key advantage: works with zero user history (solves cold start)

Collaborative filtering via vectors
- "Users like you also enjoyed..."
- User preference vectors derived from interaction history
- Item vectors derived from which users interact with them

Hybrid approaches
- Combine content and collaborative signals
- Example: item vectors + recency weighting + popularity bias
- Balance relevance with exploration for discovery

Implementation Considerations

Memory vs. disk tradeoffs
- In-memory for fastest performance (sub-millisecond latency)
- On-disk for larger vector collections
- Hybrid approaches for the best performance/scale balance

Scaling thresholds
- Exact search viable to ~100K vectors
- Approximate algorithms necessary beyond that threshold
- Distributed approaches for internet-scale applications

Emerging technologies
- Rust-based vector databases (Qdrant) for performance-critical applications
- WebAssembly deployment for edge computing scenarios
- Specialized hardware acceleration (SIMD instructions)

Business Impact

E-commerce applications
- Product recommendations drive a 20-30% increase in cart size
- "Similar items" implemented with vector similarity
- Cross-category discovery through latent feature relationships

Content platforms
- Increased engagement through personalized content discovery
- Reduced bounce rates with relevant recommendations
- Balanced exploration/exploitation for long-term engagement

Social networks
- User similarity for community building and engagement
- Content discovery through user clustering
- Following recommendations based on interaction patterns

Technical Implementation

Core operations
- insert(id, vector): add an entity vector to the database
- search_similar(query_vector, limit): find the K nearest neighbors
- batch_insert(vectors): efficiently add multiple vectors

Similarity computation:

```rust
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot_product: f32 = a.iter().zip(b.iter()).map(|(x, y)| x * y).sum();
    let mag_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let mag_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if mag_a > 0.0 && mag_b > 0.0 {
        dot_product / (mag_a * mag_b)
    } else {
        0.0
    }
}
```

Integration touchpoints
- Embedding pipeline: convert raw data to vectors
- Recommendation API: query for similar items
- Feedback loop: capture interactions to improve the model

Practical Advice

Start simple
- Begin with in-memory vector database for
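The core operations named in the notes (insert and search_similar) can be sketched as a small brute-force exact nearest-neighbor store. This is a minimal illustration under assumptions, not a production vector database: the VectorStore type and the sample item vectors are invented for the example, and a real system would swap the O(n) scan for ANN indexing once the collection grows past the ~100K range discussed above.

```rust
// Minimal in-memory sketch: exact nearest-neighbor search by cosine
// similarity. Illustrative only; VectorStore and the sample data below
// are hypothetical, not a specific library's API.

fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b.iter()).map(|(x, y)| x * y).sum();
    let mag_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let mag_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if mag_a > 0.0 && mag_b > 0.0 { dot / (mag_a * mag_b) } else { 0.0 }
}

struct VectorStore {
    entries: Vec<(String, Vec<f32>)>,
}

impl VectorStore {
    fn new() -> Self {
        VectorStore { entries: Vec::new() }
    }

    // insert(id, vector): add an entity vector to the store.
    fn insert(&mut self, id: &str, vector: Vec<f32>) {
        self.entries.push((id.to_string(), vector));
    }

    // search_similar(query_vector, limit): brute-force top-K by cosine
    // similarity. O(n) per query, viable for small collections.
    fn search_similar(&self, query: &[f32], limit: usize) -> Vec<(String, f32)> {
        let mut scored: Vec<(String, f32)> = self
            .entries
            .iter()
            .map(|(id, v)| (id.clone(), cosine_similarity(query, v)))
            .collect();
        // Sort by descending similarity; total_cmp gives a total order on f32.
        scored.sort_by(|a, b| b.1.total_cmp(&a.1));
        scored.truncate(limit);
        scored
    }
}

fn main() {
    let mut store = VectorStore::new();
    store.insert("running_shoe", vec![0.9, 0.1, 0.0]);
    store.insert("trail_shoe", vec![0.8, 0.2, 0.1]);
    store.insert("blender", vec![0.0, 0.1, 0.95]);

    // "What's similar to this product?" -- the query exact-match
    // relational lookups can't answer.
    for (id, score) in store.search_similar(&[0.85, 0.15, 0.05], 2) {
        println!("{}: {:.3}", id, score);
    }
}
```

Both shoe vectors score near 1.0 against the shoe-like query while the blender scores near 0, which is the fuzzy-matching behavior the notes contrast with WHERE-clause filtering.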
Gm! This week we're back with another roundup episode to discuss SIMD 228 & its impact on Solana's inflation. We deep dive into what SIMD 228 is, whether Solana is overpaying for security, crypto's data problem & Trump's strategic crypto reserve announcement. Enjoy! -- Follow Dan: https://x.com/smyyguy Follow Mert: https://x.com/0xMert_ Follow Jack: https://x.com/whosknave Follow Lightspeed: https://twitter.com/Lightspeedpodhq Subscribe to the Lightspeed Newsletter: https://blockworks.co/newsletter/lightspeed Join the Lightspeed Manlets Collective Telegram: https://t.me/+QUl_ZOj2nMJlZTEx -- Use Code LIGHTSPEED10 for 10% off tickets to Digital Asset Summit 2025: https://blockworks.co/event/digital-asset-summit-2025-new-york -- Just for Lightspeed listeners, visit https://bonkbets.io/ and connect your wallet to get 10% back on any losses to bet again. For a limited time, make two $10 bets on any game before March 1 2025 to get a free third $10 bet. -- Accurate Crypto Taxes. No Guesswork. Say goodbye to tax season headaches with Crypto Tax Calculator: https://cryptotaxcalculator.io/us/?coupon=BW2025 Generate accurate, CPA-endorsed tax reports fully compliant with IRS rules. Seamlessly integrate with 3000+ wallets, exchanges, and on-chain platforms. Import reports directly into TurboTax or H&R Block, or securely share them with your accountant. Exclusive Offer: Use the code BW2025 to enjoy 30% off all paid plans. Don't miss out - offer expires 15 April 2025! -- Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ -- (00:00) Introduction (01:06) What Is SIMD 228? (07:41) Bonk Bets Ad (08:20) Is Solana Overpaying For Security? 
(19:23) Solana's Inflation Rate (25:14) SIMD 228's Impact On Solana's Decentralization (35:44) Crypto Tax Calculator Ad (36:17) Bonk Bets Ad (36:48) Crypto's Data Problem (41:29) Validators On Solana (51:18) The U.S. Strategic Crypto Reserve (57:20) Flashblocks On Base -- Disclaimers: Lightspeed was kickstarted by a grant from the Solana Foundation. Nothing said on Lightspeed is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. Mert, Jack, and our guests may hold positions in the companies, funds, or projects discussed.
In this episode, we're joined by Ian Unsworth, Co-founder of Kairos Research, to discuss restaking, compressed yields, and SIMD-228. We also dive into Solana's vibe shift, Solana REV post TRUMP and LIBRA, and Unichain's slow start. Finally, we end the episode with some yapping about Kaito, new high performance networks, and what a restaking guy looks like. Thanks for tuning in! As always, remember this podcast is for informational purposes only, and any views expressed by anyone on the show are solely their opinions, not financial advice. -- Resources SIMD-228: Reducing SOL's Inflation: https://x.com/blockworksres/status/1880616152776552694 Kairos Research x Firstset: https://x.com/Kairos_Res/status/1886811200345817547 -- Special thanks to our sponsor, dYdX! Stay up-to-date with DeFi's Pro Trading Platform by following dYdX on X: https://x.com/dYdX -- Join us at DAS NYC 2025! Use code 0x10 for a 10% discount: https://blockworks.co/event/digital-asset-summit-2025-new-york -- 0xResearch needs your help! We're conducting an audience survey to help us get a better picture of who our listeners are, and what you want to see from the show. What do you like about the show? What can we improve on? To contribute, follow this link: https://blockworks-research.beehiiv.com/forms/a97db4d7-5ff3-4a02-9089-d521bc64babd -- Follow Ian: https://x.com/Ian_Unsworth Follow Dan Smith: https://x.com/smyyguy Follow Danny: https://x.com/defi_kay_ Follow Boccaccio: https://x.com/salveboccaccio Follow Blockworks Research: https://x.com/blockworksres Subscribe on YouTube: https://bit.ly/3foDS38 Subscribe on Apple: https://apple.co/3SNhUEt Subscribe on Spotify: https://spoti.fi/3NlP1hA Get top market insights and the latest in crypto news. 
Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ Join the 0xResearch Telegram group: https://t.me/+z0H6y2bS-dllODVh -- Timestamps: (0:00) Introduction (1:54) Restaking, Compressed Yields & SIMD-228 (10:10) Solana's Vibe Shift (20:57) Solana REV Post Trump & Libra (32:09) Unichain's Launch (34:28) dYdX Snippet (39:24) Yapping About Kaito (57:29) Kairos Research x Firstset (1:00:47) New High Performance Networks (1:14:18) Closing Comments -- Check out Blockworks Research today! Research, data, governance, tokenomics, and models – now, all in one place Blockworks Research: https://www.blockworksresearch.com/ Free Daily Newsletter: https://blockworks.co/newsletter -- Disclaimer: Nothing said on 0xResearch is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. Boccaccio, Danny, and our guests may hold positions in the companies, funds, or projects discussed.
Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here!

We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left.

If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded >300,000,000 times, making it one of the top 25 PyPI packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: “Pydantic is all you need” and “Pydantic is STILL all you need”. Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open source project into a full stack AI engineer platform with Logfire, their observability platform, and PydanticAI, their new agent framework.

Logfire: bringing OTEL to AI

OpenTelemetry recently merged Semantic Conventions for LLM workloads, which provides standard definitions to track performance like gen_ai.server.time_per_output_token. In Sam's view, at least 80% of new apps being built today have some sort of LLM usage in them, and just like web observability platforms got replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps. If you're interested in the technical details, Logfire migrated away from Clickhouse to DataFusion for their backend. 
We spent some time on the importance of picking open source tools you understand and that you can actually contribute to upstream, rather than the more popular ones; listen in ~43:19 for that part.

Agents are the killer app for graphs

Pydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc. They define an “Agent” as a container with a system prompt, tools, structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed.

“We were compelled enough by graphs once we got them right that our agent implementation [...] is now actually a graph under the hood.”

Why Graphs?

* More natural for complex or multi-step AI workflows.
* Easy to visualize and debug with mermaid diagrams.
* Potential for distributed runs, or “waiting days” between steps in certain flows.

In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously.

Full Video Episode

Like and subscribe!

Chapters

* 00:00:00 Introductions
* 00:00:24 Origins of Pydantic
* 00:05:28 Pydantic's AI moment
* 00:08:05 Why build a new agents framework?
* 00:10:17 Overview of Pydantic AI
* 00:12:33 Becoming a believer in graphs
* 00:24:02 God Model vs Compound AI Systems
* 00:28:13 Why not build an LLM gateway?
* 00:31:39 Programmatic testing vs live evals
* 00:35:51 Using OpenTelemetry for AI traces
* 00:43:19 Why they don't use Clickhouse
* 00:48:34 Competing in the observability space
* 00:50:41 Licensing decisions for Pydantic and LogFire
* 00:51:48 Building Pydantic.run
* 00:55:24 Marimo and the future of Jupyter notebooks
* 00:57:44 London's AI scene

Show Notes

* Sam Colvin
* Pydantic
* Pydantic AI
* Logfire
* Pydantic.run
* Zod
* E2B
* Arize
* Langsmith
* Marimo
* Prefect
* GLA (Google Generative Language API)
* OpenTelemetry
* Jason Liu
* Sebastian Ramirez
* Bogomil Balkansky
* Hood Chatham
* Jeremy Howard
* Andrew Lamb

Transcript

Alessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true?

Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure.

Swyx [00:00:29]: Pydantic almost basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic and AI.

Samuel [00:00:45]: Yeah, it's weird because obviously I didn't create Pydantic originally for use in AI; it predates LLMs. But it's like we've been lucky that it's been picked up by that community and used so widely.

Swyx [00:00:58]: Actually, maybe we'll hear it right from you: what is Pydantic, and maybe a little bit of the origin story?

Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123, and a bunch of other sensible conversions. And as you can imagine, the semantics around it, exactly when you convert and when you don't, are complicated, but because of that, it's more than just validation. 
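The lax-versus-strict behavior Samuel describes looks roughly like this in Pydantic v2 (a minimal sketch, assuming Pydantic v2 is installed; the `Invoice` model and its field are invented for illustration):

```python
from pydantic import BaseModel, ConfigDict, ValidationError

class Invoice(BaseModel):
    # The type hint *is* the schema; by default Pydantic coerces sensibly.
    amount: int

class StrictInvoice(BaseModel):
    # Strict mode disables that coercion entirely.
    model_config = ConfigDict(strict=True)
    amount: int

# Lax (default) mode: the string "123" is coerced to the int 123.
print(Invoice(amount="123").amount)  # 123

# Strict mode: the same input is rejected.
try:
    StrictInvoice(amount="123")
except ValidationError:
    print("strict mode rejects the string")

# The same model also serves as the source of truth for a JSON schema,
# the property that made it convenient for structured outputs and tools:
print(Invoice.model_json_schema()["properties"])
```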
Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time. It was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python. And indeed, lots of other people have copied that route, but yeah, it's a data validation library. It uses type hints for the most part and obviously does all the other stuff you want, like serialization on top of that. But yeah, that's the core.

Alessio [00:02:06]: Do you have any fun stories on how JSON schemas ended up being kind of like the structured output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structured output council in open source that people were talking about, or was it just random?

Samuel [00:02:26]: No, very much not. So I originally didn't implement JSON schema inside Pydantic, and then Sebastian, Sebastian Ramirez of FastAPI, came along, and like the first I ever heard of him was over a weekend I got like 50 emails from him as he was committing to Pydantic, adding JSON schema long pre version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON schema that got picked up and used by OpenAI. It was obviously very convenient for us. That's because it meant that not only can you do the validation, but because Pydantic will generate you the JSON schema, it kind of can be one source of truth for structured outputs and tools.

Swyx [00:03:09]: Before we dive in further on the AI side of things, something I'm mildly curious about: obviously, there's Zod in JavaScript land. 
Every now and then there is a new sort of in-vogue validation library that takes over for quite a few years and then maybe something else comes along. Is Pydantic done, like the core Pydantic?

Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2 did; v2 was the massive rewrite into Rust, but also fixing all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to basically store the data in Rust types after validation. Not completely. So we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing like validation and then serialization, you would never have to go via a Python type. We reckon that can give us somewhere between another three and five times speed up. That's probably the biggest thing. Also, like changing how easy it is to basically extend Pydantic and define how particular types, like for example NumPy arrays, are validated and serialized. But there's also stuff going on. And for example, Jiter, the JSON library in Rust that does the JSON parsing, has a SIMD implementation at the moment only for AMD64. We need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster, continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization for when you'll just want to put the data into a database and probably load it again from Pydantic. 
So there are some things that will come along, but for the most part, it should just get faster and cleaner.

Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising? And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLMs, and when did you figure out, okay, this is something we should take seriously and focus more resources on it?

Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then at the beginning of '22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because that would never get signed off inside a startup. Like, we're going to go off and three engineers are going to work full-on for a year in Python and Rust, writing like 30,000 lines of Rust just to release a free, open-source Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And it's like, Pydantic is not just used in the SDKs of all of the AI libraries, but, and I can't say which one, but at one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal metric of model performance, time to first token, went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency, inside requests was actually Pydantic, which shows like how widely it's used. So we've benefited from doing that work, although it would never have made financial sense in most companies. 
In answer to your question about like how do we prioritize AI, I mean, the honest truth is we've spent a lot of the last year and a half building good general purpose observability inside LogFire and making Pydantic good for general purpose use cases. And the AI has kind of come to us. Like we just, not that we want to get away from it, but like the appetite, uh, both in Pydantic and in LogFire to go and build with AI is enormous, because it kind of makes sense, right? Like if you're starting a new greenfield project in Python today, what's the chance that you're using GenAI? 80%, let's say, globally. Obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. Yeah. And so everyone needs that stuff. And there's so much yet to be figured out, so much like space to do things better in the ecosystem, in a way that like to go and implement a database that's better than Postgres is a like Sisyphean task. Whereas building, uh, tools that are better for GenAI than some of the stuff that's about now is not very difficult. Putting the actual models themselves to one side.

Alessio [00:07:40]: And then at the same time, then you released Pydantic AI recently, which is, uh, um, you know, an agent framework, and early on, I would say everybody like, you know, Langchain and like, uh, Pydantic kind of like a first class support, a lot of these frameworks were trying to use you to be better. What was the decision behind "we should do our own framework"? Were there any design decisions that you disagree with, any workloads that you think people didn't support well?

Samuel [00:08:05]: It wasn't so much like design and workflow, although I think there were some things we've done differently. Yeah. I think looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem. 
There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would like basically skip some stuff.

Samuel [00:08:42]: I'm shocked by the like quality of some of the agent frameworks that have come out recently from like well-respected names, which just seems to be opportunism, and I have little time for that. But like the early ones, like I think they were just figuring out how to do stuff, and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think from like the gap we saw, and the thing we were frustrated by, was the production readiness. And that means things like type checking, even if type checking makes it hard. Like Pydantic AI, I will put my hand up now and say it has a lot of generics and you need to, it's probably easier to use it if you've written a bit of Rust and you really understand generics. But like, and that is, we're not claiming that that makes it the easiest thing to use in all cases; we think it makes it good for production applications in big systems where type checking is a no-brainer in Python. But there are also a bunch of stuff we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run as part of tests, and every single print output within an example is checked during tests. So it will always be up to date. 
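Pydantic AI uses its own tooling for this, but Python's stdlib doctest module illustrates the same idea of executing documentation and comparing printed output (the `add` function here is an invented example, not from the Pydantic AI docs):

```python
import doctest

def add(a: int, b: int) -> int:
    """Add two integers.

    The example below is executed, and its output is compared verbatim
    against the docstring, so the docs can never silently go stale:

    >>> add(2, 3)
    5
    """
    return a + b

results = doctest.testmod()
print(results.failed)  # 0 when every documented example still matches
```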
And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem but are, surprisingly, not followed by some AI libraries: coverage, linting, type checking, et cetera, et cetera, where I think these are no-brainers, but like weirdly they're not followed by some of the other libraries.

Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there's kind of like the LLM calling frameworks, there are the multi-agent frameworks, there's the workflow frameworks. Like, what does Pydantic AI do?

Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks, but like, and I will tell you, when I built Pydantic, when I built Logfire and when I built Pydantic AI, my methodology is not to go and like research and review all of the other things. I kind of work out what I want and I go and build it and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them is obviously ambiguous, and our things are probably sort of agent-lite, not that we would want to go and rename them to agent-lite, but like the point is you probably build them together to build something that most people will call an agent. So an agent in our case has, you know, things like a prompt, like system prompt, and some tools, and a structured return type if you want it. That covers the vast majority of cases. There are situations where you want to go further, and the most complex workflows where you want graphs, and I resisted graphs for quite a while. I was sort of of the opinion you didn't need them and you could use standard like Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to, yeah, I can totally see why graphs are useful. 
But then we have the problem that by default, they're not type safe. Because if you have a like add_edge method where you give the names of two different edges, there's no type checking, right? Even if you go and do some, I'm not, not all the graph libraries are AI specific. So there's a, there's a graph library called, but it allows, it does like a basic runtime type checking, ironically using Pydantic, to try and make up for the fact that like fundamentally graphs are not type safe. Well, I like Pydantic, but that's not a real solution, to have to go and run the code to see if it's safe. There's a reason that static type checking is so powerful. And so we kind of, from a lot of iteration, eventually came up with a system of using normally dataclasses to define nodes, where you return the next node you want to call, and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is, yeah, inherently type safe. And once we got that right, I, I wasn't, I'm incredibly excited about graphs. I think there's like masses of use cases for them, both in gen AI and other development. But also software's all going to have to interact with gen AI, right? It's going to be like web. There'll no longer be like a web department in a company; it's just like all the developers are building for web, building with databases. The same is going to be true for gen AI.

Alessio [00:12:33]: Yeah. I see on your docs, you call an agent a container that contains a system prompt, function tools, structured result, dependency type, model, and then model settings. Are the graphs in your mind different agents? Are they different prompts for the same agent? What are like the structures in your mind?

Samuel [00:12:52]: So we were compelled enough by graphs once we got them right that we actually merged the PR this morning. 
That means our agent implementation, without changing its API at all, is now actually a graph under the hood, as it is built using our graph library. So graphs are basically a lower level tool that allow you to build these complex workflows. Our agents are technically one of the many graphs you could go and build. And we just happened to build that one for you because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows where the current agent assumptions don't work. And that's where you can then go and use graphs to build more complex things.

Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?

Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for. And my like, yeah, but you could do that in standard flow control in Python became a like less and less compelling argument to me, because I've maintained those systems that end up with like spaghetti code. And I could see the appeal of this like structured way of defining the workflow of my code. And it's really neat that like just from your code, just from your type hints, you can get out a mermaid diagram that defines exactly what can go and happen.

Swyx [00:14:00]: Right. Yeah. You do have a very neat implementation of sort of inferring the graph from type hints, I guess, is what I would call it. I think the question always is... I have gone back and forth. I used to work at Temporal, where we would actually spend a lot of time complaining about graph-based workflow solutions like AWS step functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is like a little bit of a nice compromise. Like it looks like normal Pythonic code. But you just have to keep in mind what the type hints actually mean. 
And that's what we do with the quote unquote magic that the graph construction does.

Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically call a node, get a node back, call that node, get a node back, call that node. If you get an end, you're done. We will add in soon support for, well, basically storage, so that you can store the state between each node that's run. And then the idea is you can then distribute the graph and run it across computers. And also, I mean, the other weird, the other bit that's really valuable is across time. Because it's all very well if you look at like lots of the graph examples that like Claude will give you. If it gives you an example, it gives you this lovely enormous mermaid chart of like the workflow, for example, managing returns if you're an e-commerce company. But what you realize is some of those lines are literally one function calls another function. And some of those lines are wait six days for the customer to print their like piece of paper and put it in the post. And if you're writing like your demo project or your like proof of concept, that's fine, because you can just say, and now we call this function. But when you're building in real life, that doesn't work. And now how do we manage that concept to basically be able to start somewhere else in our code? Well, this graph implementation makes it incredibly easy, because you just pass the node that is the start point for carrying on the graph and it continues to run. 
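The scheme Samuel describes, dataclass nodes whose return annotations define the edges, can be sketched with the stdlib alone (a simplified illustration, not Pydantic AI's actual API; all class and function names here are invented):

```python
from dataclasses import dataclass
from typing import get_type_hints

@dataclass
class End:
    result: str

@dataclass
class Draft:
    text: str
    def run(self) -> "Review":  # the return hint doubles as the edge
        return Review(self.text.title())

@dataclass
class Review:
    text: str
    def run(self) -> "End":
        return End(self.text + "!")

def edges(*nodes):
    """Recover the graph from return annotations; no string-based add_edge."""
    return {n.__name__: get_type_hints(n.run)["return"].__name__ for n in nodes}

def run_graph(node):
    """Call a node, get a node back, call that node; stop when you get an End."""
    while not isinstance(node, End):
        node = node.run()
    return node.result

print(edges(Draft, Review))             # {'Draft': 'Review', 'Review': 'End'}
print(run_graph(Draft("hello world")))  # Hello World!
```

Because any node can be passed as the start point, resuming "six days later" is just `run_graph(Review(...))` with the node's state loaded from wherever it was stored.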
So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs.

Swyx [00:16:07]: You say imagine, but like right now, can Pydantic AI actually resume, you know, six days later, like you said, or is this just like a theoretical thing we'll get someday?

Samuel [00:16:16]: I think it's basically Q&A. So there's an AI that's asking the user a question, and effectively you then call the CLI again to continue the conversation. And it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes; that we're going to add soon. But like the rest of it is basically there.

Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now and obviously Instructor, and now you're going into sort of the more like orchestrated things like Airflow, Prefect, Dagster, those guys.

Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys, and Temporal have the same investors as us. And I'm sure that my investor Bogomil would not be too happy if I was like, oh yeah, by the way, as well as trying to take on Datadog, we're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that right now, at least. We're, you know, we're just building a Python library. And like what's crazy about our graph implementation is, sure, there's a bit of magic in like introspecting the return type, you know, extracting things from unions, stuff like that. But like the actual calls, as I say, it's literally call a function and get back a thing and call that. It's like incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out. 
We've had a slew of people joining our Slack over the last few days and saying, tell me how good Pydantic AI is. How good is Pydantic AI versus Langchain? And I refuse to answer. That's your job to go and find that out. Not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are.

Swyx [00:17:52]: Bogomil was my board member when I was at Temporal. And I think, I think just generally, also having been a workflow engine investor and participant in this space, it's a big space. Like everyone needs different functions. I think the one thing that I would say, like yours, you know, as a library, you don't have that much control over the infrastructure. I do like the idea that each new agent, or whatever unit of work you call it, should spin up in its own sort of isolated boundary. Whereas yours, I think, everything runs in the same process. But you ideally want to sort of spin out its own little container of things.

Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now, right? As in theory, you're just like, as long as you can serialize the calls to the next node, you just have to, all of the different containers basically have to have the same code. I mean, I'm super excited about Cloudflare Workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring that right now, because I'm super excited about that as a like compute level for some of this stuff, where exactly what you're saying, basically, you can run everything as an individual, like, worker function and distribute it. And it's resilient to failure, et cetera, et cetera.

Swyx [00:19:08]: And it spins up like a thousand instances simultaneously. You know, you want it to be sort of truly serverless at once. 
Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get you in front of the line. Especially.

Samuel [00:19:19]: I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome. But because I use it the whole time, I then get frustrated. So, yeah, I'm sure I will. I will. I will get there soon.

Swyx [00:19:32]: There's a side tangent on Cloudflare. Is Python supported at full? I actually wasn't fully aware of what the status of that thing is.

Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser in WebAssembly, is supported now by Cloudflare. They basically, they're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular Pydantic. Because with these workers, where you can have thousands of them on a given metal machine, you don't want to have a difference. You basically want to be able to have shared memory for all the different Pydantic installations, effectively. That's the thing they're working out. But Hood, who's my friend, who is the primary maintainer of Pyodide, works for Cloudflare. And that's basically what he's doing, is working out how to get Python running on Cloudflare's network.

Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles to WebAssembly. Yeah. So maybe there's a way that you'd build... You have just a different build of Pydantic and that ships with whatever your distro for Cloudflare Workers is.

Samuel [00:20:36]: Yes, that's exactly what... So Pyodide has builds for Pydantic Core and for things like NumPy and basically all of the popular binary libraries. Yeah. It's just basic. And you're doing exactly that, right? You're using Rust to compile to WebAssembly and then you're calling that shared library from Python. 
And it's unbelievably complicated, but it works. Okay.

Swyx [00:20:57]: Staying on graphs a little bit more, and then I wanted to go to some of the other features that you have in Pydantic AI. I see in your docs there are sort of four levels of agents. There's single agents, there's agent delegation, programmatic agent handoff. That seems to be what OpenAI Swarm would be like. And then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go?

Samuel [00:21:21]: Yeah, roughly. Okay.

Swyx [00:21:22]: You had some expression around OpenAI Swarm. Well.

Samuel [00:21:25]: And indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what Swarm would become if it was production ready. So, yeah. I mean, like, yeah, which makes sense. Awesome. Yeah. I mean, in fact, it was specifically asking how can we give people the same feeling that they were getting from Swarm that led us to go and implement graphs. Because my, like, "just call the next agent with Python code" was not a satisfactory answer to people. So it was like, okay, we've got to go and have a better answer for that, and that's what led us to graphs. Yeah.

Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? So the way that I would phrase this is, I think Anthropic did a very good public service, and also kind of surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at. Yeah.

Samuel [00:22:24]: I'm trying to work it out. But yes, I think so.

Swyx [00:22:26]: Tell me if you're not. 
Yeah, I mean, that was the first, I think, authoritative view of what kinds of graphs exist in agents, giving each of them a name so that everyone is on the same page. So I'm just curious if you have community names, or top five patterns of graphs.

Samuel [00:22:44]: I don't have top five patterns of graphs. I would love to see what people are building with them. But it's only been a couple of weeks. And part of the point is that because they're relatively unopinionated about what you can go and do with them, they don't have the structure to have specific names, as much as perhaps some other systems do. I think what our agents are, which has a name and I can't remember what it is, is basically this system of: decide what tool to call, go back to the center, decide what tool to call, go back to the center, and then exit. That's one form of graph, and, as I say, our agents are effectively one implementation of a graph, which is why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these predefined graph names or graph structures, or whether it's just like, yep, I built a graph, or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.

Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh yeah, everything's a graph. And then they probably over-rotate and go too far into graphs. And then they have to learn a whole bunch of DSLs. And then they're like, actually, I didn't need that, I need this. And they scale back a little bit.

Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet.
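[Editor's note] The "decide what tool to call, go back to the center, exit" loop Samuel describes can be sketched in plain Python. This is an illustrative sketch only, not Pydantic AI's API; the model here is a stand-in function and all names are invented.

```python
# Minimal sketch of the tool-calling agent loop: the "center" decides on a
# tool or a final answer, tools run, control returns to the center, exit.

def fake_model(messages):
    """Pretend LLM: asks for a tool once, then produces a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": (2, 3)}
    return {"final": f"The answer is {messages[-1]['content']}"}

TOOLS = {"add": lambda a, b: a + b}

def run_agent(prompt):
    messages = [{"role": "user", "content": prompt}]
    while True:
        decision = fake_model(messages)
        if "final" in decision:          # the exit node of the graph
            return decision["final"]
        result = TOOLS[decision["tool"]](*decision["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent("What is 2 + 3?"))  # The answer is 5
```

Viewed this way, the agent really is one fixed graph: a decision node, a fan of tool nodes, and an exit edge.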
But yeah.

Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compound AI systems. This is the Gartner world of things, where they need some kind of industry terminology to sell it to enterprises. I don't know if you know about any of that.

Samuel [00:24:24]: I haven't. I probably should, because I should probably get better at selling to enterprises. But no, not right now.

Swyx [00:24:29]: The argument is really that instead of putting everything in one model, you have more control, and maybe more observability, if you break everything out into composing little models and chaining them together. And obviously, then you need an orchestration framework to do that.

Samuel [00:24:47]: Yeah, and it makes complete sense. One of the things we've seen with agents is that they work well when they work well. But even if you have the observability through LogFire so that you can see what was going on, if you don't have a nice hook point to say, hang on, this has all gone wrong, you have a relatively blunt instrument of basically erroring when you exceed some kind of limit. What you need to be able to do is effectively iterate through these runs so that you can have your own control flow, where you're like, okay, we've gone too far. And that's one of the neat things about our graph implementation: you can basically call next in a loop rather than just running the full graph, and therefore you have this opportunity to break out of it. But basically it's the same point: if you have too big a unit of work, whether or not it involves GenAI, though obviously it's particularly problematic in GenAI, you only find out afterwards, when you've spent quite a lot of time and/or money, that it's gone off and done the wrong thing.

Swyx [00:25:39]: Oh, one thing to drop on this.
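[Editor's note] The "call next in a loop rather than running the full graph" idea can be sketched with a generator: the run yields one node at a time, so the caller owns the control flow and can bail out. Illustrative only; this is not Pydantic AI's graph API, and the node names are invented.

```python
# A graph run as a generator: each yield hands control back to the caller,
# who can apply their own "we've gone too far" stopping rule.

def graph_run(start, edges):
    node = start
    while node is not None:
        yield node              # caller sees each step as it happens
        node = edges[node]      # follow the edge to the next node

edges = {"plan": "call_llm", "call_llm": "check", "check": None}
steps = []
for node in graph_run("plan", edges):
    if len(steps) >= 2:         # our own control flow: stop early
        break
    steps.append(node)

print(steps)  # ['plan', 'call_llm']
```

The point is that the budget check lives in the caller's loop, not buried inside the framework as a hard error.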
We're not going to resolve this here, but I'll drop this and then we can move on to the next thing. This is the common way that we developers talk about this. And then the machine learning researchers look at us and laugh and say, that's cute. And then they just train a bigger model and they wipe us out in the next training run. So I think there's a certain amount of: we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space we don't really discuss that, because I think AGI is kind of this hand-wavy concept that isn't super relevant. But I think we have to respect it. For example, you could do chain of thought with graphs, and you could manually orchestrate a nice little graph that does: reflect, think about whether you need more inference-time compute, you know, that's the hot term now, and then think again, and scale that up. Or you could train Strawberry and DeepSeek R1. Right.

Samuel [00:26:32]: I saw someone saying recently that they were really optimistic about agents because models are getting faster exponentially. And it took a certain amount of self-control not to point out that it isn't exponential. But my main point was: if models are getting faster as quickly as you say they are, then we don't need agents, and we don't really need any of these abstraction layers. We can just give our model access to the Internet, cross our fingers and hope for the best. Agents, agent frameworks, graphs, all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less you trust them, the more you need to give them a script to go through.
Whereas, you know, if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high-net-worth banking, you just employ people who you think are going to be charming to other rich people and set them off to go and have coffee with people. Right. And the same is true of models. The more intelligent they are, the less we need to structure what they go and do and constrain the routes they take.

Swyx [00:27:42]: Yeah, agree with that. So I'm happy to move on. So the other parts of Pydantic AI that are worth commenting on, and this is my last rant, I promise. Obviously, every framework needs to do its sort of model adapter layer, which is: oh, you can easily swap from OpenAI to Claude to Grok. You also have Google GLA, which I didn't really know about until I saw it in your docs, which is the Generative Language API. I assume that's AI Studio?

Samuel [00:28:13]: Yes. Google don't have good names for it. Vertex is very clear, and Vertex is fine. GLA seems to be the API that some of the things use, although it returns 503 about 20% of the time.

Swyx [00:28:28]: I agree with that.

Samuel [00:28:29]: So, again, another example of where I think we go the extra mile in terms of engineering: on every commit, at least every commit to main, we run tests against the live models. Not lots of tests, but a handful of them. And we had a point last week where GLA was failing every single run; one of its tests would fail. I think we might even have commented that one out at the moment. So all of the models fail more often than you might expect, but that one seems particularly likely to fail.
But Vertex is the same API, but much more reliable.

Swyx [00:29:01]: My rant here is that versions of this appear in Langchain, and every single framework has to have its own little version of it. I would put to you, and this can be agree-to-disagree, that this is not needed in Pydantic AI. I would much rather you adopt a layer like LiteLLM, or, what's the other one in JavaScript, Portkey. That's their job. They focus on that one thing and they normalize APIs for you. All new models are automatically added, and you don't have to duplicate this inside of your framework. So for example, if I wanted to use DeepSeek, I'm out of luck because Pydantic AI doesn't have DeepSeek yet.

Samuel [00:29:38]: Yeah, it does.

Swyx [00:29:39]: Oh, it does. Okay, I'm sorry. But you know what I mean? Should this live in your code, or should it live in a layer that's kind of your API gateway, a defined piece of infrastructure that people have?

Samuel [00:29:49]: I think if a company who are well known, who are respected by everyone, had come along at the right time, maybe a year and a half ago, and said, we're going to be the universal AI layer, that would have been a credible thing to do. I've heard varying reports of LiteLLM, is the truth. And it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again, I haven't looked into it in great detail, part of their business model is proxying the request through their own system to do the generalization. That would be an enormous put-off to an awful lot of people. Honestly, the truth is, I don't think it is that much work unifying the models. I get where you're coming from, I kind of see your point. But I think the truth is that everyone is centralizing around OpenAI's API; it's the one to target. So DeepSeek support it. Grok more or less support it. Ollama also does it.
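[Editor's note] "Everyone is centralizing around OpenAI's API" in practice means that switching providers is usually just a different base URL and key, with the same request shape. A toy sketch below builds the request body with the stdlib only; the URLs and model names are placeholders and no request is actually sent.

```python
# Building an OpenAI-style chat-completions request for two different
# providers: the endpoint changes, the body shape does not.
import json

def chat_request(base_url, model, prompt):
    return {
        "url": f"{base_url}/chat/completions",
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

openai_req = chat_request("https://api.openai.com/v1", "gpt-4o", "hi")
other_req = chat_request("https://example-provider.test/v1", "some-model", "hi")

# Same messages structure under both endpoints:
print(json.loads(openai_req["body"])["messages"] ==
      json.loads(other_req["body"])["messages"])  # True
```

That convergence is why "unifying the models" is less work than it sounds: most of the surface area is already shared.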
I mean, if there is that universal library right now, it's more or less the OpenAI SDK. And it's very high quality. It's well type-checked. It uses Pydantic, so I'm biased. But I think it's pretty well respected anyway.

Swyx [00:30:57]: There's different ways to do this. Because also, it's not just about normalizing the APIs. You have to do secret management and all that stuff.

Samuel [00:31:05]: Yeah. And there's also Vertex and Bedrock, which, to one extent or another, effectively host multiple models, but they don't unify the API. They do unify the auth, as I understand it. Although we're halfway through doing Bedrock, so I don't know it that well. But they're kind of weird hybrids, because they support multiple models, but, like I say, the auth is centralized.

Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. We could discuss all this all day. There's a lot of APIs, I agree.

Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.

Alessio [00:31:39]: And I guess the other side of routing models and picking models is evals. How do you actually figure out which one you should be using? First of all, you have very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. My favorite Ruby library is VCR, because it just lets me store the HTTP requests and replay them. That part I'll kind of skip. I think you have this TestModel where, just through Python, it tries to figure out what the model might respond without actually calling the model. And then you have the FunctionModel, where people can customize outputs. Any other fun stories from there? Or is it just what you see is what you get, so to speak?

Samuel [00:32:18]: On those two, I think what you see is what you get.
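[Editor's note] The mocking idea Alessio raises, swapping the real model for a plain Python function so unit tests never touch the network, can be sketched like this. It mimics the spirit of Pydantic AI's TestModel/FunctionModel but is not their API; every name here is invented.

```python
# A function-backed mock model: the "model" is just a callable, so tests
# are deterministic and offline.

class FunctionBackedModel:
    def __init__(self, fn):
        self.fn = fn

    def complete(self, prompt):
        return self.fn(prompt)

def agent_answer(model, question):
    # Stand-in for agent logic that post-processes a model response.
    return model.complete(question).upper()

mock = FunctionBackedModel(lambda p: f"echo: {p}")
print(agent_answer(mock, "ping"))  # ECHO: PING
```

Because the agent only depends on the `complete` interface, the same code path runs in tests and in production with the real model plugged in.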
On the evals, I think: watch this space. It's something that, again, I was somewhat cynical about for some time, and I still have my cynicism about some of it. Well, it's unfortunate that so many different things are called evals; it would be nice if we could agree what they are and what they're not. But look, I think it's a really important space. I think it's something that we're going to be working on soon, both in Pydantic AI and in LogFire, to try and support better, because it's an unsolved problem.

Alessio [00:32:45]: Yeah, you do say in your docs that anyone who claims to know for sure exactly how your evals should be defined can safely be ignored.

Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.

Alessio [00:32:56]: Exactly. I was like, we need a snapshot of this today. So let's talk about evals. There's kind of the vibe-check evals, which is what you do when you're building, because you cannot really test it that many times to get statistical significance. And then there's the production evals. So you also have LogFire, which is your observability product, which I tried before; it's very nice. What are some of the learnings you've had from building an observability tool for LLMs? And as people think about evals, what are the right things to measure? What's the right number of samples you need to actually start making decisions?

Samuel [00:33:33]: The truth is, I'm not the best person to answer that. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back-of-the-envelope statistics calculations to work out that having 30 examples probably gets you most of the statistical value of having 200, for, by definition, 15% of the work. But the exact "how many examples do you need?"
For example, that's a much harder question to answer, because it's deep within how models operate. In terms of LogFire: one of the reasons we built LogFire the way we have, allowing you to write SQL directly against your data and trying to build the powerful fundamentals of observability, is precisely because we know we don't know the answers. So allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it is, we think, valuable. Because even if we come along and offer you an evals framework on top of LogFire, it won't be right in all regards. We want people to be able to go and innovate, and being able to write their own SQL, connect to the API, and effectively query the data like it's a database is what allows them to do that. It's what allows us to do it as well: we do a bunch of testing of what's possible by writing SQL directly against LogFire, as any user could. I think the other really interesting thing going on in observability is that OpenTelemetry is centralizing around semantic attributes for GenAI. It's a relatively new project, and a lot of it is still being added at the moment. But the basic idea is that they unify how SDKs and/or agent frameworks send observability data to any OpenTelemetry endpoint. Having that unification allows us to compare different libraries and different models much better. That stuff is at a very early stage of development. One of the things we're going to be working on pretty soon is, I suspect, making Pydantic AI the first agent framework that implements those semantic attributes properly. Because, again, we have the control, and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability.
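[Editor's note] "Write SQL directly against your data" is easy to picture with a toy spans table. The sketch below uses sqlite3 from the standard library; the schema is invented for illustration and is not LogFire's actual schema.

```python
# Querying span data with plain SQL: compute per-span durations and sort.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spans (name TEXT, start_ms INTEGER, end_ms INTEGER)")
conn.executemany(
    "INSERT INTO spans VALUES (?, ?, ?)",
    [("llm_call", 0, 1200), ("tool_call", 1200, 1250)],
)
rows = conn.execute(
    "SELECT name, end_ms - start_ms AS duration_ms "
    "FROM spans ORDER BY duration_ms DESC"
).fetchall()
print(rows)  # [('llm_call', 1200), ('tool_call', 50)]
```

The design point Samuel makes is exactly this: if users can run arbitrary SQL like the above, they can build their own eval and analysis queries without waiting for the vendor to ship a framework.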
With the exception of Langchain, where they have their own observability platform but chose not to go down the OpenTelemetry route. So they're plowing their own furrow, and they're even further away from standardization.

Alessio [00:35:51]: Can you maybe just give a quick overview of how OTel ties into AI workflows? There's the question of what a trace and a span are. Is a span an LLM call? Is it the agent, the broader thing you're tracking? How should people think about it?

Samuel [00:36:06]: Yeah, so they have a PR, which I think may have now been merged, from someone at IBM talking about remote agents and trying to support this concept of remote agents within GenAI. I'm not particularly compelled by that, because I don't think that's by any means the common use case, but I suppose it's fine for it to be there. The majority of the stuff in OTel is basically defining how you would instrument a given call to an LLM: for the actual LLM call, what data you would send to your telemetry provider, and how you would structure it. Apart from this slightly odd stuff on remote agents, most of the agent-level consideration is not yet decided, effectively, so there's a bit of ambiguity. Obviously, what's good about OTel is you can in the end send whatever attributes you like. But yeah, there's quite a lot of churn in that space around exactly how we store the data. I think one of the most interesting things, though, is the sensitivity question. Traditionally, everyone would say our observability data is very important, we must keep it safe. But actually, companies work very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the SQL might be sent to the observability provider, but none of the parameters would.
It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist, because it's all just mixed up in the text. If you have that same patient asking an LLM what drug they should take, or how to stop smoking, you can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be basically a different order of magnitude to what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog, but that would be seen as a mistake. Whereas in GenAI, a lot of sensitive data is going to be sent by design. And I think that's why companies like LangSmith are trying hard to offer observability on-prem: there's a bunch of companies who are happy for Datadog to be cloud-hosted, but want self-hosting for this observability stuff with GenAI.

Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have the number of tokens, you have the context, you're just storing everything. And then you're going to offer self-hosting for the platform, basically?

Samuel [00:38:23]: Yeah. So we have scrubbing roughly equivalent to what the other observability platforms have. So if we see "password" as the key, we won't send the value. But like I said, that doesn't really work in GenAI. So we're accepting that we're going to have to store a lot of data, and then we'll offer self-hosting for those people who can afford it and who need it.

Alessio [00:38:42]: And this is, I think, the first time that most of the workload's performance depends on a third party. If you're looking at Datadog data, usually it's your app that is driving the latency and the memory usage and all of that.
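[Editor's note] Key-based scrubbing, as Samuel describes it ("if we see password as the key, we won't send the value"), can be sketched in a few lines. This is a sketch of the general technique, not LogFire's implementation, and, as the discussion above notes, it inherently cannot catch PII embedded in free-form LLM text.

```python
# Redact attribute values whose key looks sensitive before export.

SENSITIVE_KEYS = {"password", "api_key", "card_number"}

def scrub(attributes):
    return {
        k: "[REDACTED]" if k.lower() in SENSITIVE_KEYS else v
        for k, v in attributes.items()
    }

print(scrub({"user": "alice", "password": "hunter2"}))
# {'user': 'alice', 'password': '[REDACTED]'}
```

A prompt like "my card number is 4111..." sails straight through this, which is exactly why GenAI observability data ends up so much more sensitive than traditional telemetry.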
Here you're going to have spans that maybe take a long time to complete because the GLA API is not working, or because OpenAI is overwhelmed. Do you do anything there, since the provider is almost the same across customers? Are you trying to surface these things for people and say, hey, this was a very slow span, but actually all customers using OpenAI right now are seeing the same thing, so maybe don't worry about it?

Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTel. So we send information at the beginning of a span, as well as when it finishes. By default, OTel only sends you data when the span finishes. So if you think about a request which might take 20 seconds, even if some of the intermediate spans finished earlier, you can't place them on the page until you get the top-level span. And so if you're using standard OTel, you can't show anything until those requests are finished. When requests take a few hundred milliseconds, that doesn't really matter. But when you're doing GenAI calls, or when you're running a batch job that might take 30 minutes, that latency of not being able to see the span is crippling to understanding your application. So we do a bunch of slightly complex stuff to basically send data about a span as it starts, which is closely related.

Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's the OpenLLMetry project, which doesn't really roll off the tongue. But how do you see the future of these kinds of tools? Is everybody going to have to build? Why does everybody want to build?
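[Editor's note] Samuel's "send data when the span starts, not only when it ends" idea can be sketched as a span that emits two events, so a UI can place a long-running span on the page immediately. The event shapes here are invented for illustration; real OTel spans work differently.

```python
# Two events per span: a "start" event sent immediately and an "end" event
# sent on completion, instead of a single record at the end.

events = []

class Span:
    def __init__(self, name, clock):
        self.name, self.clock = name, clock

    def __enter__(self):
        events.append({"type": "start", "name": self.name, "t": self.clock()})
        return self

    def __exit__(self, *exc):
        events.append({"type": "end", "name": self.name, "t": self.clock()})

ticks = iter([0, 30_000])               # pretend a 30-second batch job
with Span("batch_job", lambda: next(ticks)):
    pass                                # the slow work happens here

print([e["type"] for e in events])      # ['start', 'end']
```

With only end-of-span export, nothing about `batch_job` would be visible until t=30,000ms; with the start event, a dashboard can show the span as in-flight from t=0.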
They want to build their own open source observability thing to then sell?

Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen, it's going to live inside OTel, and we might help with it. But we're a tiny team; we don't have time to go and do all of that work. So OpenLLMetry: interesting project. But I suspect eventually most of that instrumentation of the big SDKs will live, like I say, inside the main OpenTelemetry repo. What happens at the agent-framework level, what data you basically need at the framework level to get the context, is kind of unclear. I don't think we know the answer yet. But, and I guess this is kind of semi-public, I was on the OpenTelemetry call last week talking about GenAI, and there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of Langchain, where it's not natively implemented. And obviously they're having quite a tough time. And I realized, I hadn't really realized this before, how lucky we are to primarily be talking about our own agent framework, where we have the control, rather than trying to go and instrument other people's.

Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like it's merged into main OTel. What should people know about this? I had never heard of it before.

Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there's some unknowns around how you send the messages that go back and forth, which is kind of the most important part; it's the most important thing of all. And that has moved out of attributes and into OTel events. OTel events, in turn, are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on.
I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out for instrumentation of. So I'm pleasantly surprised at how fast they're moving, but it makes sense.

Swyx [00:42:25]: I'm just browsing through the specification. I can already see that this basically bakes in whatever the previous paradigm was. So now they have gen_ai.usage.prompt_tokens and gen_ai.usage.completion_tokens, and obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in, or sort of reifying, things that you think are important today, but it's not a super foolproof way of doing this for the future.

Samuel [00:42:54]: Yeah. I mean, what's neat about OTel is you can always go and send another attribute, and that's fine. It's just that there are a bunch that are agreed on. But I would say, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer: this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind, because you're relying on someone else to keep things up to date.

Swyx [00:43:14]: Or you fall behind because you've got other things going on.

Samuel [00:43:17]: Yeah, that's fair.

Swyx [00:43:19]: Any other observations just about building LogFire, actually? Let's talk about this. So you announced LogFire. I was only familiar with LogFire because of your Series A announcement; I actually thought you were making a separate company. I remember some amount of confusion when that came out. So to be clear, it's Pydantic LogFire, and it's one company with kind of two products: an open source thing and an observability thing, correct? Yeah. I was just curious, any learnings building LogFire?
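[Editor's note] The semantic-convention attributes discussed above amount to an agreed set of key names on a span. The sketch below builds such an attribute dict in plain Python; the `gen_ai.*` keys reflect an early snapshot of the OTel GenAI conventions that has been churning (reasoning tokens, for instance, weren't covered), and the `myapp.*` key is an invented example of the "you can always send another attribute" escape hatch.

```python
# Assemble GenAI span attributes: agreed-upon keys plus custom extras.

def genai_attributes(model, prompt_tokens, completion_tokens, top_p):
    attrs = {
        "gen_ai.request.model": model,
        "gen_ai.usage.prompt_tokens": prompt_tokens,
        "gen_ai.usage.completion_tokens": completion_tokens,
        "gen_ai.request.top_p": top_p,
    }
    # OTel lets you attach attributes beyond the agreed set, e.g. for
    # things the convention hasn't standardized yet:
    attrs["myapp.reasoning_tokens"] = 0
    return attrs

a = genai_attributes("gpt-4o", 12, 34, 1.0)
print(a["gen_ai.usage.prompt_tokens"])  # 12
```

This is Swyx's reification worry in miniature: the agreed keys encode today's paradigm, and everything newer has to live in custom attributes until the spec catches up.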
So the classic question is: do you use ClickHouse? Is that the standard persistence layer? Any learnings doing that?

Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension for analytical databases, and then moved off Timescale onto DataFusion. And we're basically now building, well, it's DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one, I'll say that. But we've got to the right one in the end. I think we could have realized earlier that Timescale wasn't right. ClickHouse and Timescale both taught us a lot, and we're in a great place now. But yeah, it's been a real journey on the database in particular.

Swyx [00:44:28]: Okay. So, as a database nerd, I have to double-click on this, right? ClickHouse is supposed to be the ideal backend for something like this. And then moving from ClickHouse to Timescale is a counterintuitive move that I didn't expect, because Timescale is an extension on top of Postgres, not super meant for high-volume logging. But yeah, tell us about those decisions.

Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday, said ClickHouse doesn't have good support for JSON, and got roundly stepped on, because apparently it does now. So they've obviously gone and built proper JSON support. But back when we were trying to use it, I guess a year ago or a bit more, everything had to be a map, and maps are a pain for looking up JSON-type data. And all these attributes, everything you're talking about there in terms of the GenAI stuff: you can choose to make them top-level columns if you want, but the simplest thing is just to put them all into a big JSON pile. And that was a problem with ClickHouse.
Also, ClickHouse had some really ugly edge cases. By default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second, because it compared intervals just by the number, not the unit. And I complained about that a lot. First they changed it to raise an error and say you have to use the same unit; then I complained a bit more, and as I understand it now, they convert between units. But stuff like that, when a lot of what you're doing is comparing the durations of spans, was really painful. Also things like: you can't subtract two datetimes to get an interval, you have to use the date-sub function. But the fundamental thing is that because we want our end users to write SQL, the quality of the SQL, how easy it is to write, matters way more to us than it does if you're building a platform on top, where your developers write the SQL once, and once it's written and working, you don't mind too much. So I think that's one of the fundamental differences. The other problem that I have with both ClickHouse and Timescale is that the ultimate architecture, the Snowflake-style architecture of binary data in object store queried with some kind of cache nearby, is something they both have, but it's closed source, and you only get it if you use their hosted versions. And so even if we had got through all the problems with Timescale or ClickHouse, they would want to be taking their 80% margin, which would basically leave us less space for margin. Whereas DataFusion is properly open source; all of that same tooling is open source. And for us, as a team of people with a lot of Rust expertise, DataFusion, which is implemented in Rust, is something we can literally dive into and go and change.
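[Editor's note] The interval-comparison pitfall Samuel describes (2 nanoseconds comparing as "longer" than 1 second because only the numbers were compared) is easy to demonstrate, along with the fix: normalize to a single unit before comparing. Toy illustration, not ClickHouse code.

```python
# Comparing durations: naive number-only comparison vs. unit-normalized.

UNIT_NS = {"ns": 1, "ms": 1_000_000, "s": 1_000_000_000}

def to_ns(value, unit):
    """Normalize a duration to nanoseconds before any comparison."""
    return value * UNIT_NS[unit]

# The buggy behaviour: comparing 2 (ns) against 1 (s) by number alone.
print(2 > 1)                            # True, which is wrong for durations
# The correct behaviour after normalizing units:
print(to_ns(2, "ns") > to_ns(1, "s"))   # False
```

For a spans UI where almost every query sorts or filters on durations, getting this wrong by default is exactly the kind of edge case that makes user-facing SQL painful.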
So, for example, I found that there were some slowdowns in DataFusion's string comparison kernel for doing string-contains. And it's just Rust code, so I could go and rewrite the string comparison kernel to be faster. Or, for example, DataFusion, when we started using it, didn't have JSON support. Obviously, as I've said, that's something we needed, and I was able to go and implement it in a weekend using the JSON parser that we built for Pydantic Core. So DataFusion is, for us, the perfect mixture: a toolbox to build a database with, not a database. And we can go and implement stuff on top of it in a way that would be much harder in Postgres or in ClickHouse. I mean, ClickHouse would be easier because it's relatively modern C++, but as a team of people who are not C++ experts, that's much scarier than DataFusion for us.

Swyx [00:47:47]: Yeah, that's a beautiful rant.

Alessio [00:47:49]: That's funny. Most people don't think they have agency over these projects. They're kind of like, oh, should I use this or should I use that? They're not really asking, what should I pick so that I can contribute the most back to it? But you obviously have an open-source-first mindset, so that makes a lot of sense.

Samuel [00:48:05]: I think if we were a better startup, faster moving and headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long term we're in a better place for having worked with DataFusion. We're quite engaged now with the DataFusion community; Andrew Lamb, who maintains DataFusion, is an advisor to us. We're in a really good place now. But yeah, it definitely slowed us down relative to just building on ClickHouse and moving as fast as we could.

Swyx [00:48:34]: Okay, we're about to zoom out and do pydantic.run and all the other stuff.
But my last question on LogFire is really: at some point you run out of community goodwill, the "oh, I use Pydantic, I love Pydantic, I'm going to use LogFire" effect. Then you start entering the territory of the Datadogs, the Sentrys and the Honeycombs. So where are you really going to spike here? What's the differentiator?

Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming that there were people talking about web observability, and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term, because all observability is cloud-first. The same is going to happen to GenAI. And so whether you're trying to compete with Datadog or with Arize and LangSmith, you've got to do general-purpose observability with first-class support for AI. And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction, and to be honest, I think Datadog is a much scarier company to compete with than the AI-specific observability platforms. Because in my opinion, and I've also heard this from lots of customers, AI-specific observability where you don't see everything else going on in your app is not actually that useful. Our hope is that we can build the first general-purpose observability platform with first-class support for AI, and that we have this open source heritage of putting developer experience first that other companies haven't had. For all that I'm a fan of Datadog and what they've done: if you search "Datadog logging Python" and you just try, as a non-observability expert, to get something up and running with Datadog and Python, it's not trivial, right? That's something Sentry have done amazingly well.
But there's enormous space in most of observability to do DX better.

Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously, you're MIT licensed, and you don't have any rolling license like Sentry has, where you can only use the open source, one-year-old version of it. Was that a hard decision?

Samuel [00:50:41]: So to be clear: Pydantic and Pydantic AI are MIT licensed and properly open source. And then LogFire, for now, is completely closed source. And in fact, the struggles that Sentry have had with licensing, and the weird pushback the community gives when they take something that's closed source and make it source available, just meant that we avoided that whole subject matter. I think the other way to look at it is, in terms of either headcount or revenue or dollars in the bank, the amount of open source we do as a company — we're up there with the most prolific open source companies, like I say, per head. And so we didn't feel like we were morally obligated to make LogFire open source. We have Pydantic. Pydantic is a foundational library in Python. That and now Pydantic AI are our contribution to open source. And then LogFire is openly for profit, right? As in, we're not claiming otherwise. We're not trying to walk a line of pretending it's open source while really making it hard to deploy so you probably want to pay us. We're trying to be straight: it's something you pay for. We could change that at some point in the future, but it's not an immediate plan.

Alessio [00:51:48]: All right. So the first one I saw, this new — I don't know if it's a product you're building — Pydantic.run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreters for LLMs. I'm an investor in a company called E2B, which is a code sandbox as a service for remote execution. Yeah.
What's the Pydantic.run story?

Samuel [00:52:09]: So Pydantic.run is, again, completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also Pydantic AI. So it doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run Pydantic AI in the browser, see how it works, tweak the prompt, et cetera, et cetera. And we'll have some kind of limit per day of what you can spend on it, or what the spend is. The other thing we wanted to b
In this episode, our Blockworks Research analysts discuss the launch of Trump's memecoin, Solana's boom in activity, and the SIMD 228 proposal. They also dive into the Base ecosystem, ETH blobs, and the Coinbase x Morpho partnership. Thanks for tuning in! As always, remember this podcast is for informational purposes only, and any views expressed by anyone on the show are solely their opinions, not financial advice. -- Resources SIMD 228: Reducing SOL's Inflation: https://app.blockworksresearch.com/flashnotes/simd-228-reducing-sol-s-inflation Base DEX Volume: https://x.com/smyyguy/status/1881417994322214985 Coinbase x Morpho: https://x.com/MorphoLabs/status/1879903267146309805 -- SKALE is the next evolution in Layer 1 blockchains with a gas-free invisible user experience, instant finality, high speed, and robust security. SKALE is built different as it allows for limitless scalability and has already saved its 45 Million users over $9 Billion in gas fees. SKALE is high-performance and cost-effective, making it ideal for compute-intensive applications like AI, gaming, and consumer-facing dApps. Learn more at skale.space and stay up to date with the gas-free invisible blockchain on X at @skalenetwork -- Ledger, the global leader in digital asset security, proudly sponsors 0xResearch! As Ledger celebrates 10 years of securing 20% of global crypto, it remains the top choice for securing your assets. Buy a LEDGER™ device now and build confidently, knowing your precious tokens are safe. Buy now on https://shop.ledger.com/?r=1da180a5de00. -- Join us at DAS NYC 2025! 
Use code 0x10 for a 10% discount: https://blockworks.co/event/digital-asset-summit-2025-new-york -- Follow Carlos: https://x.com/0xcarlosg Follow Dan: https://x.com/smyyguy Follow Marc: https://x.com/marcarjoon Follow Danny: https://x.com/defi_kay_ Follow Blockworks Research: https://x.com/blockworksres Subscribe on YouTube: https://bit.ly/3foDS38 Subscribe on Apple: https://apple.co/3SNhUEt Subscribe on Spotify: https://spoti.fi/3NlP1hA Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ Join the 0xResearch Telegram group: https://t.me/+z0H6y2bS-dllODVh -- Timestamps: (0:00) Introduction (1:39) Trump's Memecoin Launch (11:58) Ads (SKALE, Ledger) (12:38) Solana's Immense Rise in Activity (28:41) Solana's Liveness Struggles (35:56) Prioritizing Token Holders vs Stakers (42:07) Understanding SIMD 228 (47:56) Ads (SKALE, Ledger) (49:12) The Base Ecosystem (57:54) Where Are ETH Blobs Going? (1:07:01) Coinbase X Morpho Partnership (1:18:51) Closing Thoughts -- Check out Blockworks Research today! Research, data, governance, tokenomics, and models – now, all in one place Blockworks Research: https://www.blockworksresearch.com/ Free Daily Newsletter: https://blockworks.co/newsletter -- Disclaimer: Nothing said on 0xResearch is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. Boccaccio, Dan, and our guests may hold positions in the companies, funds, or projects discussed.
Gm! This week, we're joined by Tushar Jain to discuss his most recent Solana improvement proposal to reduce Solana's inflation & switch to a market-based inflation rate. We deep dive into Solana's current emission schedule, how this could be improved, Solana the asset vs the network & another SIMD proposal Tushar is preparing to announce. Enjoy! Find out more about Tushar's inflation proposal here: https://x.com/TusharJain_/status/1879942468726079773 -- Follow Tushar: https://x.com/TusharJain_ Follow Jack: https://x.com/whosknave Follow Mert: https://x.com/0xMert_ Follow Lightspeed: https://twitter.com/Lightspeedpodhq Subscribe to the Lightspeed Newsletter: https://blockworks.co/newsletter/lightspeed Utilize the Solana Dashboard by Blockworks Research: http://solana.blockworksresearch.com/ -- Ledger, the global leader in digital asset security, proudly sponsors the Lightspeed podcast. As Ledger celebrates 10 years of securing 20% of global crypto, it remains the top choice for securing your Solana assets. Buy a LEDGER™ device now and build confidently, knowing your SOL are safe. Buy now on https://shop.ledger.com/?r=1da180a5de00. -- Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ -- (00:00) Introduction (01:39) Solana's Inflation Problem (07:42) Ledger Ad (08:32) The Role Of Staking (16:46) Solana: The Asset vs The Network (20:24) Multicoin's Solana Inflation Proposal (33:17) The Role Of MEV (37:37) Risks To Reducing Inflation (47:21) Multicoin's Staking Proposal (54:53) Market Based Inflation (57:16) MEV On Solana -- Disclaimers: Lightspeed was kickstarted by a grant from the Solana Foundation. Nothing said on Lightspeed is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. 
Mert, Jack, and our guests may hold positions in the companies, funds, or projects discussed.
Eduardo Madrid joins Phil and Timur. Eduardo talks to us about the Zoo libraries, including his advanced type-erasure library, as well as the SWAR library, which simulates ad-hoc SIMD within a register. We also discuss how he has taken inspiration and cues from the worlds of biology and physics to arrive at new thinking around software development, design and architecture. Show Notes News Qt 6.8 is released "Named Loops" proposal adopted into C - will C++ follow? C++ Online Call for Speakers is open Links The Zoo libraries "C++ Software Design" (book) - Klaus Iglberger Klaus Iglberger's talks on Type Erasure: "A Design Analysis" "The Implementation Details" (Some of) Ed's talks: "Using Integers as Arrays of Bitfields a.k.a. SWAR Techniques" - CppCon 2019 "Rehashing Hash Tables And Associative Containers" - C++ Now 2022 "Empowerment with the C++ Generic Programming Paradigm" - C++ Online 2024
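SWAR ("SIMD Within A Register") treats an ordinary machine word as a vector of smaller lanes and operates on all of them with scalar integer instructions. A minimal sketch of the idea in Python — this is the classic zero-byte test from the bit-twiddling literature, shown for illustration, not code from Ed's Zoo/SWAR library:

```python
# SWAR sketch: test whether any of the 8 byte lanes of a 64-bit word is zero,
# using only scalar integer arithmetic (no per-byte loop).

MASK64 = (1 << 64) - 1
LOW_BITS = 0x0101010101010101   # the low bit of each byte lane
HIGH_BITS = 0x8080808080808080  # the high bit of each byte lane

def has_zero_byte(word: int) -> bool:
    """True if any byte lane of `word` is 0x00.

    (word - 0x01..01) borrows through a lane exactly when that lane is zero;
    masking with ~word & 0x80..80 isolates those lanes. MASK64 keeps Python's
    arbitrary-precision ints behaving like a 64-bit register.
    """
    return ((word - LOW_BITS) & ~word & HIGH_BITS & MASK64) != 0

print(has_zero_byte(0x1122334400556677))  # one zero byte lane -> True
print(has_zero_byte(0x1111111111111111))  # no zero byte lane  -> False
```

The same pattern (subtract, mask, test the lane high bits) underlies many SWAR kernels: once a per-lane condition can be expressed as a bit in each lane, one word-sized operation handles all lanes at once.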
An airhacks.fm conversation with Jonathan Ellis (@spyced) about: discussion of JVector 3 features and improvements, compression techniques for vector indexes, binary quantization vs product quantization, anisotropic product quantization for improved accuracy, indexing Wikipedia example, Cassandra integration, SIMD acceleration with Fused ADC, optimization with Chronicle Map, E5 embedding models, comparison of CPU vs GPU for vector search, implementation details and low-level optimizations in Java, use of Java Panama API and foreign function interface, JVector's performance advantages, memory footprint reduction, integration with Cassandra and Astra DB, challenges of vector search at scale, trade-offs between RAM usage and CPU performance, Eventual Consistency in distributed vector search, comparison of different embedding models and their accuracy, importance of re-ranking in vector search, advantages of JVector over other vector search implementations Jonathan Ellis on twitter: @spyced
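Binary quantization and Hamming distance, as discussed in the episode, can be sketched in a few lines. This is a hedged illustration of the general technique, not JVector's implementation — real indexes pack the bits into word arrays and use SIMD popcount (the step Fused ADC accelerates):

```python
# Binary quantization: keep only the sign bit of each dimension, so a float
# vector collapses to one bit per dimension. Distance between two quantized
# vectors is then a Hamming distance: XOR + popcount.

def binary_quantize(vec):
    """Pack the sign bits of a float vector into a single int."""
    bits = 0
    for i, x in enumerate(vec):
        if x > 0:
            bits |= 1 << i
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two packed vectors."""
    return bin(a ^ b).count("1")

q1 = binary_quantize([0.3, -1.2, 0.8, 0.1])   # -> 0b1101
q2 = binary_quantize([-0.5, -0.7, 0.9, 0.2])  # -> 0b1100
assert hamming(q1, q2) == 1  # the vectors disagree only in dimension 0
```

The compression is extreme (32x versus float32), which is why production systems pair it with a re-ranking pass over full-precision vectors, as mentioned in the episode.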
Fredrik talks to Evan Czaplicki, creator of Elm, about figuring out a good path for yourself. What do you do when you have a job which seems like it would be your dream job, but it turns out to be the wrong thing for you? And how do you escape from that? You can’t put the success of something you build before your own personal and mental health, no matter how right the decision may be for the thing you build. Is there ever a reproducible path? Aren’t most or all successful things in large part a result of their circumstances? Platform languages and productivity languages - which do you prefer? Thoughts on the tradeoffs of when and how to roll things out and when to present ideas. Evan’s development mindset and environment, and the ways it has affected Elm’s design - all the way down to the error messages. Finally, of course, the benefits of country life - out of the radiation of San Francisco. Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi. Links Evan Elm Prezi Guido van Rossum Brendan Eich Bjarne Stroustrup Hindley–Milner type inference Gary Bernhardt Talks by Gary SIMD Standard ML Ocaml Haskell Lambda calculus Algebraic data types Type inference Virtual DOM Webbhuset Dart Safari’s no performance regressions rule Sublime text GHC Nano Emacs Titles The personal aspects A culture clash I wasn’t supposed to be here This numb feeling I’ve never really been to the real world Is this even real? 
The path that Guido did This is you This isn’t for me, and it’s your fault Valuing my own health Reckless indifference A dispute between colleagues A nice solution will come out if you’re patient enough Here’s your error message: good luck Farmer’s disposition These are good years Getting paid in chickens for web development Finding a place
An airhacks.fm conversation with Alfonso Peterssen (@TheMukel) about: Alfonso previously appeared on "#294 LLama2.java: LLM integration with A 100% Pure Java file", discussion of llama2.java and llama3.java projects for running LLMs in Java, performance comparison between Java and C implementations, use of Vector API in Java for matrix multiplication, challenges and potential improvements in Vector API implementation, integration of various LLM models like Mistral, phi, qwen or gemma, differences in model sizes and capabilities, tokenization and chat format challenges across different models, potential for Java Community Process (JCP) standardization of gguf parsing, quantization techniques and their impact on performance, plans for integrating with langchain4j, advantages of pure Java implementations for AI models, potential for GraalVM and native image optimizations, discussion on the future of specialized AI models for specific tasks, challenges in training models with language capabilities but limited world knowledge, importance of SIMD instructions and vector operations for performance optimization, potential improvements in Java's handling of different float formats like float16 and bfloat16, discussion on the role of smaller, specialized AI models in enterprise applications and development tools Alfonso Peterssen on twitter: @TheMukel
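To make the quantization discussion concrete, here is a toy sketch of block-wise symmetric int8 quantization — the general idea behind quantized tensor formats such as gguf's. The function names and the single per-block scale are illustrative assumptions, not the actual gguf layout:

```python
# Block-wise symmetric int8 quantization sketch: each float x is stored as an
# int8 q with a shared per-block scale, x ~= q * scale. Dot products can then
# run over small integers and be rescaled once at the end.

def quantize_q8(block):
    """Map a block of floats to int8 values plus one scale."""
    scale = max(abs(x) for x in block) / 127 or 1.0  # avoid 0 for all-zero blocks
    q = [round(x / scale) for x in block]
    return q, scale

def dot_quantized(qa, sa, qb, sb):
    """Dot product computed in integers, scaled back to float at the end."""
    return sa * sb * sum(a * b for a, b in zip(qa, qb))

a = [0.5, -1.0, 0.25, 0.0]
b = [1.0, 1.0, -1.0, 2.0]
qa, sa = quantize_q8(a)
qb, sb = quantize_q8(b)
exact = sum(x * y for x, y in zip(a, b))
approx = dot_quantized(qa, sa, qb, sb)
assert abs(exact - approx) < 0.01  # small quantization error
```

The integer inner loop is exactly the kind of kernel where the Vector API (or native SIMD) pays off, since many int8 multiplications fit in one register.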
An airhacks.fm conversation with Jonathan Ellis (@spyced) about: discussion of JVector, a Java-based vector search engine, Apache Kudu as an alternative to Cassandra for wide-column databases, FoundationDB as a NoSQL database, explanation of vectors and embeddings in machine learning, different embedding models and their dimensions, the Hamming distance, binary quantization and product quantization for vector compression, the DiskANN algorithm for efficient vector search on disk, optimistic concurrency control in JVector, challenges in implementing academic papers, the Neon database, JVector's performance characteristics and typical database sizes, advantages of Astra DB over Cassandra, separation of compute and storage in cloud databases, JVector's use of Panama and SIMD instructions, the potential for contributions to the JVector project, Upstash's use of JVector for their vector search service, the cutting-edge nature of JVector in the Java ecosystem, the logarithmic performance of JVector for index construction and search, typical search latencies in the 30-50 millisecond range, the young and rapidly evolving field of vector search, the self-contained nature of the JVector codebase Jonathan Ellis on twitter: @spyced
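Product quantization (PQ), mentioned above alongside binary quantization, can be illustrated with a tiny sketch: split each vector into subvectors and encode each one as the index of its nearest centroid in a small per-subspace codebook. The codebooks here are hand-picked toy values — real systems learn them with k-means — and none of this is JVector's actual code:

```python
# Product quantization sketch: a D-dimensional vector becomes M small codes,
# one per subspace, each code indexing a learned centroid.

def nearest(point, centroids):
    """Index of the centroid closest to `point` (squared L2 distance)."""
    dists = [sum((p - c) ** 2 for p, c in zip(point, cen)) for cen in centroids]
    return dists.index(min(dists))

def pq_encode(vec, codebooks):
    """Encode: one small code per subvector instead of the full floats."""
    m = len(codebooks)        # number of subspaces
    d = len(vec) // m         # dimensions per subspace
    return [nearest(vec[i * d:(i + 1) * d], codebooks[i]) for i in range(m)]

def pq_decode(codes, codebooks):
    """Approximate reconstruction: concatenate the chosen centroids."""
    out = []
    for code, book in zip(codes, codebooks):
        out.extend(book[code])
    return out

codebooks = [
    [(0.0, 0.0), (1.0, 1.0)],   # subspace 1: two centroids
    [(0.0, 1.0), (1.0, 0.0)],   # subspace 2: two centroids
]
codes = pq_encode([0.9, 1.1, 0.1, 0.9], codebooks)
assert codes == [1, 0]
assert pq_decode(codes, codebooks) == [1.0, 1.0, 0.0, 1.0]
```

With 256 centroids per subspace, each subvector costs one byte, which is where PQ's large compression ratios come from; accuracy then depends on re-ranking, as the episode notes.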
Kwabena Agyeman joined Chris and Elecia to talk about optimization, cameras, machine learning, and vision systems. Kwabena is the head of OpenMV (openmv.io), an open source and open hardware system that runs machine learning algorithms on vision data. It uses MicroPython as a development environment so getting started is easy. Their github repositories are under github.com/openmv. You can find some of the SIMD details we talked about on the show: 150% faster: openmv/src/omv/imlib/binary.c 1000% faster: openmv/src/omv/imlib/filter.c Double Pumping: openmv/src/omv/modules/py_tv.c Kwabena has been creating a spreadsheet of different algorithms in camera frames per second (FPS) for Arm processors: Performance Benchmarks - Google Sheets. As time moves on, it will grow. Note: this is a link on the OpenMV website under About. When M55 stuff hits the market expect 4-8x speed gains. The OpenMV YouTube channel is also a good place to get more information about the system (and vision algorithms). Kwabena spoke with us about (the beginnings of) OpenMV on Embedded 212: You Are in Seaworld. Transcript Elecia is giving a free talk for O'Reilly to advertise her Making Embedded Systems, 2nd Edition book. The talk will be an introduction to embedded systems, geared towards software engineers who are suddenly holding a device and want to program it. The talk is May 23, 2024 at 9:00 AM PDT. Sign up here. A video will be available afterward for folks who sign up.
Fredrik has Matt Topol and Lars Wikman over for a deep and wide chat about Apache Arrow and many, many topics in the orbit of the language-independent columnar memory format for flat and hierarchical data. What does that even mean? What is the point? And why does Arrow only feel more and more interesting and useful the more you think about deeply integrating it into your systems? Feeding data to systems fast enough is a problem which is focused on much less than it ought to be. With Arrow you can send data over the network, process it on the CPU - or GPU for that matter- and send it along to the database. All without parsing, transformation, or copies unless absolutely necessary. Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi. Links Lars Matt Øredev Matt's Øredev presentations: State of the Apache Arrow ecosystem: How your project can leverage Arrow! 
and Leveraging Apache Arrow for ML workflows Kallbadhuset Apache Arrow Lars talks about his Arrow rabbit hole in Regular programming SIMD/vectorization Spark Explorer - builds on Polars Null bitmap Zeromq Airbyte Arrow flight Dremio Arrow flight SQL Influxdb Arrow flight RPC Kafka Pulsar Opentelemetry Arrow IPC format - also known as Feather ADBC - Arrow database connectivity ODBC and JDBC Snowflake DBT - SQL to SQL Jinja Datafusion Ibis Substrait Meta's Velox engine Arrow's project management committee (PMC) Voltron data Matt's Arrow book - In-memory analytics with Apache Arrow Rapids and Cudf The Theseus engine - accelerator-native distributed compute engine using Arrow The composable codex The standards chapter Dremio Hugging face Apache Hop - orchestration data scheduling thing Directed acyclic graph UCX - libraries for finding fast routes for data Infiniband NUMA CUDA GRPC Foam bananas Turkish pepper - Tyrkisk peber Plopp Marianne Titles For me, it started during the speaker's dinner Old, dated, and Java A real nerd snipe Identical representation in memory Working on columns It's already laid out that way Pass the memory, as is Null plus null is null A wild perk Arrow into the thing So many curly brackets you need to store Arrow straight through Something data people like to do So many backends The SQL string is for people I'm rude, and he's polite Feed the data fast enough A depressing amount of JSON Arrow the whole way through These are the problems in data Reference the bytes as they are Boiling down to Arrow Data lakehouses Removing inefficiency
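The columnar layout and null bitmap that come up in the conversation can be sketched in plain Python. This illustrates the idea only — Arrow's real columns are contiguous typed buffers shared zero-copy across languages, not Python lists:

```python
# Columnar sketch: a column is a buffer of values plus a validity (null)
# bitmap, instead of a list of per-row objects. Nulls cost one bit each.

class Column:
    def __init__(self, values):
        # Value buffer: nulls get a placeholder slot so offsets stay aligned.
        self.data = [v if v is not None else 0 for v in values]
        # Validity bitmap: bit i set means row i is non-null.
        self.validity = 0
        for i, v in enumerate(values):
            if v is not None:
                self.validity |= 1 << i

    def is_valid(self, i):
        return (self.validity >> i) & 1 == 1

    def sum(self):
        """Aggregate over valid slots only."""
        return sum(v for i, v in enumerate(self.data) if self.is_valid(i))

col = Column([3, None, 4, None, 5])
assert col.sum() == 12
assert not col.is_valid(1)
```

Because every consumer agrees on this layout, the "pass the memory, as is" point from the episode follows: the same buffers can be handed over a socket, to the GPU, or to a database driver without re-encoding.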
Matthias Kretz joins Phil and Timur. Matthias talks about SIMD, including what it is, how it works, and what it's useful for. We also discuss his proposal to introduce SIMD vocabulary types and functionality into the C++ standard, and how it relates to what was in the Parallelism TS. Show Notes News MISRA C++ 2023 published Sonar webinar on MISRA C++ 2023 with Andreas Weis nlohmann/json 3.11.3 released reflect-cpp - Reflection library for C++20 Links P1928R7 - "std::simd — merge data-parallel types from the Parallelism TS 2" Matthias' CppCon 2023 talk on std::simd
Weronika Michaluk Digital Health Principal and SaMD Lead at HTD, a company specialising in the planning, designing and development of custom healthcare software. Weronika is an experienced professional with a diverse background in the fields of biomedical engineering, international business and public health. Her career began as a Biomedical Engineer, where she contributed to the development of various biomedical devices, including a wireless ECG system, then she worked in South Korea in the Neuroscience Department and after that, she focused on digital health solutions and consulting in the medical device space. In this episode, we delve into the world of software medical devices, explore the agile and waterfall approaches in software development and their application to regulated medical devices. We discover the crucial role of software in medical devices, uncover how some companies unintentionally market unregulated medical products and learn how to stay updated with the ever-evolving regulations. Timestamps: [00:00:10] What makes a software a Medical Device [00:05:57] Complexity of regulating software [00:11:52] Agile individuals and interactions over processes [00:17:30] Software medical device: engineering, research, business and coding [00:23:00] How to keep up with regulations Get in touch with Weronika Michaluk - https://www.linkedin.com/in/weronika-michaluk-mba-43811698/ https://htdhealth.com/ Get in touch with Karandeep Badwal - https://www.linkedin.com/in/karandeepbadwal/ Follow Karandeep on YouTube - https://www.youtube.com/@KarandeepBadwal --- Support this podcast: https://podcasters.spotify.com/pod/show/themedtechpodcast/support
Mark Gillard joins Timur and guest co-host Jason Turner. Mark talks to us about reflection, SIMD, and his library soagen, a structure-of-arrays generator for C++. Show Notes News What is Low Latency C++? C++Now 2023, part 1 What is Low Latency C++? C++Now 2023, part 2 Inside STL: The vector Inside STL: The string Experimenting with Modules in Flux pycmake cpptrace Links Soagen on GitHub Soagen documentation Mike Acton: Data-Oriented Design and C++ at CppCon 2014 Bryce Adelstein Lelbach on SoA and reflection at ACCU 2023 Data-Oriented Design and Modern C++ at CppNow 2023 Godbolt's law toml++ on GitHub PVS-Studio: 60 terrible tips for a C++ developer
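The structure-of-arrays transform that soagen automates for C++ can be shown in miniature. This Python sketch is only an illustration of the layout change, not soagen's generated code:

```python
# AoS vs SoA: instead of a list of "structs" (array of structures), keep one
# parallel array per field (structure of arrays), so a per-field scan reads
# one dense, contiguous sequence.

# Array-of-structures: one record per particle.
aos = [{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 4.0}, {"x": 5.0, "y": 6.0}]

# Structure-of-arrays: one array per field.
soa = {
    "x": [p["x"] for p in aos],
    "y": [p["y"] for p in aos],
}

# A field-wise pass now touches only the data it needs — the access pattern
# SIMD lanes and cache prefetchers like.
assert soa["x"] == [1.0, 3.0, 5.0]
assert sum(soa["y"]) == 12.0
```

In C++ the payoff is that a hot loop over one field loads only that field's bytes into cache and vectorizes cleanly; soagen's job is generating the boilerplate (parallel arrays, iterators, resize logic) that the hand-written SoA version would otherwise require.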
In this summer episode, Guillaume, Emmanuel and Arnaud go through the early-summer news. Java, Rust and Go on the languages side, Micronaut and Quarkus for frameworks, but also WebGPU, agility, DDD, surveys, plenty of tools, and above all artificial intelligence in every flavor (in databases, in cars…). Recorded July 21, 2023. Episode download: LesCastCodeurs-Episode-298.mp3 News Languages The Go 1.21 release candidate supports WASM and WASI natively https://go.dev/blog/go1.21rc StringBuilder or String concatenation https://reneschwietzke.de/java/the-stringbuilder-advise-is-dead-or-isnt-it.html StringBuilder used to be the recommendation, notably because it created fewer objects, but the JVM has evolved and the compiler or JIT now replaces concatenation with efficient code. A few small exceptions: cold code (e.g. at startup time) that is still interpreted can still benefit from StringBuilder; another case is concatenation in loops, which the JIT might not be able to optimize; "fluid" StringBuilder use is more efficient (inlined?)
These rules do not change when objects are stringified in order to be concatenated. GPT-4: not a revolution https://thealgorithmicbridge.substack.com/p/gpt-4s-secret-has-been-revealed According to the rumor behind all the secrecy, it is not a 1-trillion-parameter model but 8 models of 220 billion parameters combined cleverly. Researchers were expecting a breakthrough, but it is an evolution and not particularly new: the method had already been implemented by researchers at Google (now at OpenAI), and the breakthrough rumors delayed the competition; still, 8 LLaMAs might be able to rival GPT-4. Google's open source blog has an article on 5 myths (or not) about learning and using Rust https://opensource.googleblog.com/2023/06/rust-fact-vs-fiction-5-insights-from-googles-rust-journey-2022.html It takes more than 6 months to learn Rust: mostly false; a few weeks to 3-4 months max. The Rust compiler is not as fast as we would like — true! Unsafe code and interop are the biggest challenges — false, it is rather macros, ownership/borrowing, and asynchronous programming. Rust gives great compile error messages — true. Rust code is high quality — true. InfoQ publishes a new guide on pattern matching for Java's switch https://www.infoq.com/articles/pattern-matching-for-switch/ Pattern matching supports all reference types; the article covers the null case, "guarded" patterns with the when keyword, the importance of case order, using pattern matching with a switch's default, the scope of pattern variables, one pattern per case label, a single match-all case per switch block, exhaustiveness of type coverage, the use of generics, and error handling with MatchException. Libraries Micronaut 4 released https://micronaut.io/2023/07/14/micronaut-framework-4-0-0-released/ Minimum language versions: Java 17, Groovy 4 and Kotlin 1.8
Support for the latest GraalVM version. Uses the GraalVM Reachability Metadata Repository to make Native Image easier to use. Gradle 8. New Expression Language, evaluated at compile time, not at runtime (for security reasons and to support pre-compilation). Support for Virtual Threads. New HTTP layer, eliminating reactive stack frames when reactive mode is not used. Experimental support for io_uring and HTTP/3. Annotation-based filters. The HTTP client now uses the Java HTTP Client. Generation of Micronaut clients and servers from OpenAPI files. YAML support no longer uses the SnakeYAML dependency (which had security issues). The transition to Jakarta is complete. And many other module updates. InfoQ coverage: https://www.infoq.com/news/2023/07/micronaut-brings-virtual-thread/ Quarkus 3.2 and LTS https://quarkus.io/blog/quarkus-3-2-0-final-released/ https://quarkus.io/blog/quarkus-3-1-0-final-released/ https://quarkus.io/blog/lts-releases/ Infrastructure Red Hat shares the sources of its distribution through its Customer Portal, impacting the community built on top of it https://almalinux.org/blog/impact-of-rhel-changes/ Red Hat announced another massive change affecting all rebuilds and forks of Red Hat Enterprise Linux: going forward, Red Hat will publish the source code for RHEL RPMs only behind its customer portal. Since all RHEL clones depend on the published sources, this once again disrupts the entire Red Hat ecosystem. 
An analysis of Red Hat's choice regarding the distribution of RHEL's source code https://dissociatedpress.net/2023/06/24/red-hat-and-the-clone-wars/ A response from Red Hat to the fires started by the announcement that RHEL sources would no longer be publicly distributed https://www.redhat.com/en/blog/red-hats-commitment-open-source-response-gitcentosorg-changes and a link to one of those fires, from a prominent figure in the Ansible community https://www.jeffgeerling.com/blog/2023/im-done-red-hat-enterprise-linux Oracle asks to keep Linux open and free https://www.oracle.com/news/announcement/blog/keep-linux-open-and-free-2023-07-10/ Following the IBM/Red Hat announcement, Oracle asks to keep Linux open and free. IBM does not want to publish RHEL's code because it has to pay its engineers, even though Red Hat sustained that business model for years. The article looks back at CentOS, which IBM "killed" in 2020. Oracle continues its efforts to keep Linux open and free; Oracle Linux will remain compatible with RHEL up to version 9.2, after which compatibility will be hard to maintain. Oracle is hiring Linux developers, and asks IBM to pick up Oracle's downstream and distribute it. SUSE forks RHEL https://www.suse.com/news/SUSE-Preserves-Choice-in-Enterprise-Linux/ SUSE is the company behind Rancher, NeuVector, and SUSE Linux Enterprise (SLE). It announces a fork of RHEL, a $10M investment in the project over the coming years, and guaranteed compatibility with RHEL and CentOS. Web Google sells its domain name service to Squarespace https://www.reddit.com/r/webdev/comments/14agag3/squarespace_acquires_google_domains/ and it wasn't free, so we weren't supposed to be the product :wink: Squarespace is an American company specializing in building websites, and has long been a Google Workspace reseller. The sale should close in Q3 2023. A short introduction to WebGPU, in French 
https://blog.octo.com/connaissez-vous-webgpu/ Data With the Large Language Model craze, vector databases are getting more and more attention, as a way to store "embeddings" (vectors of floating-point numbers that semantically represent text, or even images). An article explains that vectors are the new JSON in relational databases like PostgreSQL https://jkatz05.com/post/postgres/vectors-json-postgresql/ It covers in particular pgvector, a PostgreSQL extension that adds vectors as a column type https://github.com/pgvector/pgvector Google Cloud announces the integration of precisely this vector extension into Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL https://cloud.google.com/blog/products/databases/announcing-vector-support-in-postgresql-services-to-power-ai-enabled-applications There is also a video, a Colab notebook, and a more technically detailed article using LangChain https://cloud.google.com/blog/products/databases/using-pgvector-llms-and-langchain-with-google-cloud-databases Meanwhile, Elastic is improving Lucene to use SIMD instructions to speed up vector computations (dot product, Euclidean distance, cosine similarity) https://www.elastic.co/fr/blog/accelerating-vector-search-simd-instructions Tooling The 2023 Stack Overflow survey https://survey.stackoverflow.co/2023/ The survey polled 90,000 developers in 185 countries. More developers (+2%) than last year work on site (16% on site, 41% remote, 42% hybrid). Developers also increasingly use AI tools: 70% say they use them (44%) or plan to use them (25%) in their work. The most popular programming languages are still JavaScript, Python and HTML/CSS. 
The most popular web frameworks are Node, React, and jQuery
The most popular databases are PostgreSQL, MySQL, and SQLite
The most popular operating systems are Windows, then macOS and Linux
The most popular IDEs are Visual Studio Code, Visual Studio, and IntelliJ IDEA
The different kinds of motions in Vim https://www.barbarianmeetscoding.com/boost-your-coding-fu-with-vscode-and-vim/moving-blazingly-fast-with-the-core-vim-motions/
JetBrains joins the in-IDE AI-assistant trend https://blog.jetbrains.com/idea/2023/06/ai-assistant-in-jetbrains-ides/
an integration with OpenAI, but also smaller JetBrains-specific LLMs
an integrated chat to talk with the assistant, plus the ability to insert code snippets where the cursor is
you can select code and ask the assistant to explain what it does, suggest a refactoring, or fix potential problems
you can ask it to generate the JavaDoc of a method or class, or to suggest a method name (based on its content)
commit-message generation
a JetBrains AI account is required for access
More or less well-known macOS commands https://saurabhs.org/advanced-macos-commands
caffeinate — keep the Mac awake
pbcopy / pbpaste — interact with the clipboard
networkQuality — measure the speed of your internet access
sips — manipulate / resize images
textutil — convert Word, text, and HTML files
screencapture — take a screenshot
say — give your commands a voice
The ArgoCD community survey https://blog.argoproj.io/cncf-argo-cd-rollouts-2023-user-survey-results-514aa21c21df
An open-source, cross-platform API client for GraphQL, REST, WebSockets, Server-sent events, and gRPC https://github.com/Kong/insomnia
Architecture
Modernizing your architecture with domain-driven discovery https://www.infoq.com/articles/architecture-modernization-domain-driven-discovery/?utm_source=twitter&utm_medium=link&utm_campaign=calendar
A very detailed article on modernizing your architecture using a Domain-Driven Discovery approach, in five steps:
Frame the problem – clarify the problem you are solving, the people affected, the desired outcomes, and the solution constraints
Analyze the current state – explore the existing business processes and system architecture to establish a baseline for improvement
Explore the future state – design a modernized architecture based on bounded contexts, set strategic priorities, evaluate options, and create future-state solutions
Build a roadmap – create a plan to modernize the architecture over time based on workstreams or desired outcomes
Sfeir recently launched its development blog at https://www.sfeir.dev/
lots of technical articles on many topics: front end, back end, cloud, data, AI/ML, mobile
also trends and success stories
recent articles cover Alan Turing, Local Storage in JavaScript, preparing for React certifications, and the impact of cybersecurity on the cloud
Demis Hassabis says he is working on an AI named Gemini that will surpass ChatGPT https://www.wired.com/story/google-deepmind-demis-hassabis-chatgpt/
Demis Hassabis, CEO of Google DeepMind, creator of AlphaGo and AlphaFold
Working on an AI named Gemini that would surpass OpenAI's ChatGPT
Similar to GPT-4 but with techniques borrowed from AlphaGo
Still in development; it will take several more months
A replacement for Bard?
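The dot product, Euclidean distance, and cosine similarity mentioned in the Data section above are the basic primitives behind embedding search, whether exposed as pgVector column operators or accelerated with SIMD in Lucene. A minimal pure-Python sketch of the three metrics (illustrative only, not the pgVector or Lucene implementation):

```python
import math

def dot(a, b):
    # Dot product: the basic similarity primitive for embeddings
    return sum(x * y for x, y in zip(a, b))

def euclidean(a, b):
    # L2 (Euclidean) distance between two vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    # Dot product normalized by the two vector lengths; 1.0 means same direction
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a, b = [1.0, 2.0, 3.0], [3.0, 2.0, 1.0]
print(dot(a, b))                          # 10.0
print(round(euclidean(a, b), 4))          # 2.8284
print(round(cosine_similarity(a, b), 4))  # 0.7143
```

Real vector stores compute exactly these quantities, just over thousands of dimensions and millions of rows, which is why SIMD acceleration and dedicated index types matter.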
Methodologies
Approaching agility through individuals' past (development) traumas https://www.infoq.com/articles/trauma-informed-agile/?utm_campaign=infoq_content&utm_source=twitter&utm_medium=feed&utm_term=culture-methods
We all carry development trauma that makes it hard to collaborate with others - a crucial part of working in agile software development
Leading in a trauma-informed way is not practicing unsolicited psychotherapy, and it does not excuse destructive behavior without addressing it
Being more trauma-sensitive in your leadership can help everyone act more maturely and be more cognitively available, especially in emotionally difficult situations
In trauma-informed workplaces, people pay more attention to their physical and emotional state. They also rely more on the power of intention, set goals in a less manipulative way, and can be empathetic without taking on other people's problems
Law, society, and organization
Mercedes is adding artificial intelligence to its cars https://azure.microsoft.com/en-us/blog/mercedes-benz-enhances-drivers-experience-with-azure-openai-service/
A three-month beta-test program for now
"Hey Mercedes" voice assistance
Lets you talk with the car to find your way, put together a recipe, or simply have a conversation
They are working on plugins to book a restaurant or buy movie tickets
Free software vs open source in the context of artificial intelligence, by Sacha Labourey https://medium.com/@sachalabourey/ai-free-software-is-essential-to-save-humanity-86b08c3d4777
there is a lot of talk about AI and open source, but the dimension of end-user control is missing
Stallman created the FSF out of fear of the notion of humans augmented by software controlled by others (brain implants, etc.), hence the GPL and its virality, which propagates the ability to see and modify the code you run
in the AI debate, it is not only open source (breaking the oligopoly) but also free software that is at stake
The madness of the European Cyber Resilience Act (CRA) https://news.apache.org/foundation/entry/save-open-source-the-impending-tragedy-of-the-cyber-resilience-act
Within the EU, the Cyber Resilience Act (CRA) is now making its way through the legislative process (with a key vote due on July 19, 2023)
This law will apply to a wide range of software (and hardware with embedded software) in the EU
The intent of the regulation is good (and arguably long overdue): make software much more secure
The CRA takes a binary yes/no approach and treats everyone the same way
The CRA would regulate open-source projects unless they have "a fully decentralised development model"
But OSS models are complex mixtures of pure OSS and software vendors
commercial companies and open-source projects will have to be much more careful about which participants can work on the code, what funding they take, and which patches they can accept
Some of the obligations are practically impossible to meet, for example the obligation to "deliver a product without known exploitable vulnerabilities"
The CRA requires disclosure of serious unpatched, exploited vulnerabilities to ENISA (an EU institution) within a timeframe measured in hours, before they are fixed (completely contrary to good security practice)
Once again, a good idea at the outset, but implemented so badly that it risks doing a lot of damage
Octave Klaba, with his brother Miro and the Caisse des Dépôts, is finalizing the creation of Synfonium, which will now acquire 100% of Qwant and 100% of Shadow. Synfonium is 75% owned by Jezby Venture & Deep Code and 25% by the CDC. https://twitter.com/i/web/status/1673555414938427392
One of Synfonium's roles is to build the critical mass of B2C & B2B users and customers who will be able to use all these free and paid services
It will include the search engine, the free services, the collaborative suite, the social login, as well as services from their tech partners
The goal is to create an EU SaaS cloud platform that respects our European values and laws
Yann LeCun: "Artificial intelligence will amplify human intelligence" https://www.europe1.fr/emissions/linterview-politique-dimitri-pavlenko/yann-lecun-li[…]gence-artificielle-va-amplifier-lintelligence-humaine-4189120
Conferences
The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
September 2-3, 2023: SRE France SummerCamp - Chambéry (France)
September 6, 2023: Cloud Alpes - Lyon (France)
September 8, 2023: JUG Summer Camp - La Rochelle (France)
September 14, 2023: Cloud Sud - Remote / Toulouse (France)
September 18, 2023: Agile Tour Montpellier - Montpellier (France)
September 19-20, 2023: Agile en Seine - Paris (France)
September 19, 2023: Salon de la Data Nantes - Nantes (France) & Online
September 21-22, 2023: API Platform Conference - Lille (France) & Online
September 22, 2023: Agile Tour Sophia Antipolis - Valbonne (France)
September 25-26, 2023: BIG DATA & AI PARIS 2023 - Paris (France)
September 28-30, 2023: Paris Web - Paris (France)
October 2-6, 2023: Devoxx Belgium - Antwerp (Belgium)
October 6, 2023: DevFest Perros-Guirec - Perros-Guirec (France)
October 10, 2023: ParisTestConf - Paris (France)
October 11-13, 2023: Devoxx Morocco - Agadir (Morocco)
October 12, 2023: Cloud Nord - Lille (France)
October 12-13, 2023: Volcamp 2023 - Clermont-Ferrand (France)
October 12-13, 2023: Forum PHP 2023 - Marne-la-Vallée (France)
October 19-20, 2023: DevFest Nantes - Nantes (France)
October 19-20, 2023: Agile Tour Rennes - Rennes (France)
October 25-27, 2023: ScalaIO - Paris (France)
October 26, 2023: Codeurs en Seine - Rouen (France)
October 26-27, 2023: Agile Tour Bordeaux - Bordeaux (France)
October 26-29, 2023: SoCraTes-FR - Orange (France)
November 10, 2023: BDX I/O - Bordeaux (France)
November 15, 2023: DevFest Strasbourg - Strasbourg (France)
November 16, 2023: DevFest Toulouse - Toulouse (France)
November 23, 2023: DevOps D-Day #8 - Marseille (France)
November 30, 2023: PrestaShop Developer Conference - Paris (France)
November 30, 2023: WHO run the Tech - Rennes (France)
December 6-7, 2023: Open Source Experience - Paris (France)
December 7, 2023: Agile Tour Aix-Marseille - Gardanne (France)
December 7-8, 2023: TechRocks Summit - Paris (France)
December 8, 2023: DevFest Dijon - Dijon (France)
Contact us
To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us on Twitter https://twitter.com/lescastcodeurs
Submit a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/
If you have developed software and discover that it is considered a medical device, what do you do? Or if your software is inside a medical device, is that different from a normal medical device? In this episode with Anindya Mookerjea from S-Cube Technologies, we explain the steps you need to follow to create a Design Dossier for your product. This is an important step, as it is required for certification by the authorities. So let's dig into it.
Who is Anindya Mookerjea? Anindya is the founder and CEO of S-Cube Technologies. He has over 20 years of experience in the medical device and software industry. He is experienced in taking medical device concepts through design and development, regulatory submission, and to market; an expert in the design, development, and implementation of FDA/ISO-compliant quality management systems for medical devices and software medical devices; an industry expert in Agile methodology implementation for software medical devices; and has a strong background in testing software medical devices. The SmartEye product is his brainchild.
Who is Monir El Azzouzi? Monir El Azzouzi is a medical device expert specializing in Quality and Regulatory Affairs. After working for many years with big healthcare companies, particularly Johnson and Johnson, he decided to create EasyMedicalDevice.com to help people better understand medical device regulations all over the world. He has since created the consulting firm Easy Medical Device GmbH and developed many ways to deliver knowledge through videos, podcasts, online courses… His company also acts as Authorized Representative for the EU, UK, and Switzerland. Easy Medical Device has become a one-stop shop for medical device manufacturers that need support on Quality and Regulatory Affairs.
Links
Anindya Mookerjea LinkedIn profile: https://www.linkedin.com/in/anindya-mookerjea-6b5367209/
SCube Technologies website: https://scube-technologies.com/
SCube Technologies LinkedIn page: https://www.linkedin.com/company/scube-technologies/
Twitter account: https://twitter.com/scubetechno
Facebook account: https://www.facebook.com/i.scubetechnologies
Article on medical device development: https://easymedicaldevice.com/medical-device-development/
What is software as a medical device (SaMD)? How do you test SaMD? What testing is required? What are the risks related to your SaMD and its testing? In this episode of the Global Medical Device Podcast, Etienne Nichols talks to Rahul Kallampunathil, Vice President of Arbour Group's Digital Compliance practice, about digitizing your SaMD testing. Rahul builds teams that help companies proactively manage compliance risks toward a true digital enterprise. He is an innovative thinker with more than 18 years of experience in risk management, compliance, and internal controls focused on information technology and data. Some highlights of this episode include:
Rahul defines SaMD as software used to perform a medical purpose without being part of a hardware medical device. SaMD is capable of running on any general-purpose platform, such as your Android or iOS mobile phone.
SaMD is different from software in a medical device (SiMD): SaMD is independent of hardware, while SiMD is embedded in a physical medical device.
There are also differences between SaMD and medical device data systems (MDDS). An MDDS only transfers, stores, or displays medical device data; it does not have an algorithm or business rule to help make medical decisions.
IEC 62304 affects SaMD organizations and how they approach the risk of their solution. Not all SaMDs are equal, and it's important to understand the level of risk in every SaMD.
Companies should prepare for SaMD testing with a clinical evaluation to demonstrate a valid clinical association between the SaMD's output and the targeted clinical condition.
Before thinking about designing and developing a product, a quality management system (QMS) should be established.
Software companies need to adopt and modify their QMS to serve their product and its users while fulfilling FDA requirements.
Rahul discusses the pros and cons of manual versus automated/electronic documentation and testing, including risk management for patient safety.
Best practices for SaMD testing include using Agile and DevOps methodologies. A potential pitfall is failing to test continuously, even after the product is on the market.
Memorable quotes from Rahul Kallampunathil:
"Software as a medical device - it is software that is intended to be used for a medical purpose, and it performs that purpose without being part of a hardware medical device."
"There is no physical device, in this case, that you can touch and feel. It's purely software."
"The nature of software - it's something that you cannot touch or see. It's all code. From a testing perspective, especially, there's a lot of things that you need to pay attention to."
"The quality management system or quality management process, that has to be established even before you think about the product, even before you design the product."
Links:
Rahul Kallampunathil on LinkedIn
Etienne Nichols on LinkedIn
Arbour Group LLC
FDA - SaMD
FDA - Medical Device Data Systems (MDDS)
FDA - Cybersecurity
IEC 62304
International Medical Device Regulators Forum (IMDRF)
Greenlight Guru YouTube Channel
Greenlight Guru Community
Greenlight Guru Academy
MedTech Nation
Greenlight Guru
Dr. Mark Korson from Tufts Floating Hospital for Children gives a "crash course" in interpreting lab values! Most patients with mitochondrial disease have faced a page of test results made up of letters and numbers that would help them understand their current illness, if only the information made sense. CBC, CMP, LFTs, CPK, OAA and more... join us as we figure it out!
Labs covered: CBC; CMP; lactic acid; pyruvic acid; amino acids; organic acids; ammonia; electrolytes; glucose; bicarbonate/CO2; metabolic labs; CPK; LFTs; carnitine; TFTs (TSH, T4, T3); cerebral folate
About the speaker: Mark Korson graduated from the University of Toronto medical school and completed his pediatric residency nearby at The Hospital for Sick Children. He came to Boston for a fellowship in genetics and metabolism at Children's Hospital. Following that, he directed the Metabolism Clinic at Children's until 2000, transferring then to Tufts Medical Center's Floating Hospital for Children. He is currently the Director of the Metabolism Service and an Associate Professor of Pediatrics at Tufts University School of Medicine. Besides clinical medicine, a key focus for Dr. Korson is education. He is concerned about the growing crisis in metabolic health care due to the shortage of clinicians available to treat this community. To complicate this situation, too few people are entering this subspecialty. In the fall of 2007, Dr. Korson launched the Metabolic Outreach Service, for which he has travelled on a regular basis to five teaching hospitals in the northeastern US where there is no on-site metabolic service. The goal is to provide educational and consultative support so that non-metabolic clinicians can learn how to participate more in the diagnosis and management of patients with metabolic disease.
A component of this effort is the Patient-As-Teacher Project, which engages patients and family members to participate actively in the teaching of medical students, house-staff, primary care providers and specialists. The Outreach Service is funded by a consortium of corporate and disease foundation sponsors. In addition, Dr. Korson co-directs the North American Metabolic Academy, a one-week intensive course about metabolic disease for genetic and metabolic trainees. NAMA is sponsored by the SIMD, the Society for Inherited Metabolic Diseases.
Rob and Jason are joined by Joël Falcou and Denis Yaroshevskiy. They first talk about the 6.2 release of Qt and the range-based for loop bug that won't be getting fixed in C++23. Then they talk to Joël and Denis about EVE, a C++20 SIMD library that evolved from Boost.SIMD.
News: Qt 6.2 LTS Released; GDBFrontend; C++ Committee doesn't want to fix the range-based for loop in C++23 (broken for 10 years)
Links: EVE on GitHub; EVE example on Compiler Explorer; CppCon 2021: SIMD in C++20: EVE of a new Era; Meeting C++ 2021 - EVE: A new, powerful open source C++20 SIMD library; C++Russia EVE Talk; Denis Yaroshevskiy - my first SIMD - Meeting C++ online
Sponsors: Use code JetBrainsForCppCast during checkout at JetBrains.com for a 25% discount
In this episode: we discuss the open-source ISPC compiler, SIMD, DPC++, and other interesting concepts. We mention the new PostgreSQL release. We advise against watching yet another TileDB video from the Vaccination DB Talks series and, of course, cover listener topics.
Show notes:
Topics and listener questions for episode 0354 https://devzen.ru/themes-0354/
https://web.stanford.edu/class/ee384m/Handouts/HowtoReadPaper.pdf
Interview with the guest:
https://github.com/ispc/ispc/
https://twitter.com/dmitrybabokin
https://github.com/intel/llvm/blob/sycl/sycl/doc/extensions/ExplicitSIMD/dpcpp-explicit-simd.md
https://software.intel.com/content/www/us/en/develop/tools/oneapi/data-parallel-c-plus-plus.html#gs.c91xxj
https://pharr.org/matt/blog/2018/04/18/ispc-origins…
Software Engineering Radio - The Podcast for Professional Software Developers
Luis Ceze of OctoML discusses Apache TVM, an open source machine learning model compiler for a variety of different hardware architectures, with host Akshay Manchale. Luis talks about the challenges of deploying models on specialized hardware and how TVM addresses them.
Ecosystem news
· Announcement of the Memorandum of Understanding (MOU) between France and the Netherlands. https://www.government.nl/documents/diplomatic-statements/2021/08/31/joint-statement-of-france-and-the-netherlands
· PsiQuantum raises $450M. https://www.hpcwire.com/off-the-wire/psiquantum-closes-450m-funding-to-build-commercially-viable-quantum-computer/
· Quantum Machines raises $50M. https://www.timesofisrael.com/quantum-machines-nabs-50m-investment-to-make-quantum-computers-more-accessible/
· The #GEN2021 conference with Pascale Senellart on September 10 in Metz.
· SIDO in Lyon with Alexia Auffèves, Marc Kaplan (Veriqloud), and Marc Porcheron (EDF), taking a somewhat different angle on the quantum topic. https://pulse.sido-lyon.com/fr/session/11c297ef-aab1-eb11-94b3-000d3a219024
· A new minor track on quantum computing at EPITA.
· Teaser: Olivier's new Quantum book, in English, yes it is!!!
Scientific news
First logical qubits: over the summer, two papers were published by IBM and Honeywell (HQS).
Multi-core quantum processors: several announcements over the summer about "scale-out" of quantum computing, increasing compute power by linking several quantum processing units.
· Rigetti and its multi-core processors. Cores of only 4 qubits. https://arxiv.org/pdf/2102.13293.pdf
· IonQ with a similar approach: its trapped ions are managed in clusters of 16 (12 usable, the remaining 4 used for cooling). Reconfigurable Multicore Quantum Architecture (RMQA). https://ionq.com/news/august-25-2021-reconfigurable-multicore-quantum-architecture
· Publication of an AMD patent on SIMD.
Distribution of quantum processing across several compute units that share the same instructions. https://www.tomshardware.com/news/amd-teleportation-quantum-computing-patent
Silicon qubit scalability
Google Time Crystals https://www.washingtonpost.com/technology/2021/08/12/timecrystal-google/
Cloud
OQC launches its cloud offering in the UK, without specifying the number of qubits. Quantum Computing-as-a-Service (QCaaS). https://oxfordquantumcircuits.com/oqc-delivers-first-eu-qcaas
Deployment of a quantum computer in Abu Dhabi https://gulfnews.com/business/abu-dhabi-builds-regions-first-ever-quantum-computer-1.81644751
Hype: Quantum Computing Hype is Bad for Science by Victor Galitski, University of Maryland, July 2021.
In this episode Gudrun talks with Sarah Bischof, Timo Bohlig, and Jonas Albrecht. The three took part in the project-oriented software lab (Projektorientiertes Softwarepraktikum) in the 2021 summer semester. The lab was conceived in 2010 as a research-oriented learning environment. Students from different degree programs work there for a semester on concrete flow simulations. It is offered regularly in the summer semester. Since 2014 it has been based on the open source software OpenLB, which is continuously developed in, among others, the Karlsruhe Lattice Boltzmann Research Group (LBRG). Concretely, the lab runs roughly as follows: the students receive a theoretical introduction to flow models, the idea of lattice Boltzmann methods, and the use of the high-performance computers at KIT. They also form groups for a small introductory project. They then choose a question from a catalog, which they answer together by the end of the semester using computer simulations. These questions are parts of the group's research topics, e.g. from PhD projects or third-party-funded research. During the project phase the students are supervised by the group's doctoral researcher who posed the respective question. At the end of the semester the results are presented and discussed in talks, or a podcast episode is recorded. A written report additionally lays out the modeling, the implementation in OpenLB, and the concrete simulation results in detail and places them in the current state of research. Sarah, Timo, and Jonas are enrolled in the master's program in chemical engineering at KIT. Apart from the various mathematics master's programs, most of those interested in the software lab come from this program.
In the podcast they explain what interests them about flow simulation, to what extent they felt well (or not so well) prepared for the requirements, how they divided the work in the group, and what technical and non-technical things they learned there. The project topic was a benchmark for the flow through the aorta. This is one of the showcases for OpenLB, which are meant to demonstrate the software's capabilities at first glance. The group split the project into three parts: benchmark tests on the bwUniCluster 2.0 (a high-performance computer); performance analysis with a self-written source-code extension; and performance analysis with external software (to validate the source-code extension). Using the benchmark tests on the HPC, the maximum scalability of the aorta source code as a function of problem size could be determined. It indicates on how many processors the showcase can be simulated with the highest performance. Furthermore, the parallel efficiency was examined using the speedup metric, which describes how the simulation time decreases as the number of processors increases. In both cases the performance indicators showed a maximum at 400-700 processor units for problem sizes up to a resolution of N = 80. The OpenLB package, in release 1.4r0, does not include detailed interfaces for performance measurement. Through a source-code extension, internal timing of the individual functions of the code was implemented. So-called bottlenecks were identified and documented, which are to be eliminated by updates in future versions of the software. The code extension also made it possible to draw conclusions about the parallelization.
In contrast to the benchmark tests, functions of the source code that hinder parallelization can be identified directly. The performance analysis with the external program and with the source-code extension both confirm a well-functioning parallelization. This was realized by measuring the runtime of the main steps of an OpenLB simulation, along with a detailed analysis of individual functions. At present these are found in the post-processing of the "collide and stream" step of the simulation. Collide and stream comprises a local computation step, plus one local and one non-local transfer step. The collision step determines a local equilibrium of the mass, momentum, and energy balances. In the non-local streaming step these values are transferred to the adjacent blocks of the simulation grid. Compared with CFD simulations that solve the Navier-Stokes equations using the finite-volume method (FVM), this allows more efficient parallelization, especially on an HPC system. Among other things, the post-processors in collide and stream apply certain boundary conditions, set in the previous step, to defined regions of the simulation geometry. They are only used for non-local boundary conditions; further boundary conditions can also be modeled within the collision step. In the aorta showcase, a Bouzidi velocity boundary condition with a Poiseuille flow profile is chosen for the fluid (blood) at the inlet of the simulation, and a "stress-free" condition at the outlet. For the aorta wall, a no-slip condition with zero fluid velocity is implemented (for more details on the simulation setup see here and here). The runtime of the post-processor functions, whose job is to apply the boundary conditions, cannot be analyzed with the timer of release 1.4r0.
With future releases in mind, the source-code extension now makes it possible to gather data on the efficiency of existing, new, or improved functions in OpenLB with little effort. An integrated time measurement used as an analysis tool can itself affect the performance of the source code, which is why the bottlenecks were validated with the external software AMDµProf. Both the internal and the external performance analysis identify the same post-processing steps as bottlenecks, which validates the code extension. In addition, the AMDµProf software was used to compare the current OpenLB version 1.4r0 with the previous version 1.3r1. It is notable that the bottlenecks shifted from the computation step in collide and stream (release 1.3r1) to the post-processing step in collide and stream (release 1.4r0). Finally, a vectorized version of OpenLB was successfully tested and likewise examined for bottlenecks. Vectorizing code, also known as SIMD, is meant to improve parallelization and give the aorta simulation better performance. The bottleneck of the post-processing step in collide and stream, caused specifically by the implementation of new Bouzidi boundaries, was optimized by another group within the project-oriented software lab. A performance improvement by a factor of 3 was achieved (with the OpenMP compiler). Targeted analysis of the bottlenecks in the code expanded the potential for accelerating the simulation. Of course, it is still worth examining where concrete potential for accelerating the simulation remains, especially since several paradigms in the OpenLB software have changed since the last relaunch.
Podcasts: L. Dietz, J. Jeppener, G. Thäter: Gastransport - conversation in the Modellansatz podcast, episode 214, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2019. A. Akboyraz, A. Castillo, G. Thäter: Poiseuillestrom - conversation in the Modellansatz podcast, episode 215, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2019. A. Bayer, T. Braun, G. Thäter: Binärströmung - conversation in the Modellansatz podcast, episode 218, Department of Mathematics, Karlsruhe Institute of Technology (KIT), 2019.
Literature and further information: Showcase blood flow simulation on the OpenLB website; Aortic Coarctation Simulation Based on the Lattice Boltzmann Method: Benchmark Results, Henn, Thomas; Heuveline, Vincent; Krause, Mathias J.; Ritterbusch, Sebastian; MRI-based computational hemodynamics in patients with aortic coarctation using the lattice Boltzmann methods: Clinical validation study, Mirzaee, Hanieh; Henn, Thomas; Krause, Mathias J.; Goubergrits, Leonid; Schumann, Christian; Neugebauer, Mathias; Kuehne, Titus; Preusser, Tobias; Hennemuth, Anja
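The speedup and parallel-efficiency metrics used in the project's benchmark analysis are simple ratios of measured runtimes. A small sketch (the timing values below are hypothetical illustrations, not the group's actual measurements):

```python
def speedup(t_serial, t_parallel):
    # Speedup S(p) = T(1) / T(p): how much faster p processors are than one
    return t_serial / t_parallel

def parallel_efficiency(t_serial, t_parallel, p):
    # Efficiency E(p) = S(p) / p: ideal (linear) scaling gives E = 1.0
    return speedup(t_serial, t_parallel) / p

# Hypothetical runtimes in seconds for one problem size, keyed by processor count
t1 = 1200.0
timings = {100: 14.0, 400: 4.0, 700: 3.5, 1000: 3.6}
for p, t in timings.items():
    print(p, round(speedup(t1, t), 1), round(parallel_efficiency(t1, t, p), 2))
```

In this invented data the speedup grows up to several hundred processors and then flattens while efficiency keeps dropping, which is the kind of curve behind the project's observed optimum at 400-700 processor units.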
SIMD, ARM, and everything else
How do you fly to the Moon?
The first phone from Tesla
The iPhone 12 mini, cancelled
The Apple HomePod, cancelled
A new almost-N900 phone
Emoji domains for email
This episode is about several of the z/OS V2.5 new functions, which were recently announced, for both the Mainframe and Performance topics. Our Topics segment is on Martin's open source tool filterCSV. Full long show notes are here. We have a guest for Performance: Nick Matsakis, z/OS Development, IBM Poughkeepsie. Many of the enhancements you'll see in the z/OS V2.5 Preview were provided on earlier z/OS releases via Continuous Delivery PTFs. The APARs are provided in the announcement. If you use FTPS for your IBM software electronic delivery, a change is taking place on April 30, 2021. We strongly recommend you use HTTPS instead, but if you still want to use FTPS see IBM software electronic delivery change - take notice!
Mainframe - Selected z/OS V2.5 enhancements
IBM will have z/OS installable with z/OSMF, in a portable software instance format! z/OS V2.4 will not be installable with z/OSMF, and z/OS V2.4 driving system requirements remain the same. z/OS V2.5 will be installable via z/OSMF, so that is a big driving system change. Learn z/OSMF Software Management at this website.
Notification of availability of TCP/IP extended services
Workload Manager (WLM) batch initiator management takes into account availability of zIIP capacity
Change the Master Catalog without an IPL
IDCAMS DELETE mask takes TEST and EXCLUDE
IDCAMS REPRO moves I/O buffers above the line
New RMF concept for CF data gathering
RMF has been restructured, but all the functions are still intact. z/OS V2.5 RMF is still a priced feature.
Performance - zCX enhancements
zCX is important for co-locating Linux on Z containers with z/OS. Popular use cases can be found here and in the Redbook here. Another helpful source is Ready for the Cloud with IBM zCX.
zIIP eligibility enhancements
New enhancements include support for 1 MB and 2 GB large pages (still fixed) for backing guests.
Guest memory is planned to be configurable up to 1 TB. Another relief is in disk space limits. Monitor and log zCX resource usage of the root disk, guest memory, swap disk, and data disks in the server's job log. z/OS alerts for zCX resource shortages improve monitoring and automated operations. SIMD (or Vector). SIMD is a performance feature and can be used for analytics. Nick's presentation (with Mike Fitzpatrick) is here. Topics - filterCSV and tree manipulation Mindmapping leads to trees. Thinking of z/OS: Sysplex -> System -> Db2 -> Connected CICS leads to trees. iThoughts has very little automation of its own, but crucially you can mangle the CSV file outside of iThoughts, which is what filterCSV does. filterCSV is an open source Python program that manipulates iThoughts CSV files, colouring CICS regions according to naming conventions. So it goes.
In this episode, David Delabassee discusses the new Vector API with John Rose and Paul Sandoz. For more episodes, check out https://inside.java/podcast. Resources: JEP 338: Vector API (Incubator) jdk.incubator.vector Javadoc OpenJDK 16 EA Builds Daniel Lemire's blog
Nell Shamrell-Harrington — lead editor of This Week in Rust — takes you through highlights from TWiR 345, published on June 29, 2020. Contributing to Rustacean Station Rustacean Station is a community project; get in touch with us if you’d like to suggest an idea for an episode or offer your services as a host or audio editor! Twitter: @rustaceanfm Discord: Rustacean Station Github: @rustacean-station Email: hello@rustacean-station.org Referenced resources Faster Rust Development on AWS EC2 with VSCode Rust Verification Tools Extremely Simple Rust Rocket Framework Tutorial Build a Smart Bookmarking Tool with Rust and Rocket Secure Rust Guidelines Examining ARM vs x86 Memory Models with Rust Rust Stream: Iterators Manipulating ports, virtual ports and pseudo terminals Database Project Gooseberry Ruma Crates.io token scopes Linking modifiers for native libraries Portable packed SIMD vector types Hierarchic anonymous life-time Inline const expressions and patterns Inline Assembly Deduplicate Cargo workspace information Credits Hosting Infrastructure: Jon Gjengset Show Notes: Nell Shamrell-Harrington Hosts: Nell Shamrell-Harrington
The January lull is over, and we bring you the first episode of the new season. Why is everyone talking about Blazor? Why vectorization in .NET? We compare the performance of .NET Framework and .NET Core, and more. P.S.: we have launched a new segment, "Tell Us About Your Project". If you have interesting and useful experience, please share it on social media, or better yet, join us for an episode. Thanks to everyone who listens. Don't hesitate to leave feedback and suggest your own topics. Download link: https://dotnetmore.ru/wp-content/uploads/2020/02/DotNetAndMore-28-BlazorAgain.mp3 Shownotes: - [0:01:34] Blazor - [0:26:41] Soft Skills - [0:38:20] A small overview of SIMD in .NET/C# - [0:46:42] Building a self-contained game in C# under 8 kilobytes - [0:53:12] Benchmark - ASP.NET 4.8 Vs ASP.NET Core 3.0 - [1:05:31] Using Local Functions to Replace Comments - [1:16:00] C# Coding Standards - [1:26:58] "Tell Us About Your Project" - [1:36:28] Legacy code and its inevitability Links: - https://habr.com/en/post/484596/ : Blazor Client-Side Online Store: Part 1 — Authorization with oidc (oauth2) + Identity Server4 - https://hackernoon.com/how-blazor-is-going-to-change-web-development-y32i3zvw : How Blazor Is Going to Change Web Development - https://habr.com/en/post/484822 : Blazor: how to keep a component from getting sick, or two approaches to separating code from markup - https://jimbuck.io/building-desktop-apps-with-blazor : Building Desktop Apps with Blazor - http://blog.stevensanderson.com/2019/11/01/exploring-lighter-alternatives-to-electron-for-hosting-a-blazor-desktop-app : Exploring lighter alternatives to Electron for hosting a Blazor desktop app - https://www.infoq.com/news/2020/01/mobile-blazor-bindings-apps : Blazor Makes Its Way into Cross-Platform Mobile App Development - https://devblogs.microsoft.com/aspnet/mobile-blazor-bindings-experiment/ : Announcing Experimental Mobile Blazor Bindings - https://habr.com/en/post/467689 : A small overview of SIMD in .NET/C# - https://tirania.org/blog/archive/2008/Nov-03.html : Mono's SIMD Support: Making Mono safe for Gaming - https://medium.com/@MStrehovsky/building-a-self-contained-game-in-c-under-8-kilobytes-74c3cf60ea04 : Building a self-contained game in C# under 8 kilobytes - https://www.c-sharpcorner.com/article/benchmark-asp-net-4-8-vs-asp-net-core-3-0/ : Benchmark - ASP.NET 4.8 Vs ASP.NET Core 3.0 - https://habr.com/en/post/481558 : .NET Core vs Framework: collection performance - https://aakinshin.net/posts/stopwatch/ : Stopwatch under the hood - http://dontcodetired.com/blog/post/Using-Local-Functions-to-Replace-Comments : Using Local Functions to Replace Comments - http://jesseliberty.com/2020/01/29/c-coding-standards : C# Coding Standards - https://habr.com/en/post/486456/ : Censorship in the .NET Framework source code - https://habr.com/en/company/microsoft/blog/483344/ : .NET docs what's new (December 2019) - https://www.infoq.com/news/2020/01/roslynator-analyzers-231 : C# Static Analysis Tool Roslynator.Analyzers Now Has over 500 Ways to Improve Code Listen to and download episodes on our site: https://dotnetmore.ru/podcast/28-blazor-again/ Don't forget to leave comments: https://vk.com/dotnetmore?w=wall-175299940_215
On today’s show, we welcome Justin Schneck and Frank Hunleth, luminaries from the Nerves team! We take a dive into the world of Nerves with them, covering themes of performance, problem-solving, transitioning to hardware, and breakthroughs in the field. We begin with a conversation on how Elixir handles performance issues on the range of devices they support, and Frank gets into how the team solved an early boot time discrepancy between a PC and a Raspberry Pi board. Other big themes for today are ironing out the kinks in the system registry model and merging soft real-time Erlang into hard real-time. After squeezing some information out of the guys about their use of ugly code hacks, we get into some visionary decisions as well as things the team wished they could have done differently (see the release of the new networking stack). Finally, we end off with what Frank and Justin are excited about as far as developments in the Nerves community, so be sure to plug into this one! **Key Points From This Episode: What Justin did in Tokyo, from soaking in hot springs to debugging in Kanji. An explanation of The Erlang Ecosystem Foundation, an embedded systems working group. The use of the VintageNet library for setting up multi-hop Nerves networks. How Elixir handles performance issues on the range of devices they support. A breakdown of troubleshooting processes as far as acceleration with FPGAs. Issues with dependencies that occur when starting a network node on a Nerves device. How Elixir is trying to evolve past the system registry model. Identifying the challenge of reconfiguring early boot time which Elixir is facing. How Elixir solved a load time discrepancy between a PC and the Raspberry Pi board. Which situations require hardware when Elixir is too slow, such as video encoding. Japanese research into GPU, FPGA and SIMD optimization involving wrapping code blocks. Merging Erlang, which is soft real-time, into hard real-time.
Examples of ugly but fast code hacks in Elixir. Hacks and the pitfalls of system registry, such as returning to a prompt when an app crashes. Things the team would have done differently in working with Nerves if they could rewind time. Why releasing a new networking stack means things could have been done differently. Lessons Justin and Frank learned moving from OTP to functional programming with Elixir. Exciting new developments and releases in the Nerves community. Links Mentioned in Today’s Episode: Nerves Project — https://nerves-project.org/ SmartLogic — https://smartlogic.io/ ElixirConf US — https://elixirconf.com/events The Erlang Ecosystem Foundation — https://erlef.org/ GRiSP — https://www.grisp.org/ Vintage Net — https://github.com/nerves-networking/vintage_net Joe Armstrong — https://joearms.github.io/ Erlang — https://www.erlang.org/ Linux — https://www.linux.org/ Special Guest: Frank Hunleth.
Today is a celebration for our podcast: exactly one year ago, the pilot, episode zero, was released. Congratulations to everyone who has been with us all this time and, of course, to the newcomers. To mark the anniversary, we offer an interview with Egor Bogatov, who told us about the secrets of .NET performance, the future of Mono, and more. Thanks to everyone who listens. Don't hesitate to leave feedback and suggest your own topics. PS: good news for our listeners from Krasnodar - the 2nd KrdDotNet meetup takes place on December 6! Details: https://krddotnet.timepad.ru/event/1118367/ Download link: https://dotnetmore.ru/wp-content/uploads/2019/11/DotNetAndMore-25-Anniversary.mp3 Links: - https://youtu.be/n3-j_sTtGb0: Egor Bogatov — Optimizations inside .NET Core - https://devblogs.microsoft.com/dotnet/hardware-intrinsics-in-net-core/: Hardware Intrinsics in .NET Core - https://habr.com/en/post/435840/: A small overview of SIMD in .NET/C# Listen to and download episodes on our site: https://dotnetmore.ru/podcast/25-anniversary/ Don't forget to leave comments: https://vk.com/dotnetmore?w=wall-175299940_210
Nation-state adversaries have shown the ability to disrupt critical infrastructure through cyber-attacks targeting systems of networked, embedded computers. This knowledge raises concern that space systems could face similar threats. This project will research and develop moving target defense algorithms that add cyber resilience to space systems by improving their ability to withstand cyber-attacks. Most proposed cyber resilience solutions focus on or require detection of threats (e.g. anomaly detection, AI, data analytics) before mitigative actions can be taken, a significant technical challenge. Our novel approach avoids this requirement while creating informational asymmetry that favors defenders over attackers. We hypothesize that moving target defenses (MTD) can create dynamic, uncertain environments on space systems and be used to defeat cyber threats against these systems. About the speaker: Dr. Chris Jenkins is a principal member of technical staff at Sandia National Laboratories in Albuquerque, NM. His primary responsibility focuses on cybersecurity. Under the cybersecurity umbrella, he focuses on two areas. First, he conducts assessments for a variety of government customers by analyzing devices and systems for vulnerabilities and design flaws. Second, he leads a moving target defense (MTD) research project that looks to build cyber resiliency into the design of non-IP based networks. For example, his current research seeks to dynamically change the addresses of devices on a non-IP bus, whereby adversaries have difficulty attacking nodes on the bus.
In addition, he works on a high-performance computing (HPC) project called qthreads, a general-purpose multithreading library for HPC systems. He plans to port the library to the ASTRA supercomputer purchased by the Department of Energy. This supercomputer differs in that it does not use x86 CPUs; instead, it uses ARM processors based on the ARMv8 architecture. Chris received his bachelor's degree in computer engineering from the University of Illinois at Urbana-Champaign. He finished his PhD at the University of Wisconsin-Madison, focusing on accelerating cryptographic algorithms using SIMD execution units on a software-defined radio DSP.
Recorded at Øredev 2018, Fredrik talks to Steve Klabnik about Rust and Webassembly. We talk a lot about error messages, based on Steve’s talk on how Rust handles and displays error messages. We discuss Rust’s error message thinking and handling, but also error messages more in general, such as how to think in order to produce error messages that both developers and end users have a chance of understanding. Steve explains how and why the Rust compiler is switching from a pass-based compilation approach to a query-based approach to better facilitate partial recompilation upon smaller code changes. We also talk about Rust 2018, how Rust puts out new releases, and what major features are on their way. We then switch to talking about Webassembly. We discuss how Webassembly is moving along, among other things how it is getting better at playing well with others, enabling people to rely on Webassembly code without necessarily even needing to know about it. Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @iskrig and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi.
Links Steve Klabnik Steve was also in episode 245, talking about Rust, why the lucky stiff, and a lot more Mozilla Rust Steve’s presentation about error messages in Rust Steve’s second presentation, about Webassembly Rust’s Github label for diagnostics/confusing error messages ICE - internal compiler error AST - abstract syntax tree IR - intermediate representation Linkchecker The Rust book Rust by example Async/await for Rust Webassembly Emscripten Wasmpack - bundles Webassembly code as a npm package - and puts it on npm Spectre and Meltdown The host bindings proposal The DOM Wasm-bindgen Polyfill Ethereum’s work with Webassembly SIMD - Single instruction multiple data SIMD-support in Webassembly webassembly.org The Webassembly spec C and C++ through Emscripten Blazor - C# to Webassembly Yes, there was a talk about Blazor by Steve Sanderson Spidermonkey - Mozilla’s Javascript engine Titles Something that should not be an afterthought Hard actual work What messages to give or how to give them Any error message that’s confusing is a bug Git blame always returns your own name The internal deadline is tomorrow The harder problem The real test of being usable More useful to more people Broader than just the DOM A host can do these things The design is sort of not there We need more teachers and explainers
We take a look at the amazing abilities of the Apollo Guidance Computer and Jim breaks down everything you need to know about the ZFS ARC. Plus an update on ZoL SIMD acceleration, your feedback, and an interesting new neuromorphic system from Intel.
In our eleventh episode we talk with Gerrit about Python in science. This time the topics were publishing code, typesetting code in publications, and code golf. It was a bit warm in the conservatory, but if Auphonic manages to filter out the fan noise, at least the audio quality should be fine again this time. Speaking of audio quality, one of the speakers had a worse headset than the others. Can you hear which one? I'd be curious whether you can hear it at all... Shownotes Our email for questions, suggestions & comments: hallo@python-podcast.de News from the scene PyOxidizer Russell Keith-Magee - Keynote - PyCon 2019 PyRun - also works with 3.7 Jessica Garson - Making Music with Python, SuperCollider and FoxDot - PyCon 2019 Jordan Adler, Joe Gordon - Migrating Pinterest from Python2 to Python3 - PyCon 2019 Codegolf Code Golf Stack Exchange LSD Radix Python in science Differential equations SIMD Efficiently and easily integrating differential equations with JiTCODE, JiTCDDE, and JiTCSDE - JiTCODE, JiTCDDE, JiTCSDE SymPy SageMath MATLAB GNU Octave Cython arXiv gnuplot Altair Picks NumPy Data Classes Per object permissions for Django Bandit is a tool designed to find common security issues in Python code Public tag on konektom
Jim and Wes are joined by OpenZFS developer Richard Yao to explain why the recent drama over Linux kernel 5.0 is no big deal, and how his fix for the underlying issue might actually make things faster. Plus the nitty-gritty details of vectorized optimizations and kernel preemption, and our thoughts on the future of the relationship between ZFS and Linux. Special Guest: Richard Yao.
Rob and Jason are joined by Jeff Amstutz to discuss SIMD and SIMD wrapper libraries. Jeff is a Software Engineer at Intel, where he leads the open source OSPRay project. He enjoys all things ray tracing, high performance and heterogeneous computing, and code carefully written for human consumption. Prior to joining Intel, Jeff was an HPC software engineer at SURVICE Engineering where he worked on interactive simulation applications for the U.S. Army Research Laboratory, implemented using high performance C++ and CUDA. News Freestanding in San Diego Getting Started Qt with WebAssembly Trip Report: Fall ISO C++ standards meeting (San Diego) Jeff Amstutz @jeffamstutz Links CppCon 2018: Jefferson Amstutz "Compute More in Less Time Using C++ SIMD Wrapper Libraries" tsimd - Fundamental C++ SIMD types for Intel CPUs (sse to avx512) OSPRay Sponsors Download PVS-Studio We Checked the Android Source Code by PVS-Studio, or Nothing is Perfect JetBrains Hosts @robwirving @lefticus
El mundo en el que nos desenvolvemos está cambiado de forma radical en los últimos tiempos. Los sistemas informáticos no hacen más que conquistar cimas que hace tan solo unas décadas parecían patrimonio exclusivo de la inteligencia humana. Los grandes campeones mundiales de ajedrez, primero, y ahora los de Go, ya no son seres humanos sino programas de ordenador. No obstante, ellos son solamente la cabeza más mediática de todo un proceso que abarca casi todas las facetas del conocimiento. Los ordenadores han dejado de ser meras calculadoras para convertirse en entes con capacidad para aprender por sí mismos, pensar de una manera similar a como nosotros lo hacemos y encontrar con inteligencia caminos novedosos y soluciones que la mente humana ni siquiera había sospechado. Hoy hablamos de estas cosas con José Antonio Gámez Martín, director del Grupo de Sistemas Inteligentes y Minería de Datos (SIMD) de la UCLM.
Arcan and OpenBSD, running OpenBSD 6.3 on RPI 3, why C is not a low-level language, HardenedBSD switching back to OpenSSL, how the Internet was almost broken, EuroBSDcon CfP is out, and the BSDCan 2018 schedule is available. Headlines Towards Secure System Graphics: Arcan and OpenBSD Let me preface this by saying that this is a (very) long and medium-rare technical article about the security considerations and minutiae of porting (most of) the Arcan ecosystem to work under OpenBSD. The main point of this article is not so much flirting with the OpenBSD crowd or adding further noise to software engineering topics, but to go through the special considerations that had to be taken, as notes to anyone else who decides to go down this overgrown and lonesome trail, or who is curious about some less than obvious differences between how these things “work” on Linux vs. other parts of the world. A disclaimer is also that most of this has been discovered by experimentation and by combining bits and pieces scattered in everything from Xorg code to man pages; there may be smarter ways to solve some of the problems mentioned – this is just the best I could find within the time allotted. I’d be happy to be corrected, in patch/pull request form that is 😉 Each section will start with a short rant-like explanation of how it works in Linux, and what the translation to OpenBSD involved or, in the cases that are still partly or fully missing, will require. The topics that will be covered this time are: Graphics Device Access Hotplug Input Backlight Xorg Pledging Missing Installing OpenBSD 6.3 (snapshots) on Raspberry Pi 3 The Easy way Installing OpenBSD on the Raspberry Pi 3 is very easy and well documented, which almost convinced me not to write about it, but I still felt it might help somebody new to the project (but again, I really recommend reading the documentation if you are interested and have the time). Note: I'm always running snapshots and recommend anybody to do so as well.
But the snapshots links will change to the next version every 6 months, so I changed the links to the 6.3 version to keep the blog post valid over time. If you're familiar with the OpenBSD flavors, feel free to use the snapshots links instead. Requirements Due to the lack of a driver, OpenBSD cannot boot directly from the SD card yet, so we'll need a USB stick as the installation target alongside the SD card for the U-Boot and installer. Also, a serial console connection is required. I used a PL2303 USB to Serial (TTL) adapter connected to my laptop via USB port and connected to the Raspberry via the TX, RX and GND pins. iXsystems https://www.ixsystems.com/blog/truenas-m-series-veeam-pr-2018/ Why Didn’t Larrabee Fail? Every month or so, someone will ask me what happened to Larrabee and why it failed so badly. And I then try to explain to them that not only didn't it fail, it was a pretty huge success. And they are understandably very puzzled by this, because in the public consciousness Larrabee was like the Itanic and the SPU rolled into one, wasn't it? Well, not quite. So rather than explain it in person a whole bunch more times, I thought I should write it down. This is not a history, and I'm going to skip a TON of details for brevity. One day I'll write the whole story down, because it's a pretty decent escapade with lots of fun characters. But not today. Today you just get the very start and the very end. When I say "Larrabee" I mean all of Knights, all of MIC, all of Xeon Phi, all of the "Isle" cards - they're all exactly the same chip and the same people and the same software effort. Marketing seemed to dream up a new codeword every week, but there were only ever three chips: Knights Ferry / Aubrey Isle / LRB1 - mostly a prototype, had some performance gotchas, but did work, and shipped to partners. Knights Corner / Xeon Phi / LRB2 - the thing we actually shipped in bulk. Knights Landing - the new version that is shipping any day now (mid 2016). That's it.
There were some other codenames I've forgotten over the years, but they're all of one of the above chips. Behind all the marketing smoke and mirrors there were only three chips ever made (so far), and only four planned in total (we had a thing called LRB3 planned between KNC and KNL for a while). All of them are "Larrabee", whether they do graphics or not. When Larrabee was originally conceived back in about 2005, it was called "SMAC", and its original goals were, from most to least important: Make the most powerful flops-per-watt machine for real-world workloads using a huge array of simple cores, on systems and boards that could be built into bazillo-core supercomputers. Make it from x86 cores. That means memory coherency, store ordering, memory protection, real OSes, no ugly scratchpads, it runs legacy code, and so on. No funky DSPs or windowed register files or wacky programming models allowed. Do not build another Itanium or SPU! Make it soon. That means keeping it simple. Support the emerging GPGPU market with that same chip. Intel were absolutely not going to build a 150W PCIe card version of their embedded graphics chip (known as "Gen"), so we had to cover those programming models. As a bonus, run normal graphics well. Add as little graphics-specific hardware as you can get away with. That ordering is important - in terms of engineering and focus, Larrabee was never primarily a graphics card. If Intel had wanted a kick-ass graphics card, they already had a very good graphics team begging to be allowed to build a nice big fat hot discrete GPU - and the Gen architecture is such that they'd build a great one, too. But Intel management didn't want one, and still doesn't. But if we were going to build Larrabee anyway, they wanted us to cover that market as well. ... 
the design of Larrabee was of a CPU with a very wide SIMD unit, designed above all to be a real grown-up CPU - coherent caches, well-ordered memory rules, good memory protection, true multitasking, real threads, runs Linux/FreeBSD, etc. Larrabee, in the form of KNC, went on to become the fastest supercomputer in the world for a couple of years, and it's still making a ton of money for Intel in the HPC market that it was designed for, fighting very nicely against the GPUs and other custom architectures. Its successor, KNL, is just being released right now (mid 2016) and should do very nicely in that space too. Remember - KNC is literally the same chip as LRB2. It has texture samplers and a video out port sitting on the die. They don't test them or turn them on or expose them to software, but they're still there - it's still a graphics-capable part. But it's still actually running FreeBSD on that card, and under FreeBSD it's just running an x86 program called DirectXGfx (248 threads of it). News Roundup C Is Not a Low-level Language : Your computer is not a fast PDP-11. In the wake of the recent Meltdown and Spectre vulnerabilities, it's worth spending some time looking at root causes. Both of these vulnerabilities involved processors speculatively executing instructions past some kind of access check and allowing the attacker to observe the results via a side channel. The features that led to these vulnerabilities, along with several others, were added to let C programmers continue to believe they were programming in a low-level language, when this hasn't been the case for decades. Processor vendors are not alone in this. Those of us working on C/C++ compilers have also participated. What Is a Low-Level Language? Computer science pioneer Alan Perlis defined low-level languages this way: "A programming language is low level when its programs require attention to the irrelevant." 
While, yes, this definition applies to C, it does not capture what people desire in a low-level language. Various attributes cause people to regard a language as low-level. Think of programming languages as belonging on a continuum, with assembly at one end and the interface to the Starship Enterprise's computer at the other. Low-level languages are "close to the metal," whereas high-level languages are closer to how humans think. For a language to be "close to the metal," it must provide an abstract machine that maps easily to the abstractions exposed by the target platform. It's easy to argue that C was a low-level language for the PDP-11. They both described a model in which programs executed sequentially, in which memory was a flat space, and even the pre- and post-increment operators cleanly lined up with the PDP-11 addressing modes. Fast PDP-11 Emulators The root cause of the Spectre and Meltdown vulnerabilities was that processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11. This is essential because it allows C programmers to continue in the belief that their language is close to the underlying hardware. C code provides a mostly serial abstract machine (until C11, an entirely serial machine if nonstandard vendor extensions were excluded). Creating a new thread is a library operation known to be expensive, so processors wishing to keep their execution units busy running C code rely on ILP (instruction-level parallelism). They inspect adjacent operations and issue independent ones in parallel. This adds a significant amount of complexity (and power consumption) to allow programmers to write mostly sequential code. In contrast, GPUs achieve very high performance without any of this logic, at the expense of requiring explicitly parallel programs. The quest for high ILP was the direct cause of Spectre and Meltdown. 
A modern Intel processor has up to 180 instructions in flight at a time (in stark contrast to a sequential C abstract machine, which expects each operation to complete before the next one begins). A typical heuristic for C code is that there is a branch, on average, every seven instructions. If you wish to keep such a pipeline full from a single thread, then you must guess the targets of the next 25 branches. This, again, adds complexity; it also means that an incorrect guess results in work being done and then discarded, which is not ideal for power consumption. This discarded work has visible side effects, which the Spectre and Meltdown attacks could exploit. On a modern high-end core, the register rename engine is one of the largest consumers of die area and power. To make matters worse, it cannot be turned off or power gated while any instructions are running, which makes it inconvenient in a dark silicon era when transistors are cheap but powered transistors are an expensive resource. This unit is conspicuously absent on GPUs, where parallelism again comes from multiple threads rather than trying to extract instruction-level parallelism from intrinsically scalar code. If instructions do not have dependencies that need to be reordered, then register renaming is not necessary. Consider another core part of the C abstract machine's memory model: flat memory. This hasn't been true for more than two decades. A modern processor often has three levels of cache in between registers and main memory, which attempt to hide latency. The cache is, as its name implies, hidden from the programmer and so is not visible to C. Efficient use of the cache is one of the most important ways of making code run quickly on a modern processor, yet this is completely hidden by the abstract machine, and programmers must rely on knowing implementation details of the cache (for example, two values that are 64-byte-aligned may end up in the same cache line) to write efficient code. 
Backup URL Hacker News Commentary HardenedBSD Switching Back to OpenSSL Over a year ago, HardenedBSD switched to LibreSSL as the default cryptographic library in base for 12-CURRENT. 11-STABLE followed suit later on. Bernard Spil has done an excellent job at keeping our users up-to-date with the latest security patches from LibreSSL. After recently updating 12-CURRENT to LibreSSL 2.7.2 from 2.6.4, it has become increasingly clear to us that performing major upgrades requires a team larger than a single person. Upgrading to 2.7.2 caused a lot of fallout in our ports tree. As of 28 Apr 2018, several ports we consider high priority are still broken. As it stands right now, it would take Bernard a significant amount of his spare personal time to fix these issues. Until we have a multi-person team dedicated to maintaining LibreSSL in base along with the patches required in ports, HardenedBSD will use OpenSSL going forward as the default cryptographic library in base. LibreSSL will co-exist with OpenSSL in the source tree, as it does now. However, MK_LIBRESSL will default to "no" instead of the current "yes". Bernard will continue maintaining LibreSSL in base along with addressing the various problematic ports entries. To provide our users with ample time to plan and perform updates, we will wait a period of two months prior to making the switch. The switch will occur on 01 Jul 2018 and will be performed simultaneously in 12-CURRENT and 11-STABLE. HardenedBSD will archive a copy of the LibreSSL-centric package repositories and binary updates for base for a period of six months after the switch (expiring the package repos on 01 Jan 2019). This essentially gives our users eight full months for an upgrade path. As part of the switch back to OpenSSL, the default NTP daemon in base will switch back from OpenNTPD to ISC NTP. Users who have openntpd_enable="YES" set in rc.conf will need to switch back to ntpd_enable="YES".
Users who build base from source will want to fully clean their object directories. Any and all packages that link with libcrypto or libssl will need to be rebuilt or reinstalled. With the community's help, we look forward to the day when we can make the switch back to LibreSSL. We at HardenedBSD believe that providing our users options to rid themselves of software monocultures can better increase security and manage risk. DigitalOcean http://do.co/bsdnow -- $100 credit for 60 days How Dan Kaminsky Almost Broke the Internet In the summer of 2008, security researcher Dan Kaminsky disclosed how he had found a huge flaw in the Internet that could let attackers redirect web traffic to alternate servers and disrupt normal operations. In this Hacker History video, Kaminsky describes the flaw and notes the issue remains unfixed. "We were really concerned about web pages and emails 'cause that's what you get to compromise when you compromise DNS," Kaminsky says. "You think you're sending an email to IBM but it really goes to the bad guy." As the phone book of the Internet, DNS translates easy-to-remember domain names into IP addresses so that users don't have to remember strings of numbers to reach web applications and services. Authoritative nameservers publish the IP addresses of domain names. Recursive nameservers talk to authoritative servers to find addresses for those domain names and save the information into their caches to speed up the response time the next time they are asked about that site. While anyone can set up a nameserver and configure an authoritative zone for any site, if recursive nameservers don't point to it to ask questions, no one will get those wrong answers. Kaminsky found a fundamental design flaw in DNS that made it possible to inject incorrect information into the nameserver's cache, an attack known as DNS cache poisoning.
In this case, if an attacker crafted DNS queries looking for sibling names to existing domains, such as 1.example.com, 2.example.com, and 3.example.com, while claiming to be the official "www" server for example.com, the nameserver would save that server's IP address for "www" in its cache. "The server will go, 'You are the official. Go right ahead. Tell me what it's supposed to be,'" Kaminsky says in the video. Since the issue affected nearly every DNS server on the planet, it required a coordinated response to address it. Kaminsky informed Paul Vixie, creator of several DNS protocol extensions and applications, and Vixie called an emergency summit of major IT vendors at Microsoft's headquarters to figure out what to do. The "fix" involved combining the 16-bit transaction identifier that DNS lookups used with randomized UDP source ports to create an effective 32-bit identifier. Instead of fixing the flaw so that it couldn't be exploited, the resolution focused on making the attack take more than ten seconds, eliminating the instantaneous attack. "[It's] not like we repaired DNS," Kaminsky says. "We made the Internet less flammable." DNSSEC (Domain Name System Security Extensions) is intended to secure DNS by adding a cryptographic layer to DNS information. The root zone of the internet was signed for DNSSEC in July 2010, and the .com Top Level Domain (TLD) was finally signed for DNSSEC in April 2011. Unfortunately, adoption has been slow, even ten years after Kaminsky first raised the alarm about DNS, as less than 15 percent of users pass their queries to DNSSEC-validating resolvers. The Internet was never designed to be secure. The Internet was designed to move pictures of cats. No one expected the Internet to be used for commerce and critical communications. If people lose faith in DNS, then all the things that depend on it are at risk. "What are we going to do? Here is the answer. Some of us gotta go out fix it," Kaminsky says.
OpenIndiana Hipster 2018.04 is here We have released a new OpenIndiana Hipster snapshot, 2018.04. Notable changes: Userland software is rebuilt with GCC 6. KPTI was enabled to mitigate recent security issues in Intel CPUs. Support for the GNOME 2 desktop was removed. Linked images now support the zoneproxy service. MATE desktop applications are delivered as 64-bit only. UPower support was integrated. IIIM was removed. More information can be found in the 2018.04 release notes, and new installation media can be downloaded from http://dlc.openindiana.org. Beastie Bits EuroBSDCon - Call for Papers OpenSSH 7.7 pkgsrc-2018Q1 released BSDCan Schedule Michael Dexter's LFNW talk Tarsnap ad Feedback/Questions Bob - Help locating FreeBSD Help Alex - Convert directory to dataset Adam - FreeNAS Question Florian - Three Questions Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv iX Ad spot: iXsystems TrueNAS M-Series Blows Away Veeam Backup Certification Tests
We chat about wasm-pack, SIMD, IntelliJ, VSCode, cargo src, hackfests, rustfmt, and redox.
Panel: Charles Max Wood Cory House Aimee Knight Special Guests: Ben Titzer In this episode, the JavaScript Jabber panelists discuss WebAssembly and JavaScript with Ben Titzer. Ben is a JavaScript VM engineer and is on the V8 team at Google. He was one of the co-inventors of WebAssembly and he now works on VM engineering as well as other things for WebAssembly. They talk about how WebAssembly came to be and when it would be of most benefit to you in your own code. In particular, we dive pretty deep on: Ben intro JavaScript Co-inventor of WebAssembly (Wasm) Joined V8 in 2014 asm.js Built a JIT compiler to make asm.js faster TurboFan What is the role of JavaScript? What is the role of WebAssembly? SIMD.js JavaScript is not a statically typed language Adding SIMD to Wasm was easier Easy to add things to Wasm Will JavaScript benefit? Using JavaScript with Wasm pros and cons Pros to compiling with Wasm Statically typed languages The more statically typed you are, the more you will benefit from Wasm TypeScript Is WebAssembly headed towards being used in daily application? Rust is investing heavily in Wasm WebAssembly in gaming And much, much more! Links: JavaScript V8 WebAssembly asm.js TurboFan TypeScript Rust WebAssembly GitHub Ben’s GitHub Picks: Charles Ready Player One Movie DevChat.tv YouTube Alexa Flash Briefings: Add skill for “JavaScript Rants” Cory npm Semantic Version Calculator Kent Beck Tweet Aimee MDN 418 Status code Quantity Always Trumps Quality blog post Ben American Politics
Paths and matches and SIMD, cargo new changes, and tons of community-driven learning materials! Show Notes Rust 1.25.0 blog post RFC #1358 – #[repr(align)] RFC #2325 – SIMD stabilization RustConf CFP Hello Rust “Functional and Concurrent Programming in Rust” Sponsors Aaron Turon Alexander Payne Anthony Deschamps Anthony Scotti Antonin Carette Aleksey Pirogov Andreas Fischer Andrew Thompson Austin LeSure Behnam Esfahbod Benjamin Wasty Brent Vatne Brian Casiello Chap Lovejoy Charlie Egan Chris Jones Chris Palmer Coleman McFarland Dan Abrams Daniel Collin Daniel P. Clark David W. Allen David Hewson Derek Buckley Derek Morr Eugene Bulkin [Hans Fjällemark] Henri Sivonen Ian Jones Jakub “Limeth” Hlusička James Cooper Jerome Froelich John Rudnick Jonathan Turner Jupp Müller Justin Ossevoort Karl Hobley Keith Gray Kilian Rault Laurie Hedge Luca Schmid Luiz Irber Mark LeMoine Masashi Fujita Matt Rudder Matthew Brenner Matthias Ruszala Max Jacobson Messense Lv Micael Bergeron Nathan Sculli Nick Stevens Oluseyi Sonaiya Ovidiu Curcan Pascal Hertleif Patrick O’Doherty [Paul Naranja] Peter Tillemans Ralph Giles (“rillian”) Raj Venkalil Ramon Buckley Randy MacLeod Raph Levien reddraggone9 Ryan Blecher Sebastián Ramírez Magrí Shane Utt Simon G. Steve Jenson Steven Knight Steven Murawski Stuart Hinson Tim Brooks Timm Preetz Tom Prince Ty Overby Tyler Harper Vesa Kaihlavirta Victor Kruger Will Greenberg William Roe Yaacov Finkelman Zachary Snyder Zaki (Thanks to the couple people donating who opted out of the reward tier, as well. You know who you are!) Become a sponsor Patreon Venmo Dwolla Cash.me Flattr PayPal.me Contact New Rustacean: Twitter: @newrustacean Email: hello@newrustacean.com Chris Krycho GitHub: chriskrycho Twitter: @chriskrycho
We chat about Rust 1.24.1, the 2018 roadmap, compile times, SIMD, and Pathfinder.
We chat about SIMD, WebAssembly for performance, the embedded working group, the Rust+WebAssembly working group, and the return of the Servo newsletter.
We chat about new teams, being humble, SIMD, being special, quicktype, and deps.rs. “A Memory Away” by Tanner Helland is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. Permissions beyond the scope of this license may be obtained here.
Everything you wanted to know about OpenBSD — and much more — awaits you in episode 69 of SDCast! My guest is Mikhail (Misha) Belopukhov, an OpenBSD developer. Misha began by telling how he first got acquainted with OpenBSD, how he started studying operating systems, and how his interest unexpectedly turned into well-paid work :) Misha has adapted OpenBSD to run on a variety of hardware and in a variety of environments, including virtualized ones, and has ported many device drivers in the process. He shared interesting stories from his porting experience: how various drivers work, and the mechanisms for interacting with hardware and with the OS kernel. We also discussed the OpenBSD operating system as a whole: how it is organized, the principles it is built on, and how the kernel, drivers, and user-space code work. We covered questions of security and hardening, both of the OS itself and of application code running on it. OpenBSD is known for its slogan "Secure by Default" and for the great attention it pays to security. Misha described the various security subsystems used in OpenBSD, such as: * KARL (Kernel Address Randomized Link), randomization of the kernel address space * ASLR (address space layout randomization) * strlcpy() and strlcat(), nonstandard functions created as replacements for frequently misused standard-library counterparts * fork+exec, PIE, pledge, and others. We also talked separately about cryptographic algorithms, ways of implementing them using the capabilities of modern processors such as SIMD, and their use in SSH and SSL.
Links to resources on the episode's topics: * Mikhail's talk "Implementation of Xen PVHVM drivers in OpenBSD" from BSDCan (video (https://www.youtube.com/watch?v=GWwhgIPdKH0), slides (https://www.openbsd.org/papers/bsdcan2016-xen.pdf)) * Theo de Raadt's talk on pledge from EuroBSDCon 2017 (video (https://www.youtube.com/watch?v=FzJJbNRErVQ), slides (https://www.openbsd.org/papers/eurobsdcon2017-pledge.pdf)) * Theo de Raadt's talk "arc4random - randomization for all occasions" from Hackfest 2014 (video (https://www.youtube.com/watch?v=aWmLWx8ut20), slides (https://www.openbsd.org/papers/hackfest2014-arc4random/index.html)) * Ilja van Sprundel's talk "Are all BSDs created equally? A survey of BSD kernel vulnerabilities" from DEF CON (video (https://www.youtube.com/watch?v=1j1UaLsPv3k), slides (https://media.defcon.org/DEF%20CON%2025/DEF%20CON%2025%20presentations/DEFCON-25-Ilja-van-Sprundel-BSD-Kern-Vulns.pdf)) * An article comparing the security of OpenBSD and FreeBSD (https://networkfilter.blogspot.ru/2014/12/security-openbsd-vs-freebsd.html) * Slides "Security features in the OpenBSD operating system (https://homepages.laas.fr/matthieu/talks/min2rien-openbsd.pdf)" by Matthieu Herrb * Description of ASLR by the PaX Team (https://pax.grsecurity.net/docs/aslr.txt) * The article "KASLR: An Exercise in Cargo Cult Security" (https://forums.grsecurity.net/viewtopic.php?f=7&t=3367&sid=c757c2f8e8db817dabb7b7c501156fc0) by Brad "spender" Spengler * Video of Mikhail's talk "OpenBSD: Where is crypto headed?" (https://events.yandex.ru/lib/talks/1489/) * The post "AES timing attacks on OpenSSL (https://access.redhat.com/blogs/766093/posts/1976303)" from Red Hat * The whitepaper "Cache Games – Bringing Access-Based Cache Attacks on AES to Practice (https://eprint.iacr.org/2010/594.pdf)" * 130+ vulnerabilities in tcpdump (https://www.cvedetails.com/vulnerability-list/vendor_id-6197/Tcpdump.html) * The book "The Design and Implementation of the 4.4BSD Operating System" by Marshall Kirk McKusick et al.
The second chapter is available for free (https://www.freebsd.org/doc/en/books/design-44bsd/index.html). Enjoyed the episode? Support the podcast on patreon.com/KSDaemon (https://www.patreon.com/KSDaemon), or with a retweet, a post, or simply by telling your friends!
This week on Practical Chrome we review the Acer C720P. The C720, which we all know and love as a fast, light, Intel-powered performer, gets just a little better with this P variant. We look at the differences, how it stands apart, and why these changes are touching. That's right, we have our hands on a touch model at sub-$350, and it is amazing! Also in the news, Intel is bringing SIMD to
A talk about the SIMD approach (SSE, AVX).
As determined and increasingly sophisticated adversaries multiply, the need for confidence in the integrity of deployed computing devices grows with them. Given their ubiquitous connectivity, substantial storage, and accessibility, our increasing reliance on computer platforms makes them a substantial target for attackers. Over the past decade, malware has transitioned from attacking a single program to subverting the OS kernel by means of what is known as a rootkit. While computer systems require patches to fix newly discovered vulnerabilities, undiscovered vulnerabilities potentially remain. Signature-based schemes seek to detect malware with a known signature or digital fingerprint. Signature-less schemes seek to detect anomalies within the computer system by understanding normal behavior. Both architectures are typically built on top of existing solutions or paradigms. Furthermore, these solutions tend to rely on mechanisms that operate within the OS; if the OS becomes compromised, these mechanisms may be vulnerable to deactivation. We propose an approach to designing computer systems that inherently decouples the function of the computer system from its security specification. Instead of preventing and detecting malware attacks by patching code or using signatures (though we can use them as well), our proposed approach focuses on the policy specification of the system and possible graceful degradation of functionality, according to the policy, as anomalies of security concern are detected. We believe this innovative paradigm uses existing technologies in a novel manner to determine the integrity level of the system. Based on the integrity level, the system may behave differently and/or limit access to data available at a given integrity level. About the speaker: Dr. Chris Jenkins is a senior member of technical staff at Sandia National Laboratories in Albuquerque, NM.
His primary responsibility focuses on researching new computing paradigms for mitigating compromise (malware) in current computing systems. He seeks to find ways to move beyond detection and prevention of malware and rootkits. Specifically, he concentrates on how to design systems that operate in a compromised state while maintaining availability and basic functionality. For decades, computer systems have been designed around the OS/app two-domain model. He has proposed a different model that attempts to bridge the old model to a new, proposed four-domain model. The current prototype highlights a potential framework for achieving this goal, utilizing technologies ranging from low-level virtualization techniques to computer security policy specification at a high level. Additionally, he taught a mini-course entitled Virtualization on ARM at Sandia. His current career aspiration emphasizes finding different ways to utilize next-generation processors and platforms to solve current and future cyber-security challenges. Chris received his bachelor's degree in computer engineering from the University of Illinois at Urbana-Champaign. He finished his PhD at the University of Wisconsin-Madison, focusing on accelerating cryptographic algorithms utilizing SIMD execution units on a software-defined radio DSP.
This talk traces the development of the different types of instruction-level parallelism that have been incorporated into the hardware of processors from the early 1960s to the present day. We will see how the use of parallel function units in the CDC 6600 eventually led to the design of the Cray-1 with its vector processing instructions and superscalar processors such as the Alpha. The talk also covers Very Long Instruction Word systems and SIMD systems. The talk is accompanied by visual demonstrations of the activities that occur within processors using simulation models developed using the HASE computer architecture simulation environment. Links: Slides - Talk slides