PING is a podcast for people who want to look behind the scenes into the workings of the Internet. Each fortnight we chat with the people who have built the Internet and are working to improve its health. The views expressed by the featured speakers are their own and do not necessarily reflect the views of APNIC.
In this episode of PING, APNIC's Chief Scientist, Geoff Huston, revisits changes underway in how the Domain Name System (DNS) delegates authority over a given zone and how resolvers discover the new authoritative sources. We last explored this in March 2024. In DNS, the word 'domain' refers to a scope of authority. Within a domain, everything is governed by its delegated authority. While that authority may only directly manage its immediate subdomains (children), its control implicitly extends to all subordinate levels (grandchildren and beyond). If a parent domain withdraws delegation from a child, everything beneath that child disappears. Think of it like a Venn diagram of nested circles: being a subdomain means being entirely within the parent's scope. The issue lies in how this delegation is handled. It's done with nameserver (NS) records, which belong to both the child zone (where they are defined) and the parent zone (which must reference them). This becomes especially tricky with DNSSEC. The parent can't authoritatively sign the child's NS records because they are technically owned by the child, but if the child signs them, it breaks the trust chain from the parent. Another complication is the emergence of third parties, acting for the delegate, who actually operate the machinery of the DNS. We need mechanisms that give them permission to change the operational aspects of a delegation without handing over all the keys a delegate holds for their domain name. A new activity has been spun up in the IETF to address this delegation problem by creating a new kind of DNS record, the DELEG record, which is proposed to follow the Service Binding model defined in RFC 9460. Exactly how this works and what it means for the DNS is still up in the air. DELEG could fundamentally change how authoritative answers are discovered, how DNS messages are transported, and how intermediaries interact with the DNS ecosystem. In the future, significant portions of DNS traffic might flow over new protocols, introducing novel behaviours in the relationships between resolvers and authoritative servers.
In this episode of PING, Professor Cristel Pelsser, who holds the chair of critical embedded systems at UCLouvain, discusses her work measuring BGP, and in particular the system described in the 2024 SIGCOMM best paper award-winning research, "The Next Generation of BGP Data Collection Platforms". Cristel and her collaborators Thomas Alfroy, Thomas Holterbach, Thomas Krenc and K. C. Claffy have built a system they call GILL, available on the web at https://bgproutes.io. This work also features a new service called MVP, which helps find the "most valuable vantage point" in the BGP collection system for your particular needs. GILL has been designed for scale and will be capable of encompassing thousands of peerings. It also takes an innovative approach to holding BGP data, focussed on the removal of demonstrably redundant information, and therefore achieves significantly higher compression of the data stream compared to, for example, holding raw MRT files. The MVP system exploits machine learning methods to aid in selecting the most advantageous data collection point for a researcher's specific needs. Applying ML methods here permits a significant amount of data to be managed, with changes reflected in the selection of vantage points. Their system has already been able to support DFOH, an approach to finding forged-origin attacks by comparing the peering relationships seen in BGP against the peering expected from both location and the declarations of intent inside systems like PeeringDB.
In this episode of PING, APNIC's Chief Scientist, Geoff Huston, discusses the history and emerging future of how Internet protocols get more than the apparent link bandwidth by using multiple links and multiple paths. Initially, the model was quite simple, capable of handling up to four links of equal cost and delay reasonably well, typically to connect two points together. At the time, the Internet was built on telecommunications services originally designed for voice networks, with cabling laid between exchanges, from exchanges to customers, or across continents. This straightforward technique allowed the Internet to expand along available cable or fibre paths between two points. However, as the system became more complex, new path options emerged and bandwidth demands grew beyond the capacity of individual or even equal-cost links, so increasingly sophisticated methods for managing these connections had to be developed. An interesting development at the end of this process is the impact of a fully encrypted transport layer on the intervening infrastructure's ability to manage traffic distribution across multiple links. With encryption obscuring the contents of the dataflow, traditional methods for intelligently splitting traffic become less effective. Randomly distributing data can often worsen performance, as modern techniques rely on protocols like TCP to sustain high-speed flows by avoiding data misordering and packet loss. This episode of PING explores how Internet protocols boost bandwidth by using multiple links and paths, and how secure transport layers affect this process.
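The "intelligent splitting" in question is usually per-flow hashing. The sketch below (Python, with made-up link names and addresses, purely as an illustration) shows the idea: packets from one flow always hash to the same link, precisely to avoid the misordering mentioned above.

```python
import hashlib

LINKS = ["link-0", "link-1", "link-2", "link-3"]  # up to four equal-cost links

def pick_link(src_ip, dst_ip, proto, src_port, dst_port):
    """Classic ECMP-style splitting: hash the 5-tuple so one flow always takes one link."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return LINKS[int.from_bytes(digest[:4], "big") % len(LINKS)]

# Two flows between the same hosts (documentation addresses) may land on different links
print(pick_link("192.0.2.1", "198.51.100.7", "tcp", 51514, 443))
print(pick_link("192.0.2.1", "198.51.100.7", "tcp", 51515, 443))
```

With a fully encrypted transport the outer 5-tuple is generally still visible, but anything finer-grained than a flow, or any attempt to split one large flow intelligently, has much less information to work with.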
Last month, during APRICOT 2025 / APNIC 59, the Internet Society hosted its first Pulse Internet Measurement Forum (PIMF). PIMF brings together people interested in Internet measurement from a wide range of perspectives, from technical details to policy, governance, and social issues. The goal is to create a space for open discussion, uniting both technologists and policy experts. In this second special episode of PING, we continue our break from the usual one-on-one podcast format and present a recap of why the PIMF forum was held, and the last three short interviews from the workshop. First we hear a repeat of Amreesh Phokeer's presentation. Amreesh is from the Internet Society and discusses his role in managing the Pulse activity within ISOC. Alongside Robbie Mitchell, Amreesh helped organise the forum, aiming to foster collaboration between measurement experts and policy professionals. Next we hear from Beau Gieskens, a Senior Software Engineer from APNIC Information Products. Beau has been working on the DASH system and discusses his PIMF presentation on a redesign to an event-sourcing model, which reduced database query load and improved the speed and scaling of the service. We then have Doug Madory from Kentik, who presented to PIMF on a quirk in how Internet Routing Registries (IRRs) are being used, which can cause massive costs in BGP filter configuration and is related to some recent route leaks seen at large in the default-free zone of BGP. Finally, we hear from Lia Hestina from the RIPE NCC Atlas project. Lia is the community development officer, and focusses on the Asia Pacific and Africa for the Atlas project. Lia discusses the Atlas system and how it underpins measurements worldwide, including ones discussed in the PIMF meeting. For more insights from PIMF, be sure to check out the Pulse Forum recording on the Internet Society YouTube feed.
In this episode of PING, APNIC's Chief Scientist, Geoff Huston, discusses the surprisingly vexed question of how to say 'no' in the DNS. This conversation follows a presentation by Shumon Huque at the recent DNS-OARC meeting; Shumon will be on PING in a future episode talking about another aspect of the DNS protocol. You would hope there is a simple, straightforward answer to the question, but as usual with the DNS, there are more complexities under the surface. The DNS must indicate whether the labels in the requested name do not exist, whether the specific record type is missing, or both. Sometimes it needs to state both pieces of information, while other times it only needs to state one. The problem is made worse by the constraints of signing answers with DNSSEC. There needs to be a way to say 'no' authoritatively while minimising the risk of leaking any other information. NSEC3 records are designed to limit this exposure by making it harder to enumerate an entire zone. Instead of explicitly listing 'before' and 'after' labels in a signed response denying a label's existence, NSEC3 uses hashed values to obscure them. In contrast, the simpler NSEC model reveals adjacent labels, allowing an attacker to systematically map out all existing names, a serious risk for domain registries that depend on name confidentiality. This is documented in RFC 7129. Saying 'no' with authority also raises the question of where signing occurs: at the zone's centre (by the zone holder) or at the edge (by the zone server). These approaches lead to different solutions, each with its own costs and consequences. In this episode of PING, Geoff explores the differences between a non-standard, vendor-explored solution, and the emergence of a draft standard for how to say 'no' properly.
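As an illustration of the hashing NSEC3 relies on, here is a minimal sketch of the owner-name hash from RFC 5155: SHA-1 over the wire-format name, iterated with a salt, then Base32hex-encoded. The zone name, salt and iteration count are made-up example values, not data from the episode.

```python
import base64
import hashlib

def name_to_wire(name: str) -> bytes:
    """Canonical (lowercase) DNS wire format: length-prefixed labels plus a root byte."""
    wire = b""
    for label in name.rstrip(".").lower().split("."):
        wire += bytes([len(label)]) + label.encode("ascii")
    return wire + b"\x00"

def nsec3_hash(name: str, salt_hex: str, iterations: int) -> str:
    """RFC 5155 hash: SHA-1 applied (iterations + 1) times, with the salt appended each time."""
    salt = bytes.fromhex(salt_hex)
    digest = name_to_wire(name)
    for _ in range(iterations + 1):
        digest = hashlib.sha1(digest + salt).digest()
    # NSEC3 owner names use the Base32hex alphabet (digits 0-9 then letters A-V)
    return base64.b32encode(digest).decode("ascii").translate(
        str.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
                      "0123456789ABCDEFGHIJKLMNOPQRSTUV"))

# Hypothetical zone and parameters, purely for illustration
print(nsec3_hash("example.org", "AABBCCDD", 0))
```

Because the signed denial lists the hashes either side of where the queried name would fall, rather than the neighbouring names themselves, walking the zone becomes a dictionary-guessing exercise rather than a simple enumeration.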
At the APRICOT/APNIC59 meeting held in Petaling Jaya, Malaysia last month, the Internet Society held its first PIMF meeting. PIMF, or the Pulse Internet Measurement Forum, is a gathering of people interested in Internet measurement in the widest possible sense, from technical information all the way to policy, governance and social questions. ISOC is interested in creating a space for that discussion to take place amongst the community, and in bringing both technologists and policy specialists into the same room. This time on PING, instead of the usual one-on-one format of podcast we've got five interviews from this meeting, and after the next episode from Geoff Huston at APNIC Labs we'll play a second part, with three more of the presenters from this session. First up we have Amreesh Phokeer from the Internet Society, who manages the Pulse activity in ISOC and, along with Robbie Mitchell, set up the meeting. Then we hear from Christoph Visser from IIJ Labs in Tokyo, who presented on his measurements of the "Steam" game distribution platform used by Valve Software to share games. It's a complex system of application-specific source selection, using multiple Content Distribution Networks (CDNs) to scale across the world, and it allows Christoph to see into link quality from a public API, with no extra measurements required, giving an insight into the gamer community and their experience of the Internet. The third interview is with Anand Raje, from AIORI-IMN, India's indigenous Internet measurement system. Anand leads a team which has built out a national measurement system using IoT "orchestration" methods to manage probes and anchors, in a virtual environment which permits them to run multiple independent measurement systems hosted inside their platform. After this there's an interview with Andrei Robachevsky from the Global Cyber Alliance (GCA). Andrei established the MANRS system and its platform, and nurtured the organisation into being inside ISOC. MANRS has now moved into the care of GCA and Andrei moved with it, and he discusses how this complements the existing GCA activities. Finally we have a conversation with Champika Wijayatunga from ICANN on the KINDNS project. This is a programme designed to bring MANRS-like industry best practice to the DNS community at large, including authoritative DNS delegates and the operators of intermediate resolvers and the client-supporting stub resolvers. Champika is interested in reaching into the community to get KINDNS more widely understood and to encourage its adoption, with over 2,000 entities having completed the assessment process already. Next time we'll hear from three more participants in the PIMF session: Doug Madory from Kentik, Beau Gieskens from APNIC Information Products, and Lia Hestina from the RIPE NCC.
In this episode of PING, APNIC's Chief Scientist, Geoff Huston, explores BGP "zombies": routes which should have been removed, but are still there. They're the living dead of routes. How does this happen? Back in the early 2000s Gert Döring, in the RIPE NCC region, was collating a 'state of BGP for IPv6' report, and knew each of the 300 or so IPv6 announcements directly. He understood what should be seen, and what was not being routed. He discovered, in this early stage of IPv6, that some routes he knew had been withdrawn in BGP still existed when he looked into the repositories of known routing state. This is some of the first evidence of a failure mode in BGP where withdrawal of information fails to propagate, and some number of BGP speakers do not learn that a route has been taken down. They hang on to it. BGP is a protocol which only sends differences to the current routing state as and when they emerge (if you start afresh you get a LOT of differences, because everything has to be sent from a ground state of nothing, but after that you're only told when new things arrive and old things go away), so it can go a long time without saying anything about a particular route: if it's stable and up, there's nothing to say, and if it was withdrawn, once you've passed that on you no longer hold it to tell anyone it's gone. So if, somewhere in the middle of this conversation, a BGP speaker misses the news that something is gone, then as long as it doesn't have to tell anyone the route exists, nobody is going to know it missed the news. In more recent times, there has been a concern this may be caused by a problem in how BGP sits inside TCP messages, and this has even led to an RFC in the IETF process to define a new way to close things out. Geoff isn't convinced this diagnosis is actually correct, or that the remediation proposed is the right one. Prompted by a recent NANOG presentation, Geoff has been thinking about the problem, and what to do. He has a simpler approach which may work better.
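To make the failure mode concrete, here's a toy sketch (not a real BGP implementation) of the incremental announce/withdraw exchange, in which one speaker never receives the withdrawal and so keeps the stale route, a zombie, indefinitely.

```python
# Toy model: each speaker's RIB is just a dict of prefix -> AS path.
class Speaker:
    def __init__(self, name):
        self.name = name
        self.rib = {}

    def receive(self, update):
        kind, prefix, path = update
        if kind == "announce":
            self.rib[prefix] = path
        elif kind == "withdraw":
            self.rib.pop(prefix, None)

healthy = Speaker("healthy-peer")
zombie_holder = Speaker("unlucky-peer")

announce = ("announce", "2001:db8::/32", [64500, 64501])
withdraw = ("withdraw", "2001:db8::/32", None)

for peer in (healthy, zombie_holder):
    peer.receive(announce)

# The withdrawal is a single incremental message; if it is lost,
# nothing later in the protocol re-states that the route is gone.
healthy.receive(withdraw)          # delivered
# zombie_holder.receive(withdraw)  # lost in transit, never re-sent

print(healthy.rib)        # {} -- route correctly removed
print(zombie_holder.rib)  # {'2001:db8::/32': [64500, 64501]} -- a zombie route
```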
In this episode, Job Snijders discusses rpkiviews, his long-term project to collect "views" of RPKI state every day and maintain an archive of BGP route validation states. The project is named to echo RouteViews, the long-standing archive of BGP state maintained by the University of Oregon, which has been discussed on PING. Job is based in the Netherlands, and has worked in BGP routing for large international ISPs and content distribution networks, as well as being a board member of the RIPE NCC. He is known for his work producing the open-source rpki-client RPKI validator, implemented in C and distributed widely through the OpenBSD project. RPKI is the Resource PKI, 'Resource' meaning the Internet number resources: the IPv4 and IPv6 addresses and Autonomous System (AS) numbers which are used to implement routing in the global Internet. The PKI provides cryptographic proofs of delegation of these resources and allows the delegates to sign statements of their intention to originate specific prefixes in BGP, and of the relationships between the ASes which speak BGP to each other. Why rpkiviews? Job explains that there's a necessary conversation between people involved in the operational deployment of secure BGP and the standards development and research community: How many of the world's BGP routes are being protected? How many places are producing Route Origin Authorisations (ROAs), the primary cryptographic object used to perform Route Origin Validation (ROV), and how many objects are made? What's the error rate in production, and the rate of growth? A myriad of introspective "meta" questions need to be asked in deploying this kind of system at scale, and one of the best tools to use is an archive of state, updated frequently and, as with RouteViews, collected from a diverse range of places worldwide, to understand the dynamics of the system. Job uses the archive to produce his annual "RPKI Year in Review" report, which was published this year on the APNIC Blog (it's normally posted to operations, research and standards development mailing lists and presented at conferences and meetings), and its products are being used by the BGPalerter service developed by Massimo Candela.
In his first episode of PING for 2025, APNIC's Chief Scientist, Geoff Huston, returns to the Domain Name System (DNS) and explores the many faces of the nameservers behind domains. Up at the root (the very top of the namespace, where all top-level domains like .gov or .au or .com are defined to exist) there is a well-established principle of 13 root nameservers. Does this mean only 13 hosts worldwide service this space? Nothing could be farther from the truth! Literally thousands of hosts act as one of those 13 root server labels, in a highly distributed worldwide mesh known as "anycast", which works through BGP routing. The thing is, exactly how the number of nameservers for any given domain is chosen, and how resolvers (the querying side of the DNS, the things which ask questions of authoritative nameservers) decide which one of those servers to use, isn't as well defined as you might think. The packet sizes, the order of data in the packet, and how it's encoded are all very well defined, but "which one should I use from now on, to answer this kind of question" is really not well defined at all. Geoff has been using the Labs measurement system to test behaviour here, and looking at basic numbers for the delegated domains at the root. The number of servers he sees, their diversity, and the nature of their deployment technology in routing are quite variable. But even more interestingly, the diversity of "which one gets used" on the resolver side suggests some very old, out-of-date and over-simplistic methods are still being used almost everywhere to decide what to do.
Welcome back to PING for the start of 2025. In this episode, Gautam Akiwate (now with Apple, but at the time of recording with Stanford University) talks about the 2021 Applied Networking Research Prize winning paper, co-authored with Stefan Savage, Geoffrey Voelker and Kimberly Claffy, which was titled "Risky BIZness: Risks Derived from Registrar Name Management". The paper explores a situation which emerged inside the supply chain behind DNS name delegation, in the use of an IETF protocol called the Extensible Provisioning Protocol, or EPP. EPP is an XML-based protocol, and is how registry-registrar communications take place on behalf of a given domain name holder (the delegate) to record which DNS nameservers have the authority to publish the delegated zone. The problem doesn't lie in the DNS itself, but in the operational practices which emerged in some registrars to remove dangling dependencies in their systems when domain names were de-registered. In effect they used an EPP feature to rename the dependency, so they could move on with selling the domain name to somebody else. The problem is that this feature created valid names, which could themselves then be purchased. For some number of DNS consumers, those new valid nameservers would then be permitted to serve the domain, enabling attacks on the integrity of the DNS and the web. Gautam and his co-authors explored a very interesting quirk of the back-end systems and in the process helped improve the security of the DNS and identified weaknesses in a long-standing "daily dump" process used to provide audit and historical data.
In the last episode of PING for 2024, APNIC's Chief Scientist Geoff Huston discusses the shift from existing public-private key cryptography using the RSA and ECC algorithms to the world of 'Post-Quantum Cryptography'. These new algorithms are designed to withstand potential attacks from large-scale quantum computers capable of implementing Shor's algorithm, a theoretical approach for using quantum computing to break the cryptographic keys of RSA and ECC. Standards agencies like NIST are pushing to develop algorithms that are both efficient on modern hardware and resistant to the potential threats posed by Shor's algorithm in future quantum computers. This urgency stems from the need to ensure 'perfect forward secrecy' for sensitive data, meaning that information encrypted today remains secure and undecipherable even decades into the future. To date, maintaining security has been achieved by increasing the recommended key length as computing power improved under Moore's Law, with faster processors and greater parallelism. However, quantum computing operates differently and will be capable of breaking the encryption of current public-private key methods, regardless of the key length. Public-private keys are not used to encrypt entire messages or datasets. Instead, they encrypt a temporary 'ephemeral' key, which is then used by a symmetric algorithm to secure the data. Symmetric key algorithms (where the same key is used for encryption and decryption) are not vulnerable to Shor's algorithm. However, if the symmetric key is exchanged using RSA or ECC (common in protocols like TLS and QUIC when parties lack a pre-established way to share keys), quantum computing could render the protection ineffective. A quantum computer could intercept and decrypt the symmetric key, compromising the entire communication. Geoff raises concerns that while post-quantum cryptography is essential for managing risks in many online activities, especially for protecting highly sensitive or secret data, it might be misapplied to DNSSEC. In DNSSEC, public-private keys are not used to protect secrets but to ensure the accuracy of DNS data in real time. If there's no need to worry about someone decoding these keys 20 years from now, why invest significant effort in adapting DNSSEC for a post-quantum world? Instead, he questions whether simply using longer RSA or ECC keys and rotating key pairs more frequently might be a more practical approach. PING will return in early 2025. This is the last episode of PING for 2024; we hope you've enjoyed listening. The first episode of our new series is expected in late January 2025. In the meantime, catch up on all past episodes.
This time on PING, Peter Thomassen from SSE and deSEC.io discusses his analysis of the failure modes of CDS and CDNSKEY records between parent and child in the DNS. These records are used to provide in-band signalling of the DS record, which is fundamental to maintaining a secure path from the trust anchor to the delegation through all the intermediate parent and grandparent domains. Many people use out-of-band methods to update this DS information, but the CDS and CDNSKEY records are designed to signal this critical information inside the DNS, avoiding many of the pitfalls of passing through a registry-registrar web service. The problem is, as Peter has discovered, the information across the various nameservers (denoted by the NS records in the DNS) of the child domain can get out of alignment, and the tests a parent zone needs to perform when checking CDS and CDNSKEY information aren't sufficiently specified to close off this risk. Peter performed a "meta-analysis" inside a far larger cohort of DNS data captured by Florian Steurer and Tobias Fiebig at the Max Planck Institute and discovered a low but persistent error rate, a drift in the critical keying information between a zone's NS set and the parent. Some of these related to transitional states in the DNS (such as when you move registry or DNS provider), but by no means all, and this has motivated Peter and his co-authors to look at improved recommendations for managing CDS/CDNSKEY data, to minimise the risk of inconsistency and the consequent loss of a secure entry path to a domain name.
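A rough sketch of the kind of cross-nameserver consistency check Peter describes might look like the following. It assumes the third-party dnspython library, leaves out error handling and DNSSEC validation, and uses a placeholder zone name; it is an illustration of the idea, not Peter's measurement code.

```python
import dns.message      # dnspython (pip install dnspython)
import dns.query
import dns.rdataclass
import dns.rdatatype
import dns.resolver

def cds_views(zone: str) -> dict:
    """Return the CDS RRset (as a set of record strings) seen at each NS of the zone."""
    views = {}
    for ns in dns.resolver.resolve(zone, "NS"):
        ns_name = str(ns.target)
        ns_addr = str(dns.resolver.resolve(ns_name, "A")[0])
        query = dns.message.make_query(zone, "CDS")
        response = dns.query.udp(query, ns_addr, timeout=3)
        rrset = response.get_rrset(response.answer, query.question[0].name,
                                   dns.rdataclass.IN, dns.rdatatype.CDS)
        views[ns_name] = set(str(r) for r in rrset) if rrset else set()
    return views

views = cds_views("example.org.")  # placeholder zone
if len(set(frozenset(v) for v in views.values())) > 1:
    print("CDS records are inconsistent across nameservers:", views)
```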
In his regular monthly spot on PING, APNIC's Chief Scientist Geoff Huston discusses the slowdown in worldwide IPv6 uptake. Within the Asia-Pacific footprint we have some truly remarkable national statistics, such as India, which is now over 80% IPv6-enabled by APNIC Labs measurements, and Vietnam, which is not far behind at 70%. The problem is that worldwide, adjusted for population and considering the levels of Internet penetration in the developed economies, the pace of uptake overall has not improved and has been essentially linear since 2016. In some economies, like the US, a natural peak of around 50% capability was reached in 2017 and since then uptake has been essentially flat: there is no sign of closure to a global deployment in the US, and in many other economies. Geoff takes a high-level view of the logistic supply curve, with its early adopters, early and late majority, and laggards, and sees no clear signal that there is a visible endpoint where a transition to IPv6 will be "done". Instead we're facing a continual dual-stack operation of both IPv4 (increasingly behind Carrier-Grade NATs (CGNs) deployed inside the ISP) and IPv6. There are success stories in mobile (such as seen in India) and in broadband with central management of the customer router. But it seems that with the shift in the criticality of routing and numbering to a more name-based steering mechanism, and the continued rise of content distribution networks, the pace of IPv6 uptake worldwide has not followed the pattern we had planned for.
In this episode of PING, Vanessa Fernandez and Kavya Bhat, two students from the National Institute of Technology Karnataka (NITK), discuss the student-led, multi-year project to deploy IPv6 at their campus. Kavya and Vanessa have just graduated, and are moving into their next stages of work and study in computer science and network engineering. Across 2023 and 2024 they were able to attend IETF 118 and IETF 119 and present their project and its experiences to the IPv6 working groups and off-Working Group meetings, in part funded by the APNIC ISIF Project and the APNIC Foundation. This multi-year project is supervised by the NITK Centre for Open-source Software and Hardware (COSH) and has outside review from Dhruv Dhody (ISOC) and Nalini Elkins (Inside Products Inc). Former students have also acted as alumni and remain involved in the project as it progresses. We often focus on IPv6 deployment at scale in the telco sector, or on experiences with small deployments in labs, but another side of the IPv6 experience is the large campus network: equivalent in scale to a significant factory or government department deployment, but in this case undertaken by volunteers with little or no prior experience of networking technology. Vanessa and Kavya talk about their time on the project, and what they got to present at IETF.
In his regular monthly spot on PING, APNIC's Chief Scientist, Geoff Huston, discusses a large pool of IPv4 addresses left in the IANA registry from the classful allocation days back in the mid 1980s. This block, from 240.0.0.0 to 255.255.255.255, encompasses 268 million hosts, which is a significant chunk of address space: it's equivalent to 16 class-A blocks, each of 16 million hosts. It seems a shame to waste it, so how about we get this back into use? Back in 2007 Geoff, Paul and myself submitted an IETF draft which would have removed these addresses from their "reserved" status in IANA and used them to supplement the RFC 1918 private-use blocks. We felt at the time this was the best use of these addresses because of their apparent un-routability in the global Internet. Almost all IP network stacks at that time shared a lineage with the BSD network code developed at the University of California and released in 1983 as BSD 4.2. Subsequent versions of this codebase included a two or three line rule inside the kernel which checked the top four bits of the 32-bit address field, and refused to forward packets which had these four bits set. This reflected the IANA status marking this range as reserved. The draft did not achieve consensus. A more recent proposal emerged from Seth Schoen, David Täht and John Gilmore in 2021, and continues to be worked on, but rather than assigning the block to RFC 1918-style internal non-routable use, it puts the addresses into global unicast use. The authors believe that the critical filter in devices has now been lifted, and no longer persists at large in the BSD- and Linux-derived codebases. This echoes use of the address space which has been noted inside the datacentre. Geoff has been measuring reachability at large to this address space, using the APNIC Labs measurement system and a prefix in 240.0.0.0/4 temporarily assigned and routed in BGP. The results were not encouraging, and Geoff thinks general routability of the range remains a very high bar.
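The arithmetic is easy to confirm with Python's standard ipaddress module:

```python
import ipaddress

former_class_e = ipaddress.ip_network("240.0.0.0/4")
class_a = ipaddress.ip_network("10.0.0.0/8")  # any /8, used here only for scale

print(former_class_e.num_addresses)                            # 268435456 addresses
print(former_class_e.num_addresses // class_a.num_addresses)   # 16 class-A sized (/8) blocks
print(list(former_class_e.subnets(new_prefix=8)))              # the sixteen /8s from 240/8 to 255/8
```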
In this episode of PING, Nowmay Opalinski from the French Institute of Geopolitics at Paris 8 University discusses his work on resilience, or rather the lack of it, confronting the Internet in Pakistan. As discussed in his blog post, Nowmay and his colleagues at the French Institute of Geopolitics (IFG), University Paris 8, and LUMS University Pakistan used a combination of technical measurement from sources such as RIPE Atlas, in a methodology devised by the GEODE project, combined with interviews in Pakistan, to explore the reasons behind Pakistan's comparative fragility in the face of seaborne fibre-optic cable connectivity. The approach deliberately combines technical and social-science approaches to exploring the problem space, with quantitative data and qualitative interviews. Located at the head of the Arabian Sea, but with only two points of connectivity into the global Internet, Pakistan has suffered over 22 'cuts' to its service in the last 20 years. However, as Nowmay explores in this episode, there actually are viable fibre connections to India close to Lahore, which are constrained by politics. Nowmay is completing a PhD at the institute, and is a member of the GEODE project. His paper on this study was presented at the 2024 AINTEC conference held in Sydney, as part of ACM SIGCOMM 2024.
In his regular monthly spot on PING, APNIC's Chief Scientist, Geoff Huston, discusses another use of DNS extensions: the EDNS0 Client Subnet option (RFC 7871). This feature, though flagged in its RFC as a security concern, can help route traffic based on the source of a DNS query. Without it, relying only on the IP address of the DNS resolver can lead to incorrect geolocation, especially when the resolver is outside your own ISP's network. The EDNS Client Subnet (ECS) signal can help by passing part of the client's address through the resolver, improving accuracy in traffic routing. However, this comes at the cost of privacy, which is the source of the significant security concerns. This creates tension between two conflicting goals: improving routing efficiency and protecting user privacy. Through the APNIC Labs measurement system, Geoff can monitor the prevalence of ECS usage in the wild. He also gains insights into how much end users rely on their ISP's DNS resolvers versus opting for the openly available public DNS resolver systems.
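For a feel of what actually travels in the query, here is a hedged sketch (standard library only) of the ECS option body as RFC 7871 lays it out: an address family, the source and scope prefix lengths, and only as many address bytes as the source prefix covers. The client prefix used is documentation address space, not real data.

```python
import ipaddress
import struct

def encode_ecs(client_prefix: str) -> bytes:
    """Build the OPTION-DATA of an EDNS Client Subnet option (RFC 7871, section 6)."""
    net = ipaddress.ip_network(client_prefix, strict=False)
    family = 1 if net.version == 4 else 2            # IANA address family numbers
    source_len = net.prefixlen
    scope_len = 0                                     # always 0 in queries
    addr_bytes = net.network_address.packed[: (source_len + 7) // 8]  # truncate to the prefix
    return struct.pack("!HBB", family, source_len, scope_len) + addr_bytes

# A resolver forwarding on behalf of a client in 192.0.2.0/24 (documentation prefix)
option_data = encode_ecs("192.0.2.53/24")
print(option_data.hex())  # 0001 18 00 c00002 -> family 1, /24 source, /0 scope, 3 address bytes
```

Truncating to a prefix rather than sending the full client address is the privacy compromise baked into ECS, and it is exactly the trade-off the episode discusses.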
In this episode of PING, Joao Damas from APNIC Labs explores the mechanics of the Labs measurement system. Commencing over a decade ago with an ActionScript (better known as Flash) mechanism, backed by a static ISC BIND DNS configuration cycling through a namespace, the Labs advertising measurement system now samples over 15 million end users per day, using JavaScript and a hand-crafted DNS system which can synthesise DNS names on the fly and lead users to varying underlying Internet Protocol transport choices, packet sizes, DNS and DNSSEC parameters in general, along with a range of Internet routing related experiments. Joao explains how the system works, and the mixture of technologies used to achieve its goals. There's almost no end to the variety of Internet behaviour which the system can measure, as long as it's capable of being teased out of the user in a JavaScript-enabled advert backed by the DNS!
In his regular monthly spot on PING, APNIC's Chief Scientist Geoff Huston revisits the question of DNS extensions, in particular the EDNS0 option signalling the maximum UDP packet size accepted, and its effect in the modern DNS. Through the APNIC Labs measurement system Geoff has visibility of the success rate for DNS events where EDNS0 signalling triggers DNS "truncation" and the consequent re-query over TCP, as well as the impact of UDP fragmentation even inside the agreed limit, and the ability of resolvers to handle the UDP packet sizes they proffer in their settings. Read more about EDNS0 and UDP on the APNIC Blog and at APNIC Labs:
Revisiting DNS and UDP truncation (Geoff Huston, APNIC Blog, July 2024)
DNS TCP Requery failure rate (APNIC Labs)
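The re-query behaviour being measured boils down to the client-side logic sketched below, assuming the third-party dnspython library: offer an EDNS0 UDP buffer size, and if the answer comes back with the TC (truncated) bit set, ask again over TCP. The query name and resolver address are placeholders.

```python
import dns.flags     # dnspython (pip install dnspython)
import dns.message
import dns.query

def resolve_with_fallback(qname: str, rdtype: str, server: str, bufsize: int = 1232):
    """Send a UDP query advertising an EDNS0 buffer size; fall back to TCP if truncated."""
    query = dns.message.make_query(qname, rdtype, use_edns=0, payload=bufsize)
    response = dns.query.udp(query, server, timeout=3)
    if response.flags & dns.flags.TC:
        # The answer didn't fit within the offered UDP size: re-query over TCP.
        response = dns.query.tcp(query, server, timeout=3)
    return response

# Placeholder example: a DNSSEC-heavy query against a public resolver
answer = resolve_with_fallback("example.org.", "DNSKEY", "8.8.8.8")
print(len(answer.answer), "answer RRsets, truncated flag now:",
      bool(answer.flags & dns.flags.TC))
```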
In this episode of PING, Casper Schutijser and Ralph Koning from SIDN Labs in the Netherlands discuss their post-quantum testbed project. As mentioned in the previous PING episode about Post-Quantum Cryptography (PQC) in DNSSEC, with Peter Thomassen from SSE and Jason Goertzen from Sandbox AQ, it's vital we understand how this technology shift will affect real-world DNS systems in deployment. The SIDN Labs system has been designed to be a "one-stop shop" for DNS operators to test DNSSEC configurations for their domain management systems, with a complete virtualised environment to run inside. It's fully scriptable, so it can be modified to suit a number of different situations, and can potentially include builds of your own critical software components alongside the system under test. Read more about the testbed and PQC on the APNIC Blog and at SIDN Labs.
In his regular monthly spot on PING, APNIC's Chief Scientist Geoff Huston continues his examination of DNSSEC. In the first part of this two-part story, Geoff explored the problem space, with a review of the comparative failure of DNSSEC to be deployed by zone holders, and the lack of validation by resolvers. This is visible to APNIC Labs from carefully crafted DNS zones with validly and invalidly signed DNSSEC states, which are included in the Labs advertising method of user measurement. This second episode offers some hope for the future. It reviews the changes which could be made to the DNS protocol, or the use of existing aspects of the DNS, to make DNSSEC safer to deploy. There is considerable benefit in having trust in names, especially as a "service" to Transport Layer Security (TLS), which is now ubiquitous worldwide in the web.
This time on PING, Peter Thomassen from deSEC and Jason Goertzen from Sandbox AQ discuss their research project on post-quantum cryptography in DNSSEC, funded by NLnet Labs. Post-quantum cryptography is a response to the risk that a future quantum computer will be able to implement Shor's Algorithm, a mechanism to uncover the private key in the RSA public-private key cryptographic mechanism, as well as in Diffie-Hellman and Elliptic Curve methods. This would render all existing public-private key based security useless, because once a third party knows the private key, the ability to sign uniquely over things is lost: DNSSEC doesn't depend on secrecy of messages, but it does depend on RSA and elliptic curve signatures. We'd lose trust in the DNSSEC protections the private key provides. Post-Quantum Cryptography (PQC) addresses this by implementing methods which are not exposed to the weakness that Shor's Algorithm can exploit. But the cost and complexity of these PQC methods rise. Peter and Jason have been exploring implementations of some of the NIST candidate post-quantum algorithms, deployed into BIND 9 and PowerDNS code. They've been able to use the Atlas system to test how reliably the signed contents can be seen in the DNS, and have confirmed that some aspects of packet size in the DNS, and the new algorithms, will be a problem in deployment as things stand. As they note, it's too soon to move this work into the IETF DNS standards process, but there is continuing interest in researching the space, with other activity underway from SIDN which we'll also feature on PING.
In his regular monthly spot on PING, APNIC's Chief Scientist Geoff Huston discusses DNSSEC and its apparent failure to deploy at scale in the market after 30 years. Both in the state of signed zone uptake (the supply side) and in the low levels of validation seen by DNS client users (the consumption side), there is a strong signal that DNSSEC isn't making headway, compared to the uptake of TLS, which is now ubiquitous in connecting to websites. Geoff can see this by measuring client DNSSEC use in the APNIC Labs measurement system, and from tests of the DNS behind the Tranco top website rankings. This is both a problem (the market failure of a trust model in the DNS is a pretty big deal!) and an opportunity (what can we do to make DNSSEC or some replacement viable?), which Geoff explores in the first of two parts. A classic "cliffhanger" conversation about the problem side of things will be followed in due course by a second episode which offers some hope for the future. In the meantime here's the first part, discussing the scale of the problem.
This time on PING, Philip Paeps from the FreeBSD Cluster Administrators and Security teams discusses their approach to systems monitoring and measurement. It's email. "Short podcast", you say, but no, there's a wealth of war stories and "why" to explore in this episode. We caught up at the APNIC57/APRICOT meeting held in Bangkok in February of 2024. Philip has a wealth of experience in systems management and security and a long history of participation in the free software movement, so his ongoing support of email as a fundamental measure of system health isn't a random decision; it's based on experience. Mail may not seem like the obvious go-to for a measurement podcast, but Philip makes a strong case that it's one of the best tools available for a high-trust measure of how systems are performing, and that its first and second order derivatives can indicate the velocity and rate of change of mail flows, indicative of the continuance or change in the underlying systems issues. Philip has good examples of how mail from the FreeBSD cluster systems indicates different aspects of systems health, such as network delays and disk issues. He's realistic that there are other tools in the armoury, especially the Nagios and Zabbix systems which are deployed in parallel. But from time to time, the first and best indication of trouble emerges from a review of the behaviour of email. A delightfully simple and robust approach to systems monitoring can emerge from use of the fundamental tools which are part of your core distribution.
In his regular monthly spot on PING, APNIC's Chief Scientist Geoff Huston discusses the question of subnet structure, looking into the APNIC Labs measurement data, which collects around 8 million discrete IPv6 addresses per day, worldwide. Subnets are a concept which "came along for the ride" at the birth of Internet Protocol, and were baked into the address distribution model as the class-A, class-B and class-C network models (there are also class-D and class-E addresses we don't talk about much). The idea of a subnet is distinct from a routing network: many pre-Internet models of networking had some kind of public-local split, but the idea of more than one level of structure in what is "local" had to emerge when more complex network designs and protocols came into being. Subnets are the idea of structure inside the addressing plan, and imply logical and often physical separation of hosts, and a structural dependency on routing. There can be subnets inside subnets; it's "turtles all the way down" in networks. IP had the ability out of the box to permit subnets to be defined, and when we moved beyond the classful model into Classless Inter-Domain Routing (CIDR), the idea of prefix/length models of networks came to life. But IPv6 is different, and the assumption that we are heading to a net-subnet-host model of networks may not be applicable in IPv6, or in the modern world of high-speed complex silicon for routing and switching. Geoff discusses an approach to modelling how network assignments are being used in deployment, which was raised by Nathan Ward at a recent NZNOG meeting. Geoff has been able to look into his huge collection of IPv6 addresses and see what's really going on.
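As a small illustration of the prefix/length way of thinking, the sketch below (standard library, documentation prefix only) carves an assumed /48 site assignment into /64 subnets, the conventional net-subnet-host split that may or may not match how IPv6 is really being used in the field.

```python
import ipaddress

site = ipaddress.ip_network("2001:db8:1234::/48")   # documentation prefix, assumed site assignment

# A /48 split on 16 bits of subnet ID yields 65,536 possible /64s
subnets = site.subnets(new_prefix=64)
print(site.num_addresses)            # 2**80 addresses in the site
print(2 ** (64 - site.prefixlen))    # 65536 possible /64 subnets

# Peek at the first few subnet prefixes
for _, subnet in zip(range(4), subnets):
    print(subnet)
```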
This time on PING, Doug Madory from Kentik discusses his recent measurements of the RPKI system worldwide, and its visible impact on the stability and security of BGP. Doug makes significant use of the Oregon RouteViews repository of BGP data, a collection maintained continuously at the University of Oregon for decades. It includes data back to 1997, originally collected by the NLANR/MOAT project, and has archives of BGP Routing Information Base (RIB) dumps taken every two hours from a variety of sources, made available in both human-readable and machine-readable binary formats. This collection has become the de facto standard for publicly available BGP state worldwide, along with the RIPE RIS collection. As Doug discusses, research papers which cite Oregon RouteViews data (over 1,000 are known of, but many more exist which have not registered their use of the data) invite serious appraisal because of the reproducibility of the research, and thus the testability of the conclusions drawn. It is a vehicle for higher quality science about the nature of the Internet through BGP. Doug presented on RPKI and BGP at the APOPS session held in February at APRICOT/APNIC57 in Bangkok, Thailand.
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses Starlink again, and the ability of modern TCP congestion control algorithms to cope with the highly variable loss and delay seen over this satellite network. Geoff has been doing more measurements using Starlink terminals in Australia and the USA, at different times of day, exploring the system's behaviour. Starlink has broken new ground in Low Earth Orbit Internet services. Unlike geosynchronous satellite services, which have a long delay but constant visibility of the satellite in stationary orbit above, Starlink requires the consumer terminal to continuously re-select a new satellite as they move overhead in orbit. In fact, a new satellite has to be picked every 15 seconds. This means there's a high degree of variability in the behaviour of the link, both in the signal quality to each satellite and in the brief interval of loss occurring at each satellite re-selection. It's a miracle TCP can survive, and in fact, in the case of the newer BBR protocol, thrive and achieve remarkably high throughput if the circumstances permit. This is because of the change from the slow-start, fast-backoff model used in CUBIC and Reno to a much more aggressive link bandwidth estimation model, which continuously probes to see if there is more room to play in.
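A highly simplified toy model (not Geoff's measurement code, and with purely illustrative numbers) shows why a regular 15-second re-selection event hurts a loss-based sender: an AIMD window that halves on every loss never gets long enough between handovers to fill a large bandwidth-delay product.

```python
# Toy AIMD (Reno/CUBIC-style) congestion window under periodic handover loss.
# All numbers are illustrative assumptions, not Starlink measurements.
RTT = 0.04                 # seconds
HANDOVER_INTERVAL = 15.0   # a loss event roughly every satellite re-selection
BDP_PACKETS = 1000         # assumed path capacity in packets per RTT

cwnd = 10.0
time_s = 0.0
delivered = 0.0
while time_s < 120.0:
    cwnd = min(cwnd + 1, BDP_PACKETS)      # additive increase: one packet per RTT
    delivered += min(cwnd, BDP_PACKETS)
    time_s += RTT
    if time_s % HANDOVER_INTERVAL < RTT:   # loss at each handover
        cwnd = max(cwnd / 2, 10.0)         # multiplicative decrease

utilisation = delivered / (BDP_PACKETS * (120.0 / RTT))
print(f"average link utilisation over 2 minutes: {utilisation:.0%}")
```

A bandwidth-estimating sender in the BBR style keeps probing the delivery rate rather than treating every handover loss as a signal to halve, which is one way to read why it fares so much better in Geoff's tests.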
This time on PING, Dr Mona Jaber from Queen Mary University of London (QMUL) discusses her work exploring IoT, Digital Twins and social-science-led research in the field of networking and telecommunications. Dr Jaber is a senior lecturer at QMUL and is the founder and director of the Digital Twins for Sustainable Development Goals (DT4SDG) at QMUL. She was one of the invited keynote speakers at the recent APRICOT/APNIC57 meeting held in Bangkok, and the podcast explores the three major themes of her keynote presentation:
The role of deployed fibre-optic communication systems in measurement for sustainable green goals
Digital Twin simulation platforms for exploring the problem space
Social-science-led research, an inter-disciplinary approach to formulating and exploring problems, which has been applied to Sustainable Development-related research through technical innovation in IoT, AI, and Digital Twins.
The fibre-optic measurement method is the Distributed Acoustic Sensor, or DAS: "DAS reuses underground fibre optic cables as distributed strain sensing where the strain is caused by moving objects above ground. DAS is not affected by weather or light and the fibre optic cables are often readily available, offering a continuous source for sensing along the length of the cable. Unlike video cameras, DAS systems also offer a GDPR-compliant source of data." (The DASMATE Project, at theengineer.co.uk) This episode of PING was recorded live in the venue and is a bit noisy compared to the usual recordings, but it's well worth putting up with the background chatter!
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the European Union's consideration of taking a role in the IETF, as itself. Network engineers, policy makers and scientists from all around the world have participated in the IETF, but this is the first time an entity like the EU has considered participation as itself in the process of standards development. What's led to this outcome? What is driving the concern that the EU, as a law-setting and treaty body, an inter-governmental trade bloc, needs to participate in the IETF process? Is this a misunderstanding of the nature of Internet standards development, or does it reflect a concern that standards are diverging from society's needs? Geoff wrote this up in a recent opinion piece on the APNIC Blog, and the podcast is a conversation around the topic.
This time on PING we have Phil Regnauld from the DNS Operations, Analysis, and Research Center (DNS-OARC) talking about the three distinct faces OARC presents to the community. Phil came to the OARC president's role replacing Keith Mitchell, who had been the founding president from 2008 through to this year. Phil has previously worked with the Network Startup Resource Center (NSRC) and with AFNOG, and the Francophone Internet community at large. DNS-OARC has at least three distinct faces. It is a community of DNS operators and researchers, who maintain an active ongoing dialogue face to face in workshops and online in the OARC Mattermost community hub. Secondly, it is a home, repository and ongoing development environment for DNS-related tools, such as DNSViz (written by Casey Deccio), hosting of the AS112 project, and development of the DSC system, amongst many other tools. Thirdly, it is the organiser and host of the Day In The Life (DITL) activity, the periodic collection of 48-72 hours of DNS traffic from the DNS root operators and other significant sources of DNS traffic. Stretching back over 10 years, DITL is a huge resource for DNS research, providing insights into the use of DNS and its behaviour on the wire.
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses a new proposed DNS resource record called DELEG. The record is being designed to aid in managing where a DNS zone is delegated. Delegation is the primary mechanism used in the DNS to separate responsibility between child and parent for a given domain name. The DELEG RR is designed to address several problems, including a goal of moving to new transports for the name resolution service the DNS provides to all other Internet protocols. Additionally, Geoff believes it can help with the cost and management issues inherent in out-of-band external domain name management through the registry/registrar process, bound up in the whois system and in a protocol called the Extensible Provisioning Protocol (EPP). There are big costs here, and they include some problems dealing with intermediaries who manage your DNS on your behalf. Unlike whois, EPP, and registrar functions, DELEG would be an in-band mechanism between the parent zone, any associated registry, and the delegated child zone. It's a classic disintermediation story about improved efficiency, and it enables the domain name holder to nominate intermediaries for their services via an aliasing mechanism that has until now eluded the DNS.
This time on PING we have Amreesh Phokeer from the Internet Society (ISOC) talking about a system they operate called Pulse, available at https://pulse.internetsociety.org/. Pulse's purpose is to assess the "resiliency" of the Internet in a given locality. Similar systems we have discussed before on PING include APNIC's DASH service, aimed at resource-holding APNIC Members, and the MANRS project. Both of these take underlying statistics, like resource distribution data or measurements of RPKI uptake and BGP behaviours, and present them to the community, and in the case of MANRS there's a formalised "score" which shows your ranking against current best practices. The Pulse system measures resilience across four pillars: Infrastructure, Quality, Security and Market Readiness. Some of these are "hard" measures analogous to MANRS and DASH, but in addition to these kinds of measurements Pulse includes "soft" indicators like the economic impacts of design decisions in an economy of interest, the extent of competition, and less formally defined attributes like the amount of resiliency behind BGP transit. This allows the ISOC Pulse system to consider governance-related aspects of the development of the Internet, and it has a simple scoring model which produces a single health metric, analogous to a physician's use of pulse and blood pressure to assess your condition, but this time applied to the Internet.
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the role of the DNS in directing where your applications connect to, and where content comes from. Although this is more "steering" traffic than "routing" it in the strict sense of IP packet forwarding (that's still the function of the Border Gateway Protocol, or BGP), it does in fact represent a kind of routing decision: selecting a content source or server logistically "best" or "closest" to you. So, in the spirit of "Orange is the New Black", DNS is the new BGP. As this change in the delivery of content has emerged, the effective control over this kind of routing decision has also become more concentrated, into the hands of the small number of at-scale Content Distribution Networks (CDNs) and associated DNS providers worldwide. This is far fewer than the 80,000 or so BGP speakers with their own AS, and represents another trend to be thought about. How we optimise content delivery isn't decided in common amongst us; it's managed by simpler contractual relationships between content owners and intermediaries. The upside, of course, remains the improvement in the efficiency of the fetch for each client, and the reduction in delay and loss. But the evolution of the Internet over time and the implications for governance of these "steering" decisions are going to be of increasing concern. Read more about Geoff's views on concentration in the Internet, governance, and economics on the APNIC Blog and at APNIC Labs.
In this episode of PING, Leslie Daigle from the Global Cyber Alliance (GCA) discusses their honeynet project, measuring bad traffic Internet-wide. This was originally focussed on IoT devices with the AIDE project, but is clearly more generally informative. Leslie also discusses the Quad9 DNS service, GCA's domain trust work, and the MANRS project. Launched in 2014 with support from ISOC, MANRS now has a continuing relationship with GCA and may represent a model for the routing community regarding the 'bad traffic' problem which the AIDE project explores. Leslie has a long history of work in the public interest, as Chief Internet Technology Officer of the Internet Society, and with the IETF. She is currently the chair of the MOPS working group, has co-authored 22 RFCs, and was chair of the IAB for five years.
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the change in IP packet fragmentation behaviour adopted by IPv6, and the implications of a change in IETF "normative language" regarding the use of IPv6 in the DNS. IPv4 arguably succeeds over so many variant underlying links and networks because it's highly adaptable to fragmentation in the path. IPv6 requires that only the end hosts fragment packets, which limits how intermediate systems can handle IPv6 data in flight. In the DNS, increasing complexity from things like DNSSEC means that DNS packet sizes are getting larger and larger, which risks invoking the IPv6 fragmentation behaviour in UDP. This has consequences for the reliability and timeliness of the DNS service. For this reason, a revision of the IETF normative language (the use of capitalised MUST, MAY, SHOULD and MUST NOT) directing how IPv6 integrates into the DNS service in deployment has risks. Geoff argues for a "first, do no harm" approach to this kind of IETF document. Read more about IPv6, fragmentation, the DNS and Geoff's measurements on the APNIC Blog and APNIC Labs.
In this episode of PING, Sara Dickinson from Sinodun Internet Technologies and Terry Manderson, VP of Information Security and Network Engineering at ICANN, discuss the ICANN DNS stats collector system, which ICANN commissioned and Sinodun wrote for them. The system consists of two parts: the DNS stats compactor framework, which captures data in the C-DNS format (a specified set of data in CBOR format), and the DNS stats visualiser, which uses Grafana. The C-DNS format is not a complete packet capture, but it allows the recreation of all the DNS context of the query and response. It was standardised in 2019, in an RFC authored by Sara, her partner John, Jim Hague, John Bond and Terry. Unlike DSC, which is a five-minute sample aggregation system, this system is able to preserve a significantly larger amount of the observed DNS query information and can even be used to re-create an on-the-wire view of the DNS (albeit not one-to-one identical to the original IP packet flows).
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the rise of Low Earth Orbit (LEO) satellite-based Internet, and the consequences for end-to-end congestion control in TCP and related protocols. Modern TCP has mostly been tuned for constant-delay, low-loss paths and performs very well at balancing bandwidth amongst the cooperating users of such a link, achieving maximum use of the resource. But a consequence of the new LEO Internet is a high degree of variability in delay and loss, and consequently an unstable bandwidth, which means TCP congestion control methods aren't working quite as well in this kind of Internet. A problem is that with the emergence of TCP bandwidth estimation models such as BBR, and the rise of new transports like QUIC (which continue to use the classic TCP model for congestion control), we have a fundamental mismatch in how competing flows try to share the link. Geoff has been exploring this space with some tests from Starlink home routers, and models of satellite visibility. His Labs Starlink page shows a visualisation of the behaviour of the Starlink system, and a movie of views of the satellites in orbit. Read more about TCP, QUIC, LEO and Geoff's measurements on the APNIC Blog and APNIC Labs.
In this episode of PING, Verisign fellow Duane Wessels discusses a late-stage (version 08) Internet draft he's working on with two colleagues from Verisign. The draft is on Negative Caching of DNS Resolution Failures and is co-authored by Duane, William Carroll, and Matt Thomas. This episode discusses the behaviour of the DNS system overall in the face of failures to answer. There are already mechanisms to deny the existence of a queried name or a specific resource type. There are also mechanisms to define how long this negative answer should be cached, just as there are cache lifetimes defined for how long to hold valid answers, things that do exist and have been supplied. This time, it's a cache of not being able to answer. The thing asked about? It might exist, or it might not. This cached data isn't saying whether it exists or not; it's caching a failure to be able to answer. As the draft states: "… a non-response due to a resolution failure in which the resolver does not receive any useful information regarding the data's existence." Prior DNS specifications did provide guidance on caching in the context of positive responses and negative responses, but the only guidance relating to failing to answer was to avoid aggressive re-querying of the nameservers that should be able to answer.
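Conceptually, the cache the draft describes is a third kind of entry alongside positive and negative answers. A minimal sketch of such a resolver-side structure might look like this; the five-minute failure TTL is an illustrative value, not a number taken from the draft.

```python
import time

FAILURE_TTL = 300  # illustrative: how long to remember "we could not resolve this"

class FailureCache:
    """Remembers (qname, qtype) lookups that failed outright, to suppress aggressive re-querying."""
    def __init__(self):
        self._entries = {}

    def record_failure(self, qname: str, qtype: str):
        self._entries[(qname.lower(), qtype)] = time.monotonic() + FAILURE_TTL

    def should_suppress(self, qname: str, qtype: str) -> bool:
        expiry = self._entries.get((qname.lower(), qtype))
        if expiry is None:
            return False
        if time.monotonic() >= expiry:
            del self._entries[(qname.lower(), qtype)]   # entry has aged out
            return False
        return True   # still within the failure TTL: don't hammer the nameservers again

cache = FailureCache()
cache.record_failure("broken.example.", "AAAA")
print(cache.should_suppress("broken.example.", "AAAA"))  # True: recent failure, hold off
print(cache.should_suppress("works.example.", "AAAA"))   # False: nothing known, go ahead
```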
In this episode of PING, instead of a conversation with APNIC's Chief Scientist Geoff Huston, we've got a panel session from APNIC56 that he facilitated, where Geoff and six guests discussed the 30-year history of APNIC. With Geoff on the panel were:
Professor Jun Murai, known as the 'father of the Internet' in Japan. In 1984, he developed the Japan University UNIX Network (JUNET), the first-ever inter-university network in that nation. In 1988, he founded the Widely Integrated Distributed Environment (WIDE) Project, a Japanese Internet research consortium, for which he continues to serve as a board member. Along with Geoff, Jun was one of the main progenitors of what became APNIC.
Elise Gerich, a 31-year veteran of Internet networking, recognised globally for her significant contributions to the Internet. Before retiring, Elise was President of PTI and, prior to that, Vice President of IANA at ICANN. Elise served as the Associate Director for National Networking at Merit Network in Michigan. While at Merit she was also a Principal Investigator for NSFNET's T3 Backbone Project and the Routing Arbiter Project, and was responsible for much of the early address management impetus which led to the creation of the RIR system.
David Conrad, previously the Chief Technology Officer of ICANN, who was involved in the creation of APNIC as its first full-time employee and founding Director-General.
Akinori Maemura, the JPNIC Chief Policy Officer, and a member of the APNIC EC for 16 years, 13 of which he was Chair of the EC.
Gaurab Raj Upadhaya, Head of WWW Video Delivery Strategy, Prime Video at Amazon. Gaurab has been active in the Internet community for more than a decade and, like Akinori, served on the APNIC EC for 12 years, 7 of these as Chair of the EC.
Paul Wilson, who has more than thirty years' involvement with the Internet, including 25 years' experience as the Director General of APNIC.
The panel discussed the early years of the Internet and the processes which led to the creation of APNIC, along with some significant moments in the life of the registry.
In this episode of PING, Stephen Song discusses his work mapping the Internet. This is a long-term project, carried out alongside and with the support of the Mozilla Corporation and the Association for Progressive Communications (APC). Stephen has long championed the case for Open Data in telecommunications decision-making and maintains a list of resources for capacity building and development of the Internet, with a particular focus on Africa. The combination of opaque business practices and the shift from direct end delivery to mediated proxies under the content distribution network (CDN) model raises questions about where the things users engage with and depend on actually are, which matters if network infrastructure is to be planned efficiently and openly. This episode of PING explores the issues inherent in understanding ‘where things are' in the modern Internet.
25 million end-user measurements per day, worldwide, from Google advertising.
In June of this year, the Dashboard for AS Health (DASH), a service operated by APNIC, saw a leak of approximately 260,000 BGP routes from a vantage point in Singapore, and sent alerts to around 90 subscribers to our routing misalignment notification service, which is part of DASH. BGP is the state of announcements made and heard worldwide, calculated by every BGP speaker for themselves, and although it is globally connected and represents “the same” network, not everyone sees all things, as a result of filtering and configuration differences around the globe. BGP should also align with two external information systems: the older Internet Routing Registry (IRR) system, which uses a notation called RPSL to represent routing policy data, including the “route” object, and the Resource Public Key Infrastructure (RPKI), which represents the origin AS (in BGP, who originates a given prefix) in a cryptographically signed object called a ROA. The BGP prefix and origin (the route) should align with what's in an IRR route object and an RPKI ROA, but sometimes these disagree. That's what DASH is designed to do: tell you when these three information sources fall out of alignment. I discussed this incident, and the APNIC Information Product family (DASH, a collaboration with RIPE NCC called NetOX, and the delegation statistics portal called REX), with Rafael Cintra, the product manager of these systems, and with Dave Phelan, who works in the APNIC Academy and has a background in network routing operations. You can find the APNIC Information Products here (note that the DASH service needs a MyAPNIC login to be used): https://dash.apnic.net the DASH portal login page (MyAPNIC resource login needed) https://netox.apnic.net NetOX, the Network Observatory web service https://rex.apnic.net Resource Explorer: delegation statistics for the world
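For a sense of what ‘alignment' means in practice, here is a hedged sketch of the comparison, not DASH's implementation: the record layouts and the example values are invented, and the RPKI check is simplified to exact prefix, origin AS, and max-length.

```python
# Illustrative-only alignment check between a BGP-observed (prefix, origin AS),
# IRR route objects, and RPKI ROAs. Data structures are invented for the example.
import ipaddress

def irr_aligned(prefix, origin_as, irr_route_objects):
    """True if some IRR route object matches this exact prefix and origin AS."""
    return any(obj["route"] == prefix and obj["origin"] == origin_as
               for obj in irr_route_objects)

def rpki_state(prefix, origin_as, roas):
    """Simplified ROA check returning 'valid', 'invalid', or 'not-found'."""
    net = ipaddress.ip_network(prefix)
    covering = [r for r in roas if net.subnet_of(ipaddress.ip_network(r["prefix"]))]
    if not covering:
        return "not-found"
    for roa in covering:
        max_len = roa.get("max_length", ipaddress.ip_network(roa["prefix"]).prefixlen)
        if roa["asn"] == origin_as and net.prefixlen <= max_len:
            return "valid"
    return "invalid"

# Example: one observed announcement checked against toy IRR and RPKI data.
announcement = {"prefix": "192.0.2.0/24", "origin": 64500}
irr = [{"route": "192.0.2.0/24", "origin": 64500}]
roas = [{"prefix": "192.0.2.0/24", "asn": 64500, "max_length": 24}]
print(irr_aligned(announcement["prefix"], announcement["origin"], irr),
      rpki_state(announcement["prefix"], announcement["origin"], roas))
```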
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the future of VLSI as Moore's Law comes to an end. This was motivated by a key presentation made at the most recent ANRW session at IETF 117 in San Francisco. For over five decades we have been able to rely on Moore's Law: an annual, latterly two-yearly, doubling of the density (and halving of the feature size) of the technology inside a microchip, Very Large Scale Integration (VLSI), whose basic building block, and that of the modern age, is the transistor. From its beginnings off the back of the diode, replacing valves but still as discrete components, to the modern reality of trillions of logic “gates” on a single chip, everything we have built in recent times which includes a computer has been built under the model of “it can only get cheaper next time round”. But for various reasons explored in this episode, that isn't true any more, and won't be true into the future. We're going to have to get used to the idea that it isn't always faster, smaller, cheaper, and this will have an impact on how we design networks, including details inside the protocol stack which bear on the processing complexity of forwarding packets along the path. A few times, both Geoff and myself get our prefixes mixed up and may say millimetres for nanometres, or even worse, on air. We also confused the order of letters in the company acronym TSMC, the Taiwan Semiconductor Manufacturing Company. Read more about the end of Moore's Law on the APNIC Blog and at the IETF: Chipping Away at Moore's Law (August 2023, Geoff Huston) It's the End of DRAM As We Know It (July 2023, Philip Levis, IETF117 ANRW session)
In this episode of PING, Jaap Akkerhuis (NLnet Labs), Ulrich Speidel (University of Auckland), and Russ White (Juniper) discuss the issues behind sunspots, ionisation in the atmosphere, and their effects on satellite communications and on terrestrial infrastructure based on wires in the air: power grids and data services. In two blog posts, Good day sunshine and Solar Storms and the Internet, we've highlighted the potential risks from increases in solar activity, such as solar flares and the associated Coronal Mass Ejections (CMEs). Spectacular as the effects on Earth's atmosphere can be, the risk from these events is quite high if things line up badly for us: there can be compounding effects on satellites' orbits, their electrical components, and their lifetime in orbit (because repositioning to cope with an event burns fuel), as well as effects on land, where the suspended wires of power grids and data links act as antennas and deliver voltage “spikes” to attached equipment at the ends of the path, as well as along it. However, as explored in this episode of PING, the situation is often overblown by the news cycle; it's more a story about being prepared, building resilience into systems exposed to risk, and understanding those risks.
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the eternal tension between content and carriage. At the RIPE 86 meeting held in Rotterdam in May of this year, Rudolf van der Berg presented a talk titled ‘The EU Gigabit Connectivity Package and How It Will Hurt the Internet' (video, slides). Geoff has previously written about the tensions between content and carriage, transit and Content Distribution Networks (CDNs), and the economics of networks but this episode of PING discusses a new twist: Vodafone's underlying cost and price issues seem at odds with the European operator community seeking to regulate the ‘cost' side of carrying domestic content. Read more about the economics of the Internet on the APNIC Blog: RIPE 86 bites — Gigabits for EU (June 2023, Geoff Huston on this RIPE 86 presentation) On centrality and fragmentation (July 2023, Geoff Huston) The Internet as a public utility (May 2023, Geoff Huston) An economic perspective on Internet centrality (March 2023, Geoff Huston) Sender pays (September 2022, Geoff Huston) Content vs carriage — who pays? (June 2022, Geoff Huston)
In this episode of PING, Verisign Fellow Duane Wessels presents the ZONEMD resource record, defined in RFC 8976. The “MD” in ZONEMD stands for “message digest”, and this resource record (RR) is a checksum over the state of a zone: all of its records, including the “start of authority” (SOA) record that carries the zone's serial number. This means that when you fetch an entire zone, whether in the DNS or “out of band” from an FTP or web server or however you receive it, a ZONEMD record gives you a way to check that what you have in hand is exactly the zone as it should be for that serial. ZONEMD gives people who copy zones in order to serve them (locally, or more widely) a basis to trust the state of the zone before publishing it. Duane talks about the long lifetime of this idea, with roots back in the 1990s, and the road to RFC 8976 taken by the co-authors. A ZONEMD record with an initially un-testable digest will be placed in the root zone of the DNS in September of this year, and will become testable in December, to allow time for the community to understand its behaviour. This podcast is accompanied by a repost of a Verisign blog Duane wrote recently, which has just been republished on the APNIC Blog: Adding ZONEMD protections to the root zone Read more about DNS, ZONEMD, and other blogs and podcasts by Duane on the APNIC Blog and elsewhere online: The Root of the DNS revisited (2023, Geoff Huston) Notes from DNS OARC 38 (2022 APNIC Blog post by Geoff Huston) Notes from DNS OARC 35 (2021 APNIC Blog post by Geoff Huston) RFC8976 (2021 RFC D. Wessels, P. Barber – Verisign; M. Weinberg – Amazon; W. Kumari – Google; & W. Hardaker – USC/ISI) [Podcast] A look back at notable root zone changes (Duane Wessels on PING discusses 3 significant root zone changes over the last decade)
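To make the verify-before-publish idea concrete, here is a deliberately simplified sketch. RFC 8976 computes the digest over the canonical wire format of the zone's records (with the ZONEMD digest field zeroed); this toy version hashes a textual form instead, so it only illustrates the workflow, not the actual RFC 8976 digest.

```python
# Conceptual sketch only: hashes a simplified textual form of the zone to show
# the "recompute and compare before publishing" workflow that ZONEMD enables.
import hashlib

def toy_zone_digest(records):
    """records: iterable of (owner, ttl, rrclass, rrtype, rdata) tuples; ZONEMD RRs excluded."""
    canonical = sorted(
        f"{owner.lower()} {ttl} {rrclass} {rrtype} {rdata}"
        for owner, ttl, rrclass, rrtype, rdata in records
        if rrtype != "ZONEMD"
    )
    h = hashlib.sha384()          # SHA-384 is one of the digest types defined in RFC 8976
    for line in canonical:
        h.update(line.encode("ascii"))
    return h.hexdigest()

def verify_zone(records, published_digest):
    # A secondary or mirror recomputes the digest after transfer and refuses
    # to publish the zone if it does not match the published value.
    return toy_zone_digest(records) == published_digest

# Usage with an invented two-record zone: a zone always matches its own digest.
zone = [("example.", 3600, "IN", "SOA", "ns1.example. admin.example. 2025010101 7200 3600 1209600 3600"),
        ("example.", 3600, "IN", "NS", "ns1.example.")]
print(verify_zone(zone, toy_zone_digest(zone)))   # True
```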
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses how Sweden built a national time distribution system, and the nature of time in the modern Internet. At the RIPE 86 meeting held in Rotterdam in May of this year, Karin Ahl, the CEO of Netnod, presented a talk titled “How Sweden Built a World-Leading Time Network”. Two central problems in time distribution on the Internet are, firstly, the lack of security inside the Network Time Protocol (NTP) and, secondly, the sources and reliability of the time information itself. The first problem is addressed by the newer Network Time Security (NTS) protocol, which adds TLS, and the second by investment in reliable and strategically placed time distribution servers, which is the basis of the Swedish national time initiative. Geoff attended the Netnod presentation and reflects on the complex and murky history of time, and the emergence of worldwide communities that coordinate both civil time (what the time of day is, in the world) and the nature of how time is measured (how a ‘second' is defined, for example). Geoff discusses historic and current attempts to standardise time measurements (such as UT1 and UTC), with their inherent compromises, against the Earth's rotation and its revolution around the Sun. Decisions made in the 1950s and 1970s to normalise the difference between UT1 and the civil time system we know as UTC continue to trouble the IT world, as civil time occasionally drifts by a second against UT1, and with the spread of satellite systems that provide time, such as GPS, BeiDou, Galileo, and GLONASS, the need to settle the relative status of each time model grows. These measurements have become increasingly critical to modern technology. Read more about NTP, NTS, and the time problem at the APNIC Blog and elsewhere online: Watch Karin Ahl's presentation at RIPE86 Rotterdam RIPE 86 bites — what's the time? (2023 Geoff Huston's APNIC Blog write-up on the issues) Network Time Security: new NTP authentication mechanism (2021 APNIC Blog by Martin Langer) How do you know what time it is? (2020 APNIC Blog by Patrik Fälström) Putting a stop to Internet Time Shifters (2019 APNIC Blog by Neta Rosen Schiff) Is the Internet Running Late? (2018 APNIC Blog by Geoff Huston) Steve Allan blogs on time (background reading) Tony Finch blogs on time (background reading) The views expressed by the featured speakers are their own and do not necessarily reflect the views of APNIC.
In this episode of PING, Christian Huitema discusses how looking into the IETF Datatracker allowed him to assess "how well we are doing" at document production. As the IETF has grown, and as the process of developing standards has become more complex, it's understandable that it takes longer to produce a viable RFC, but questions have been raised about exactly where in the process the delays come from. Are we really doing better or worse than we used to, and why might that be? Christian took an interesting approach to the problem, initially using a random sample of 20 documents from 2018 and a manual method of collating the issues, and then applying the same methodology back to 2008 and 1998. His approach to measurement was rigorous and careful, separating his own opinions from the underlying data to aid reproducibility. Christian has a long history of network development and research, with experience in industry and at INRIA, the French national computing research institute, before joining Bell Communications Research and then Microsoft. He worked on OSI systems, X.500 directories, satellite communications, and latterly on the IPv6 stack, including the Teredo transition technology, the H/D ratio used in determining IPv6 allocations and assignments in the RIR model, and the QUIC transport layer protocol. The views expressed by the featured speakers are their own and do not necessarily reflect the views of APNIC.
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the major themes from his recent blog post on “Failed Expectations”. In a trip down memory lane, the podcast ranges over the 40-plus-year history of how we came to have the Internet as we know it, and some of the “road not taken” alternatives which were under consideration at the time. In this context, “failed” doesn't have to mean “failed to work”; it can mean the technology simply wasn't chosen, or it can be the “failure” to turn off something which was believed to be, at best, temporary! The story of IPv6 deployment is part of this mismatch of expectations and reality, because nobody sought the outcome we're now living through: a 20-plus-year transition from 32-bit addresses to a world of 128-bit addressing. IPv6 was designed with an eye to the needs of addressing at scale, but the emergence of an address transfer model and the continued improvement of NAT (including the deployment of Carrier-Grade NAT, or CGN) at scale worldwide have perpetuated a 32-bit address and routing world. The IPv4 Internet is the “little network that could”, and it refuses to go away quietly. The views expressed by the featured speakers are their own and do not necessarily reflect the views of APNIC.
In this episode of PING, Verisign Fellow Duane Wessels discusses notable changes in the DNS root zone over the last 13 years. Duane joined Verisign in the early stages of DNSSEC deployment and has conducted measurements of the DNS for many years, in his days at The Measurement Factory, in DNS OARC, and inside Verisign. The significant changes to the DNS root zone, and their implications for the root zone operators, are discussed: deploying DNSSEC, the first DNSSEC KSK key changes, the increase in packet sizes with RSA key-length changes, and the future KSK and ZSK algorithm changes. Read more about DNS and DNSSEC on the APNIC Blog. The views expressed by the featured speakers are their own and do not necessarily reflect the views of APNIC.
In this episode of PING, APNIC's Chief Scientist Geoff Huston discusses the question of buffers, flow control, and 'efficient' use of a network link. How do we maximise the use of a given network path without knowing everything about its size along the way? It turns out the story isn't as simple as "more is better", because sometimes adding more memory to the system adds delay. Modern TCP's congestion control algorithms are being modified to react to delay as well as loss, and to become more efficient at occupying the available capacity. At the same time, bit markings inside the IP header (Explicit Congestion Notification, ECN) are changing how end hosts can react to signals of congestion along the path. Are these two mechanisms in conflict? How do they stack up, and how do they achieve critical mass in deployment? Read more about TCP and flow control on the APNIC Blog. Here are some articles from the blog which discuss the issues: Comparing TCP and QUIC (Geoff Huston) Does TCP keep pace with QUIC? (Konrad Wolsing) Striking a balance between bufferbloat and TCP queue oscillation (Ulrich Speidel) TCP initial window configurations in the wild (Jan Rüth) Underload: The future of congestion control (Safiqul Islam) Beyond bufferbloat: End-to-end congestion control cannot avoid latency spikes (Bjørn Teigen) Congestion Control at IETF 110 (Geoff Huston) The views expressed by the featured speakers are their own and do not necessarily reflect the views of APNIC.
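As a sketch of the signals being juggled, not of any specific standardized algorithm, here is one way a sender's congestion-window logic might weigh loss, explicit congestion marks, and rising delay; the thresholds and reduction factors are invented for illustration.

```python
# Illustrative congestion-window update reacting to three signals discussed in
# the episode: packet loss, ECN marks, and rising queueing delay. The constants
# are invented and do not correspond to any standardized algorithm.
def update_cwnd(cwnd, loss, ecn_marked, rtt_sample, min_rtt,
                delay_threshold=1.25, beta=0.5):
    if loss:
        return max(1.0, cwnd * beta)      # classic loss response: multiplicative decrease
    if ecn_marked:
        return max(1.0, cwnd * 0.8)       # gentler response to an explicit congestion mark
    if rtt_sample > min_rtt * delay_threshold:
        return max(1.0, cwnd - 1.0)       # delay-based: the queue is building, so ease off
    return cwnd + 1.0 / cwnd              # otherwise, congestion-avoidance style growth

# Example: an ECN mark with no loss trims the window rather than halving it.
print(update_cwnd(cwnd=10.0, loss=False, ecn_marked=True, rtt_sample=30.0, min_rtt=25.0))  # 8.0
```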