Podcasts about Internet Engineering Task Force

Open Internet standards organization

  • 22 podcasts
  • 25 episodes
  • 33m average duration
  • Infrequent episodes
  • Latest episode: May 29, 2024

POPULARITY

(Popularity chart spanning 2017-2024, not reproduced)


Best podcasts about Internet Engineering Task Force

Latest podcast episodes about Internet Engineering Task Force

The Business of Politics Show
Using Email Authentication To Boost Deliverability – Seth Blank (Valimail)

The Business of Politics Show

Play Episode Listen Later May 29, 2024 22:42


Seth Blank is the Chief Technology Officer at Valimail, a leading provider of email authentication and anti-impersonation solutions. Seth is also co-chair of the DMARC working group at the Internet Engineering Task Force. We'll also talk about Valimail's outreach to political campaigns, what Seth thinks campaigns can do to send better emails, and the new requirements for email delivery at Gmail.
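As a rough illustration of the mechanics behind this episode: email authentication of this kind is published as DNS TXT records. A minimal, hypothetical setup for an example domain might look like the following (the domain and reporting address are placeholders, not anything from the episode):

  example.com.        IN TXT "v=spf1 include:_spf.example.com ~all"
  _dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"

Receiving servers such as Gmail check SPF and DKIM, test whether the passing domain aligns with the visible From: address, and then apply the published DMARC policy (none, quarantine, or reject) to mail that fails.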

History in Slow German
#79 The Establishment of the Internet Engineering Task Force (IETF)

History in Slow German

Play Episode Listen Later Feb 20, 2024 4:33


CDT Tech Talks
Talking Tech with Mallory Knodel and Niels ten Oever On Inclusive Language in Internet Standards

CDT Tech Talks

Play Episode Listen Later Dec 5, 2023 36:59


Back in 2018, CDT's own Mallory Knodel teamed up with Niels ten Oever from the critical infrastructure lab at the University of Amsterdam to present a draft document at the Internet Engineering Task Force, or IETF, the internet standards governing body. The draft urged the community to officially reject the use of discriminatory and exclusionary language in Internet-Drafts and RFCs. As we continue to uncover and confront systemic racial inequality across society, it becomes equally vital to ensure that the fundamental design of one of our most critical and democratic technologies, the internet, is free of historically racist or prejudiced terms.

Sustain
Episode 191: FOSSY 2023 with Sam Whited

Sustain

Play Episode Listen Later Jul 21, 2023 15:25


Guest: Sam Whited

Panelist: Richard Littauer

Show Notes: Hello and welcome to Sustain! Richard is in Portland, OR at FOSSY, the Free and Open Source Software Yearly conference held by the Software Freedom Conservancy. Today, our guest is Sam Whited, a bicycle mechanic with a deep involvement in open source software development. His contributions include work with the XMPP Standards Foundation, the Internet Engineering Task Force, and the creation of Mellium, an XMPP library in Go. The conversation delves into the sustainability challenges faced by Mellium and similar projects, with Sam advocating for support from larger companies and well-funded open source initiatives. Sam, a strong supporter of open source co-op consultancies, also shares his personal journey from tech to bicycle mechanic, underscoring the struggle of maintaining open source projects while managing living expenses. Go ahead and download this episode now to hear more!

[00:00:38] Sam tells us about himself, working as a bicycle mechanic while contributing to open source software in his free time. He's worked with the XMPP Standards Foundation, the Internet Engineering Task Force, and maintains an XMPP library called Mellium.
[00:01:45] He explains that XMPP stands for Extensible Messaging and Presence Protocol and is an open standard communication protocol. He believes in it because of its recognized standards body, resilience, and the continuing work to keep it open, free, and sustainable.
[00:02:38] XMPP sits at several levels in the communication stack. It's used in various applications like Snikket, Cisco's mobile video conferencing, Grindr, Zoom, and Jitsi.
[00:04:11] Mellium is explained as an implementation of XMPP in Go.
[00:05:13] Richard asks about the sustainability of Mellium. Sam acknowledges the challenges of attracting maintainers and funding for the project, and he explains his goal is to operate Mellium as a cooperative.
[00:08:00] The conversation turns to funding for protocol implementation, and Sam suggests that companies and well-funded open source projects should give back to the smaller projects they utilize. He mentions that Mellium sets aside a portion of its donations for upstream projects that have helped him.
[00:10:38] Sam explains "The Seven Cooperative Principles" from the International Cooperative Alliance.
[00:11:30] Sam explains why he decided to work as a bike mechanic instead of pursuing work related to his expertise in Golang.
[00:13:43] Find out where you can find Sam on the internet.
Links:
SustainOSS (https://sustainoss.org/)
SustainOSS Twitter (https://twitter.com/SustainOSS?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor)
SustainOSS Discourse (https://discourse.sustainoss.org/)
podcast@sustainoss.org (mailto:podcast@sustainoss.org)
SustainOSS mastodon (https://openoss.sourceforge.net/)
Richard Littauer Twitter (https://twitter.com/richlitt?lang=en)
Software Freedom Conservancy (https://sfconservancy.org/)
Open OSS (https://openoss.sourceforge.net/)
Sam Whited - social.coop (https://social.coop/@sam)
Sam Whited Blog (https://blog.samwhited.com/)
Mellium - Go XMPP library (https://xmpp.org/software/mellium/)
XMPP Standards Foundation (XSF) (https://xmpp.org/about/xmpp-standards-foundation/)
Go (https://go.dev/)
Snikket (https://snikket.org/)
Jitsi (https://jitsi.org/)
Grindr (https://www.grindr.com/)
The Seven Cooperative Principles (International Cooperative Alliance) (https://www.ica.coop/en/cooperatives/cooperative-identity)

Credits:
Produced by Richard Littauer (https://www.burntfen.com/)
Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/)
Show notes by DeAnn Bahr, Peachtree Sound (https://www.peachtreesound.com/)
Special Guest: Sam Whited.
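For readers unfamiliar with the protocol Sam works on, an XMPP chat message is just a small XML stanza routed between servers; a minimal sketch, with placeholder addresses, looks something like:

  <message from="romeo@example.net/orchard" to="juliet@example.com" type="chat">
    <body>Hello from XMPP</body>
  </message>

Libraries like Mellium handle the surrounding stream negotiation, authentication, and routing, so applications mostly deal with stanzas like this one.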

FLF, LLC
Daily News Brief for Thursday, April 7th, 2022 [Daily News Brief]

FLF, LLC

Play Episode Listen Later Apr 7, 2022 13:02


Hi, this is Garrison Hardie with your CrossPolitic Daily News Brief for Thursday, April 7th, 2022. Today, we'll be talking about a potential food crisis in Germany… is this a sign of things to come? The always trustworthy CDC says that there is a steep decline in teens' mental health… I wonder why that is… Border chief Alejandro Mayorkas is directing that economic migrants get every opportunity to stay once the Title 42 barrier is removed — regardless of the huge damage he inflicts on ordinary Americans, and finally, Black Lives Matter took $6 million and dropped it on a 6,500-square-foot mansion… for black lives I'm sure. But first, our Fight Laugh Feast Magazine is a quarterly issue that packs a punch like a 21-year Balvenie, no ice. We don't water down our scotch, why would we water down our theology? Order a yearly subscription for yourself and then send a couple of yearly subscriptions to your friends who have been drinking lukewarm evangelical Kool-Aid. Every quarter we promise quality food for the soul, wine for the heart, and some Red Bull for turning over tables. Our magazine will include cultural commentary, a Psalm of the quarter, recipes for feasting, laughter sprinkled throughout the glossy pages, and more. Subscribe today at flfnetwork.com/product/fight-laugh-feast-magazine/.

Germany on the Brink of Food Crisis as Prices Increase by 20–50% https://neonnettle.com/news/18731-germany-on-the-brink-of-food-crisis-as-prices-increase-by-20-50- Germany is stepping toward the return of the dreaded Weimar hyperinflation as consumers brace themselves for massive price hikes of a whopping 20 to 50% on everyday goods and groceries. Even before the war in Ukraine, prices soared by five percent "across the product range" due to increased energy prices, HDE President Josef Sanktjohanser warned on Friday. Russia's invasion is now pummeling economies and the supply chain as more price increases are on the horizon. "The second wave of price increases is coming, and it will certainly be in double figures," Sanktjohanser warned. The first retail chains have already started to raise their prices in Germany, and the rest will likely follow, according to the president of the trade association. Popular retail chains such as Aldi, Edeka, and Globus announced they would raise their prices. From today, meat and butter will be "significantly more expensive" at Aldi due to price hikes from suppliers. Earlier this year, it was announced that Germany's cost of living rose at the highest rate since reunification, with everyday goods increasing by an average of 7.3%. According to the federal statistics agency Destatis, the jump from January's figure of 5.1 percent to February's 7.3 percent reflected the impact of Russia's invasion of Ukraine, which sent oil and gas prices soaring. According to a recently published survey by the Ifo Institute, almost all companies in Germany's food retail sector are planning price increases.

CDC warns of a steep decline in teen mental health https://www.washingtonpost.com/education/2022/03/31/student-mental-health-decline-cdc/ This is from the Washington Post: The Centers for Disease Control and Prevention is warning of an accelerating mental health crisis among adolescents, with more than 4 in 10 teens reporting that they feel "persistently sad or hopeless," and 1 in 5 saying they have contemplated suicide, according to the results of a survey published Thursday. "These data echo a cry for help," said Debra Houry, a deputy director at the CDC. She also added that the COVID-19 pandemic has created traumatic stressors that have the potential to further erode students' mental well-being… yes, I'm sure that it doesn't have anything to do with the numerous mandates or shutdowns that you imposed, Debra… The findings draw on a survey of a nationally representative sample of 7,700 teens conducted in the first six months of 2021, when they were in the midst of their first full pandemic school year. They were questioned on a range of topics, including their mental health, alcohol and drug use, and whether they had encountered violence at home or at school. They were also asked about whether they had encountered racism. Although young people were spared the brunt of the virus — falling ill and dying at much lower rates than older people — they might still pay a steep price for the pandemic, having come of age while weathering isolation, uncertainty, economic turmoil and, for many, grief. The article then goes on to talk about race and LGBTQ students, because we HAVE to force-feed kids sex and racial identity politics, right? Folks… get your kids out of government-run schools. End of story.

Border Chief Alejandro Mayorkas' Leaked Title 42 Plan: Ensure Migrants Get 'Any' Way to Stay https://www.breitbart.com/economy/2022/04/05/mayorkas-title-42-strategy-ensure-migrants-get-any-way-to-stay/ This is from Breitbart: Border chief Alejandro Mayorkas is directing that economic migrants get every opportunity to stay once the Title 42 barrier is removed — regardless of the huge damage he inflicts on ordinary Americans. Mayorkas' intentions are described in his February strategy, which was leaked to Breitbart on April 4. The February strategy is titled "DHS Southwest Border Mass Irregular Migration Contingency Plan," and it says on page 16, under "Secretary's Intent": The purpose of this plan is to describe a proactive approach that humanely prevents and responds to surges in irregular migration across the U.S. [southern border]. This will be done while ensuring that migrants can apply for any form of relief or protection [emphasis added] for which they may be eligible, including asylum, withholding of removal, and protection from removal under the regulations implementing United States obligations under the Convention Against Torture. To maximize benefits for migrants, Mayorkas minimizes the detention and deportation of migrants — even though federal law generally denies the entry of foreign workers and economic migrants into Americans' homeland. His plan sketches ways for border officials to squeeze many migrants through small doorways in the nation's border. Former immigration judge Andrew Arthur told Breitbart News in an interview that the parole side-door "is a very limited authority that Congress has given for exceptional situations," such as a sick airline passenger. It "is very narrowly written [for small numbers of people], but the administration has blown right past the limitations," he said. In February, up to 165,000 migrants arrived at the border, and Mayorkas admitted 74,000 under various legal claims. Very few of the arrivals were detained, and few prior arrivals were deported, despite the federal law. On April 26, the Supreme Court will consider a judgment by federal judges that seeks to make Mayorkas comply with federal law. The Cuban-born Mayorkas is a pro-migration zealot who argued in 2013 that Americans' homeland "always has been, and forever will remain a nation of immigrants." Only about one-third of Americans accept the "nation of immigrants" narrative, according to a survey by a pro-migration group. His plan ignores the reasonable and rational economic concerns of at least 100 million citizens of the United States.

Let's take a moment to talk tech… The world in which we live is moving towards total techno-tyranny at an incredible rate. This tyranny includes spying, censorship, and data theft, all through electronic means. Iron Apples is a small cybersecurity consultancy firm seeking to give churches, organizations, businesses, schools, and individuals the education, resources, and tools needed to be able to circumvent techno-tyranny. Over the next year, Iron Apples will be hosting a series of virtual meetings to inform, educate, and equip attendees with actual solutions to the problem we find ourselves in. Visit ironapples.com, and click Events, in the bottom right corner of their site, to sign up today! That's ironapples.com.

Black Lives Matter took $6 million and dropped it on a 6,500-square-foot mansion and tried to cover it up using a shell corporation https://notthebee.com/article/black-lives-matter-took-6-million-of-the-groups-money-and-dropped-it-on-a-6500-square-foot-mansion-and-then-tried-to-cover-it-up-using-a-shell-corporation?fbclid=IwAR1VNFuJTOOpbWOz8kEXeaziqSUUttVyhKgMc1uGVDrNWEjkUCfirPNF9NE This is from our friends at Not the Bee… It's far from a box, with more than 6,500 square feet, more than half a dozen bedrooms and bathrooms, several fireplaces, a soundstage, a pool and bungalow, and parking for more than 20 cars, according to real-estate listings. The California property was purchased for nearly $6 million in cash in October 2020 with money that had been donated to [Black Lives Matter Global Network Foundation]. The transaction has not been previously reported, and Black Lives Matter's leadership had hoped to keep the house's existence a secret. Documents, emails, and other communications I've seen about the luxury property's purchase and day-to-day operation suggest that it has been handled in ways that blur, or cross, boundaries between the charity and private companies owned by some of its leaders. It creates the impression that money donated to the cause of racial justice has been spent in ways that benefit the leaders of Black Lives Matter personally. On top of this… they tried to cover it all up! [A] man named Dyane Pascall purchased the seven-bedroom house that would become known as Campus. According to California business-registration documents, Pascall is the financial manager for Janaya and Patrisse Consulting, an LLC run by Cullors and her spouse, Janaya Khan; Pascall is also the chief financial officer for Trap Heals, a nonprofit led by Damon Turner, the father of Cullors's only child. Within a week, Pascall transferred ownership of the house to an LLC established in Delaware by the law firm Perkins Coie. The maneuver ensured that the ultimate identity of the property's new owner was not disclosed to the public… https://nypost.com/2022/04/05/the-6-million-mansion-blm-reportedly-bought-with-donated-funds/ Here are some photos of the property provided by the NY Post… Nothing puts a varnish on your nominal civil rights nonprofit like buying a place where Marilyn Monroe and Humphrey Bogart once slept.

Before we wrap up here, I wanted to start something new with the Daily News Brief, and my co-hosts can use it in their briefs if they want: an "On This Day in History" segment! So, without further ado, on this day in history, April 7, 1969, the internet was born! Here's a live look: https://www.youtube.com/watch?v=Q6ctb-Pb3lc The publication of the first "request for comments," or RFC, documents paved the way for the birth of the internet. April 7 is often cited as a symbolic birth date of the net because the RFC memoranda contain research, proposals, and methodologies applicable to internet technology. RFC documents provide a way for engineers and others to kick around new ideas in a public forum; sometimes, these ideas are adopted as new standards by the Internet Engineering Task Force. One interesting aspect of the RFC series is that each document is issued a unique serial number. An individual paper cannot be overwritten; rather, updates or corrections are submitted in a separate RFC. The result is an ongoing historical record of the evolution of internet standards. When it comes to the birth of the net, Jan. 1, 1983, also has its supporters. On that date, the National Science Foundation's university network backbone, a precursor to the World Wide Web, became operational. So maybe April 7th is just a precursor… I'll leave that for you to decide. Thanks for tuning in for this CrossPolitic Daily News Brief. If you liked the show, share it. If you want to come to our conference in Knoxville, Tennessee, October 6-8, you can sign up now at flfnetwork.com/knoxville2022, and if you aren't a club member, you should sign up now, because if you're a club member, you'll get $100 off of your conference ticket! Plus you really should already be a club member… and as always, if you're a business owner or CEO and want to become a corporate partner, email me at garrison@fightlaughfeast.com. Thanks for tuning in, and have a great rest of your day. Lord bless.

Google Cloud Platform Podcast
Fathers of the Internet with Vint Cerf

Google Cloud Platform Podcast

Play Episode Listen Later Mar 23, 2022 41:00


This week, Stephanie Wong and Anthony Bushong introduce a special podcast of the Gtalk at Airbus speaker series, where prestigious Googlers have been invited to talk with Airbus. In this episode, Vint Cerf, who is widely regarded as one of the fathers of the Internet, talks with Rhys Phillips of Airbus and fellow Googler Rafael Lami Dozo. Vint tells us about his journey to Google, including his interest in science, which stemmed from a chemistry set he received as a child. After high school, he got a job writing data analysis software on the Apollo project. His graduate work at UCLA led him to the ARPANet project, where he developed host protocols, and eventually to his work on the original Internet with Bob Kahn. Vint tells us about the security surrounding this project and the importance of internet security still today. The open architecture of the internet then and now excites Vint because it allows new, interesting projects to contribute without barriers. Vint is also passionate about accessibility. At Google, he and his team continue to make systems more accessible by listening to clients and adapting software to make it usable. He sees an opportunity to train developers to optimize software to work with common accessibility tools like screen readers to ensure better usability. Later, Vint tells us about the Interplanetary Internet, describing how this system is being built to provide fast, effective Internet to every part of the planet. Along with groups like the Internet Engineering Task Force, this new Internet is being deployed and tested now to ensure it works as expected. He talks about his work with NASA and other space agencies to grow the Interplanetary Internet. Digital obsolescence is another type of accessibility that concerns Vint. Over time, the loads of data we store and their various storage devices could become unreadable. Software needed to use or see this media could no longer be supported as well, making the data inaccessible. Vint hopes we will begin practicing ways to perpetuate the existence of this data through copying and making software more backward compatible. He addresses the issues with this, including funding.

Vint Cerf: While at UCLA, Vint Cerf worked on ARPANet - the very beginnings of what we know as the internet today - and is now, fittingly, Chief Internet Evangelist & VP at Google. He is an American Internet pioneer and is recognized as one of "the fathers of the Internet", sharing this title with TCP/IP co-developer Bob Kahn.

Rhys Phillips: Rhys Phillips is Change and Adoption Leader, Digital Workplace at Airbus.

Rafael Lami Dozo: Rafael Lami Dozo is Customer Success Manager, Google Cloud Workspace for Airbus.

Cool things of the week: Celebrating Pi Day with Cloud Functions blog; Apollo Scales GraphQL Platform using GKE blog

Interview: Vinton G. Cerf Profile site; ARPANet on Wikipedia site; To Boldly Go Where No Internet Protocol Has Gone Before article; Building the backbone of an interplanetary internet video; IETF site; CCSDS site; IPNSIG site; The Internet Society site; NASA site

What's something cool you're working on? Stephanie is working on new Discovering Data Centers videos. Anthony is working on content for building scalable GKE clusters.

Hosts: Stephanie Wong and Anthony Bushong

The Anti-Dystopians
Human rights and internet infrastructure 

The Anti-Dystopians

Play Episode Listen Later Nov 29, 2021 41:54


Alina Utrata talks to Dr Corinne Cath-Speth, a recent graduate of the doctoral program at the Oxford Internet Institute (OII) and a cultural anthropologist whose research focuses on Internet infrastructure politics, engineering cultures, and technology policy and governance. They discuss the Internet Engineering Task Force (IETF): What is it? What are internet protocols? And how can infrastructure uphold or harm human rights?

You can follow Corinne Cath-Speth on Twitter @C___CS, Alina Utrata on Twitter @alinautrata, and the Anti-Dystopians podcast on Twitter @AntiDystopians. Or sign up for the AD email newsletter: bit.ly/3kuGM5X

All episodes of the Anti-Dystopians are hosted and produced by Alina Utrata, and are freely available to all listeners. To support the production of the show, visit: bit.ly/3AApPN4

Selected reading and articles by Corinne Cath-Speth:
Corinne Cath on Internet governance cultures: https://hackcur.io/whats-wrong-with-loud-men-talking-loudly-the-ietfs-culture-wars/
Suzanne van Geuns and Corinne Cath, article for the Brookings Institute: https://www.brookings.edu/techstream/how-hate-speech-reveals-the-invisible-politics-of-internet-infrastructure/
Report on a workshop organized by Beatrice Martini, Niels ten Oever, and Corinne Cath: https://data-activism.net/2019/12/off-the-beaten-path-human-rights-advocacy-to-change-the-internet-infrastructure/
'Changing minds and machines: a case study of human rights advocacy in the Internet Engineering Task Force'. PhD thesis, University of Oxford. https://corinnecath.com/wp-content/uploads/2021/09/CathCorinne-Thesis-DphilInformationCommunicationSocialSciences.pdf
'The Technology We Choose to Create: Human Rights Advocacy in the Internet Engineering Task Force'. Telecommunications Policy 45, no. 6 (1 July 2021): 102144. https://doi.org/10.1016/j.telpol.2021.102144

Music: Nowhere Land by Kevin MacLeod. Link: https://incompetech.filmmusic.io/song/4148-nowhere-land License: http://creativecommons.org/licenses/by/4.0/

Hosted on Acast. See acast.com/privacy for more information.

The History of Computing
A broad overview of how the Internet happened

The History of Computing

Play Episode Listen Later Jul 12, 2021 29:45


The Internet is not a simple story to tell. In fact, every sentence here is worthy of an episode if not a few. Many would claim the Internet began back in 1969 when the first node of the ARPAnet went online. That was the year we got the first color pictures of Earth from Apollo 10 and the year Nixon announced the US was leaving Vietnam. It was also the year of Stonewall, the moon landing, the Manson murders, and Woodstock. A lot was about to change. But maybe the story of the Internet starts before that, when the basic research to network computers began as a means of networking nuclear missile sites with fault-tolerant connections in the event of, well, nuclear war. Or the Internet began when a T3 backbone was built to host all the data. Or the Internet began with the telegraph, when the first data was sent over electric current. Or maybe the Internet began when the Chinese used fires to send messages across the Great Wall of China. Or maybe the Internet began when drums sent messages over long distances in ancient Africa, like early forms of packets flowing over Wi-Fi-esque sound waves. We need to make complex stories simpler in order to teach them, so if the first node of the ARPAnet in 1969 is where this journey should end, feel free to stop here. To dig in a little deeper, though, that ARPAnet was just one of many networks that would merge into an interconnected network of networks. We had dialup providers like CompuServe, America Online, and even The WELL. We had regional timesharing networks like the DTSS out of Dartmouth University and PLATO out of the University of Illinois, Champaign-Urbana. We had corporate time sharing networks and systems. Each competed or coexisted or took time from others or pushed more people to others through their evolutions. Many used their own custom protocols for connectivity. But most were walled gardens, unable to communicate with the others. So if the story is more complicated than that the ARPAnet was the ancestor to the Internet, why is that the story we hear? Let's start that journey with a memo that we did an episode on, called "Memorandum For Members and Affiliates of the Intergalactic Computer Network," sent by JCR Licklider in 1963, which can be considered the allspark that lit the bonfire called the ARPANet. Which isn't exactly the Internet but isn't not. In that memo, Lick proposed a network of computers available to research scientists of the early 60s: scientists from computing centers that would evolve into supercomputing centers, and then a network open to the world, even our phones, televisions, and watches. It took a few years, but eventually ARPA brought in Larry Roberts, and by late 1968 ARPA awarded an RFQ to build a network to a company called Bolt Beranek and Newman (BBN), who would build Interface Message Processors, or IMPs. The IMPs were computers that connected a number of sites and routed traffic. The first IMP, which might be thought of more as a network interface card today, went online at UCLA in 1969, with additional sites coming on frequently over the next few years. That system would become ARPANET. The first node of ARPAnet went online at the University of California, Los Angeles (UCLA for short). It grew as leased lines and more IMPs became available. As they grew, the early computer scientists realized that each site had different computers running various and random stacks of applications and different operating systems, so we needed to standardize certain aspects of connectivity between different computers.
Given that UCLA was the first site to come online, Steve Crocker from there began organizing notes about protocols and how systems connected with one another in what they called RFCs, or Requests for Comments. That series of notes was then managed by a team that included Elizabeth (Jake) Feinler from Stanford once Doug Engelbart's project on the "Augmentation of Human Intellect" at Stanford Research Institute (SRI) became the second node to go online. SRI developed a Network Information Center, where Feinler maintained a list of host names (which evolved into the hosts file) and a list of address mappings, which would later evolve into the functions of InterNIC, which would be turned over to the US Department of Commerce when the number of devices connected to the Internet exploded. Feinler and Jon Postel from UCLA would maintain those, though, until Postel's death 28 years later, and those RFCs include everything from opening terminal connections into machines to file sharing to addressing, and now any place where the networking needs to become a standard. The development of many of those early protocols that made computers useful over a network was also being funded by ARPA. They funded a number of projects to build tools that enabled the sharing of data, like file sharing, and some advancements were loosely connected by people just doing things to make them useful, and so by 1971 we also had email. But all those protocols needed to flow over a common form of connectivity that was scalable. Leonard Kleinrock, Paul Baran, and Donald Davies were independently investigating packet switching, and Roberts brought Kleinrock into the project, as he was at UCLA. Bob Kahn entered the picture in 1972. He would team up with Vint Cerf from Stanford, who came up with encapsulation, and together they would define the protocol that underlies the Internet, TCP/IP. By 1974 Vint Cerf and Bob Kahn wrote RFC 675, where they coined the term internet as shorthand for internetwork. The number of RFCs was exploding, as was the number of nodes: the University of California, Santa Barbara came online, then the University of Utah, connecting Ivan Sutherland's work. The network was national when BBN connected to it in 1970. Now there were 13 IMPs, and by 1971, 18, then 29 in '72 and 40 in '73. Once the need arose, Kleinrock would go on to work with Farouk Kamoun to develop hierarchical routing theories in the late 70s. By 1976, ARPA became DARPA. The network grew to 213 hosts in 1981, and by 1982 TCP/IP became the standard for the US DOD; in 1983, ARPANET moved fully over to TCP/IP. And so TCP/IP, or Transmission Control Protocol/Internet Protocol, is the most dominant networking protocol on the planet. It was written to help improve performance on the ARPAnet, with the ingenious idea to encapsulate traffic. But in the 80s, it was just for researchers still. That is, until NSFNet was launched by the National Science Foundation in 1986. And it was international, with the University College of London connecting in 1971, which would go on to inspire a British research network called JANET that built its own set of protocols called the Coloured Book protocols. And the Norwegian Seismic Array connected over satellite in 1973. So networks were forming all over the place, often just time sharing networks where people dialed into a single computer. Another networking project going on at the time, which was also getting funding from ARPA as well as the Air Force, was PLATO.
Out of the University of Illinois, PLATO was meant for teaching and began on a mainframe in 1960. But by the time ARPAnet was growing, PLATO was on version IV and running on a CDC Cyber. The time sharing system hosted a number of courses, as they referred to programs. These included actual courseware, games, content with audio and video, message boards, instant messaging, custom touch screen plasma displays, and the ability to dial into the system over lines, making the system another early network. In fact, there were multiple CDC Cybers that could communicate with one another. And many on ARPAnet also used PLATO, cross-pollinating non-defense-backed academia with a number of academic institutions. The defense backing couldn't last forever. The Mansfield Amendment in 1973 banned general research by defense agencies. This meant that ARPA funding started to dry up and the scientists working on those projects needed a new place to fund their playtime. Bob Taylor split to go work at Xerox, where he was able to pick the best of the scientists he'd helped fund at ARPA. He helped bring in people from Stanford Research Institute, where they had been working on the oN-Line System, or NLS, and people like Bob Metcalfe, who brought us Ethernet and better collision detection. Metcalfe would go on to found 3Com, a great switch and network interface company during the rise of the Internet. But there were plenty of people who could see the productivity gains from ARPAnet and didn't want it to disappear. And the National Science Foundation (NSF) was flush with cash. And the ARPA crew was increasingly aware of non-defense oriented use of the system. So the NSF started up a little project called CSNET in 1981 so the growing number of supercomputers could be shared between all the research universities. It was free for universities that could get connected, and from 1985 to 1993 NSFNET surged from 2,000 users to 2,000,000 users. Paul Mockapetris made the Internet easier to use than when it was an academic-only network by developing the Domain Name System, or DNS, in 1983. That's how we can call up remote computers by names rather than IP addresses. And of course DNS was yet another of the protocols in Postel's list of protocol standards at UCLA, stewardship of which, by 1986, after the selection of TCP/IP for NSFnet, would pass to the standardization body known as the IETF, or Internet Engineering Task Force for short. Maintaining a set of protocols that all vendors needed to work with was one of the best growth hacks ever. No vendor could have kept up with demand with a 1,000x growth in such a small number of years. NSFNet started with six nodes in 1985, connected by LSI-11 Fuzzball routers, and quickly outgrew that backbone. They put it out to bid, and Merit Network won out in a partnership between MCI, the State of Michigan, and IBM. Merit had begun before the first ARPAnet connections went online as a collaborative effort by Michigan State University, Wayne State University, and the University of Michigan. They'd been connecting their own machines since 1971 and had implemented TCP/IP and bridged to ARPANET. The money was getting bigger: they got $39 million from NSF to build what would emerge as the commercial Internet. They launched in 1987 with 13 sites over 14 lines. By 1988 they'd gone nationwide, going from a 56k backbone to a T1 and then 14 T1s. But the growth was too fast for even that. They re-engineered, and by 1990 planned to add T3 lines running in parallel with the T1s for a time.
By 1991 there were 16 backbones, with traffic and users growing by an astounding 20% per month. Vint Cerf ended up at MCI, where he helped lobby for the privatization of the internet and helped found the Internet Society in 1992. The lobby worked and led to the Scientific and Advanced-Technology Act in 1992. Before that, use of NSFNET was supposed to be for research, and now it could expand to non-research and education uses. This allowed NSF to bring on even more nodes. And so by 1993 it was clear that this was growing beyond what a governmental institution whose charge was science could justify as "research" any longer. By 1994, Vint Cerf was designing the architecture and building the teams that would build the commercial internet backbone at MCI. And so NSFNET began the process of unloading the backbone and helped the world develop the commercial Internet by sprinkling a little money and know-how throughout the telecommunications industry, which was about to explode. NSFNET went offline in 1995, but by then there were networks in England, South Korea, Japan, and Africa, and CERN was connected to NSFNET over TCP/IP. And Cisco was selling routers that would fuel an explosion internationally. There was a war of standards, and yet over time we settled on TCP/IP as THE standard. And those were just some of the nets. The Internet is really not just NSFNET or ARPANET but a combination of a lot of nets. At the time there were a lot of time sharing computers that people could dial into, and following the release of the Altair, there was a rapidly growing personal computer market with modems becoming more and more approachable towards the end of the 1970s. You see, we talked about these larger networks but not hardware. The first modulator-demodulator, or modem, was the Bell 101 dataset, which had been invented all the way back in 1958, loosely based on a previous model developed to manage SAGE computers. But the transfer rate, or baud, had stopped being improved upon at 300 for almost 20 years, and not much had changed. That is, until Hayes Microcomputer Products released a modem designed to run on the Altair 8800 S-100 bus in 1978. Personal computers could talk to one another. And one of those Altair owners was Ward Christensen, who met Randy Suess at the Chicago Area Computer Hobbyists' Exchange, and the two of them had this weird idea: have a computer host a bulletin board on one of their computers. People could dial into it and discuss their Altair computers when it snowed too much to meet in person for their club. They started writing a little code, and before you know it we had a tool they called Computerized Bulletin Board System software, or CBBS. The software and, more importantly, the idea of a BBS spread like wildfire right along with the Atari, TRS-80, Commodore, and Apple computers that were igniting the personal computing revolution. The number of nodes grew, and as people started playing games, the speed of those modems jumped up, with the v.32 standard hitting 9600 baud in '84 and over 25k in the early 90s. By the early 1980s, we got FidoNet, which was a network of Bulletin Board Systems, and by the early 90s we had 25,000 BBSs. And other nets had been on the rise. And these were commercial ventures. The largest of those dial-up providers was America Online, or AOL. AOL began in 1985 and, like most of the other dial-up providers of the day, was there to connect people to a computer it hosted, like a timesharing system, and give access to fun things.
Games, news, stocks, movie reviews, chatting with your friends, etc. There was also CompuServe, The Well, PSINet, Netcom, Usenet, Alternate, and many others. Some started to communicate with one another with the rise of the Metropolitan Area Exchanges, who got an NSF grant to establish switched ethernet exchanges, and the Commercial Internet Exchange in 1991, established by PSINet, UUNet, and CERFnet out of California. Those slowly moved over to the Internet, and even AOL got connected to the Internet in 1989, and thus the dial-up providers went from effectively being timesharing systems to Internet Service Providers as more and more people expanded their horizons away from the walled garden of the time sharing world and towards the Internet. The number of BBS systems started to wind down. All these IP addresses couldn't be managed easily, and so address management evolved out of being handled by contracts from research universities to DARPA and then to IANA as a part of ICANN, and eventually to the development of Regional Internet Registries, so that AFRINIC could serve Africa; ARIN could serve Antarctica, Canada, the Caribbean, and the US; APNIC could serve South, East, and Southeast Asia as well as Oceania; LACNIC could serve Latin America; and RIPE NCC could serve Europe, Central Asia, and West Asia. By the 90s the Cold War was winding down (temporarily at least), so they even added Russia to RIPE NCC. And so, using tools like WinSock, any old person could get on the Internet by dialing up. Modems for dial-up transitioned to DSL and cable modems. We got the emergence of fiber with regional centers and even national FiOS connections. And because of all the hard work of all of these people and the money dumped into it by the various governments and research agencies, life is pretty darn good. When we think of the Internet today we think of this interconnected web of endpoints and content that is all available. Much of that was made possible by the development of the World Wide Web by Tim Berners-Lee in 1991 at CERN, and Mosaic came out of the National Center for Supercomputing Applications, or NCSA, at the University of Illinois, quickly becoming the browser everyone wanted to use until Marc Andreessen left to form Netscape. Netscape's IPO is probably one of the most pivotal moments, where investors from around the world realized that all of this research and tech was built on standards, and while there were some patents, the standards were freely usable by anyone. Those standards led to an explosion of companies like Yahoo!, from a couple of Stanford grad students, and Amazon, started by a young hedge fund Vice President named Jeff Bezos, who noticed all the money pouring into these companies and went off to do his own thing in 1994. The rush of companies that arose to create and commercialize content and ideas, to bring every industry online, was ferocious. And there were the researchers still writing the standards, and even commercial interests helping with that. And there were open source contributors who helped make some of those standards easier for regular old humans to implement. And tools for those who build tools. And from there the Internet became what we think of today. Quicker and quicker connections and more and more productivity gains, a better quality of life, better telemetry into all aspects of our lives, and, with the miniaturization of devices, support for wearables that even extends to our bodies. Yet still sitting on the same fundamental building blocks as before.
The IANA functions that manage IP addressing have moved to the private sector, as have many an onramp to the Internet, especially as internet access has become more ubiquitous and we enter the era of 5G connectivity. And it continues to evolve as we pivot due to new needs and the threats a globally connected world represents: IPv6, various secure DNS options, options for spam and phishing, and dealing with the equality gaps surfaced by our new online world. We have disinformation, so sometimes we might wonder what's real and what isn't. After all, any old person can create a web site that looks legit and put whatever they want on it. Who's to say what reality is other than what we want it to be? This was pretty much what Morpheus was offering with his choices of pills in The Matrix. But underneath it all, there's history. And it's a history as complicated as unraveling the meaning of an increasingly digital world. And it is wonderful and frightening and lovely and dangerous and true and false and destroying the world and saving the world all at the same time. This episode is pretty simplistic, and many of the aspects we cover have entire episodes of the podcast dedicated to them, from the history of Amazon to Bob Taylor to AOL to the IETF to DNS and even Network Time Protocol. It's a story that leaves people out necessarily; otherwise scope creep would go all the way back to include Volta and the constant electrical current humanity received with the battery. But hey, we also have an episode on that! And many an advance has plenty of books and scholarly works dedicated to it, all the way back to the first known computer (in the form of clockwork), the Antikythera Device out of Ancient Greece. Heck, even Louis Gerstner deserves a mention for selling IBM's stake in all this to focus on things that kept the company going, not moonshots. But I'd like to dedicate this episode to everyone not mentioned due to trying to tell a story of emergent networks. Just because they were growing fast and our modern infrastructure was becoming more and more deterministic doesn't mean that, whether it was writing a text editor or helping fund or pushing paper or writing specs or selling network services or getting zapped while trying to figure out how to move current, there aren't so, so, so many people that are a part of this story. Each with their own story to be told. As we round the corner into the third season of the podcast we'll start having more guests. If you have a story and would like to join us, use the email button on thehistoryofcomputing.net to drop us a line. We'd love to chat!

Daily Tech News Show
WhatsApp Won't Limit Functionality Over New Privacy Policy - DTH

Daily Tech News Show

Play Episode Listen Later May 31, 2021 5:15


WhatsApp says it has no plans to limit functionality for users who don't agree to its new privacy policy, Instagram will give equal algorithmic priority to original and re-shared Stories content, and the Internet Engineering Task Force published the QUIC protocol as a standard.  See acast.com/privacy for privacy and opt-out information.

Daily Tech Headlines
WhatsApp Won’t Limit Functionality Over New Privacy Policy – DTH

Daily Tech Headlines

Play Episode Listen Later May 31, 2021


WhatsApp says it has no plans to limit functionality for users who don't agree to its new privacy policy, Instagram will give equal algorithmic priority to original and re-shared Stories content, and the Internet Engineering Task Force published the QUIC protocol as a standard.

PHP Internals News
PHP Internals News: Episode 60: OpenSSL CMS Support

PHP Internals News

Play Episode Listen Later Jul 2, 2020


PHP Internals News: Episode 60: OpenSSL CMS Support London, UK Thursday, July 2nd 2020, 09:23 BST In this episode of "PHP Internals News" I chat with Eliot Lear (Twitter, GitHub, Website) about OpenSSL CMS support, which he has contributed to PHP. The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode's MP3 file, and it's available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news Transcript Derick Rethans 0:16 Hi, I'm Derick, and this is PHP Internals News, a weekly podcast dedicated to demystifying the development of the PHP language. This is Episode 60. Today I'm talking with Eliot Lear about adding OpenSSL CMS support to PHP. Hello Eliot, would you please introduce yourself? Eliot Lear 0:34 Hi Derick, it's great to be here. My name is Eliot Lear, I'm a principal engineer for Cisco Systems working on IoT security. Derick Rethans 0:41 I saw somewhere on the internet, Wikipedia I believe, that you also did some RFCs - not PHP RFCs, but internet RFCs. Eliot Lear 0:49 That's correct. I have a few out there; I'm a jack of all trades but master of none. Derick Rethans 0:53 The one that piqued my interest was the one for the timezone database, because I added timezone support to PHP a long, long time ago. Eliot Lear 1:01 That's right, there's a whole funny story about that RFC; we will have to save it for another time. But there are a lot of heroes out there in the volunteer world who keep that database up to date, and currently they're corralled and coordinated by a lovely gentleman by the name of Paul Eggert. If you're not a member of that community, it's really a wonderful contribution to make, and they need people all around the world to send in information. But I guess that's not why we're here today. Derick Rethans 1:29 But I'm happy to chat about that at some other point in the future. Today we're talking about CMS support in OpenSSL - and the first time I saw CMS, I don't think it meant content management system here. Eliot Lear 1:41 No, it stands for cryptographic message syntax, and it is the follow-on to earlier work which people will know as PKCS#7. So it's a way in which one can transmit and receive encrypted information, or just signed information. Derick Rethans 1:58 How do CMS and PKCS#7 differ from each other? Eliot Lear 2:03 Actually, not too many differences. Externally, the envelope or the structure of the message is slightly better formed, and the people who worked on that at the Internet Engineering Task Force were essentially just making incremental improvements to make sure that there was good interoperability, good support for email, encrypted email, and signed email, and for other purposes as well. So they're relatively modest but important improvements from PKCS#7. Derick Rethans 2:39 How old are these two standards? Eliot Lear 2:42 Goodness. I'm not sure actually how old PKCS#7 is, but CMS dates back, gosh, probably a decade or so; I'd have to go look. I'm sorry if I don't have the answer to that one. Derick Rethans 2:56 A ballpark figure works fine for me. Why would you want to use CMS over the older PKCS#7? Eliot Lear 3:02 You know, truthfully, I'm not a cryptographer, so the reason I used it was because it was the latest and greatest thing for doing this sort of work. I'm an interdisciplinary person, so what I do is go find the experts and they tell me what to use. 
And believe it or not, I went and found the person who's the expert on cryptographic signatures, which is what I needed. I said: What should I use? He said: You should use CMS, and so that's what I did. Where I ran into some trouble, though, is that some of the tooling doesn't support CMS. In particular, PHP didn't support CMS. So that's why I got involved in the PHP project. Derick Rethans 3:40 You are a new contributor to the PHP project. What did you think of its interactions? Eliot Lear 3:45 I had a wonderful time doing the development. There was a fair amount of coding involved, and one has to understand that the underlying code here is OpenSSL, and OpenSSL's documentation for some of its interfaces could stand a little bit of improvement. I needed to do a fair amount of work and I needed a fair amount of review, so I got a lot of support from Jakub in particular, who looks after the OpenSSL code base as one of the maintainers. I really enjoyed the CI/CD integration, which allowed me to check the numerous environments that PHP runs on. I really enjoyed the community review, and even though I didn't really have to do one in my case, I did do an RFC as part of the PHP development process, which essentially forced me to write really good documentation - or at least I hope it's really good - for all of the caller interfaces that I defined. So it was a really enjoyable experience. I really liked working with the team. Derick Rethans 4:47 That's good to hear. Although an RFC wasn't particularly necessary here, I always find writing down the requirements that I have for my own software first, even though this doesn't get publicized and nobody's going to review it, very useful to just clear my head and see what's going on there. Eliot Lear 5:06 Yeah, I think that's a good approach. Derick Rethans 5:07 During the review, was there a lot of feedback where you weren't quite sure, or what was the best feedback that you got during this process? Eliot Lear 5:15 The biggest issue that we had was how to handle streaming. We have some code in there now for streaming, but it's unlikely to get really heavily exercised in the way that the interfaces are defined right now. It's essentially a files in/files out interface, which mirrors the PKCS#7 interface. One of the future activities that I would like to take on, if I can find a little bit more time, is to move away from the files in/files out interface and rather use an in-memory structure or in-memory interface, so that it can actually take advantage of streaming and can be more memory efficient over time. Derick Rethans 5:56 When you say file, you mean you actually provide a file name to the functions? Eliot Lear 6:00 That's right. Depending on which of the interfaces you're using, there's an encrypt call, there's a decrypt call, there's a sign and a validate - or verify - call, and each of them has a slightly different interface. But if you're encrypting, you need to have the destination that you're encrypting to; these are all public key, you know, PKI-based approaches, so you have to have the destination certificates that you're sending to. If you're verifying, you need - I'm sorry - you need to have the public key chain, and if you're decrypting, you need to have the private key, to do all this. 
But they're all file names that are passed, and it's a bit of a limitation of the original interface, in that you probably don't really want to be passing file names to most of your functions; you'd rather be passing objects that are a bit better structured than that. Derick Rethans 6:53 Is the underlying OpenSSL interface similar, or does that allow for streaming in general? Eliot Lear 6:59 The C API allows for streaming and such. The command line interface - it doesn't seem to me that they do any particular things with streaming. If you look at the cryptographic interface that we did for CMS, mostly it is an attempt to provide the capability that you would otherwise have using the OpenSSL command line interface, and I think the nice thing here is that we can evolve from that point. Derick Rethans 7:26 And the improvements wouldn't only be implemented for the CMS mechanism, but also for PKCS#7, as well as others that are also available? Eliot Lear 7:35 Yes. Another area that I would like to look at - I'm not sure how easy it will be, we didn't try it this time - was to try and combine the code bases, because they are so close, and be a little bit more code efficient. But there are just slight enough differences in the caller interfaces between PKCS#7 and CMS that I'm not sure I could get away with using void functions for everything I have; I might have to have a lot of switches, or conditionals, in the code. But what I am interested in doing for both sets of code is, again, providing new interfaces where, instead of passing file names, you're passing memory structures of some form that can be used to stream. That's the future. Derick Rethans 8:22 I've been writing quite a bit of Go code in the last couple of months, and that interface is exactly the same: you provide file names to it, which I find kind of annoying, because I'm going to have to distribute these binaries at some point, and I don't really want any other dependencies in the form of files, so I need to figure out a way to do that without also providing those key files at some point. Eliot Lear 8:43 Indeed, that's an issue. And for those of us who are web developers - I did this because I was doing some web development - a lot of the stuff that I want to do, I just want to do in memory and then pass right back to the client, and I don't really want to have to go to the file system. Right now I'll have to take an extra step to go to the file system, and that's all right, it's not a big deal, but it'll be a little bit more elegant when I get away from that. We'll do that at an appropriate time. Derick Rethans 9:11 Yes, that sounds lovely. I'm not an expert in cryptography either. I saw that the RFC mentions X.509. How does it tie in with CMS and PKCS#7? Eliot Lear 9:21 X.509 is essentially a certificate standard; in fact, that's really what it is. A certificate essentially has a bunch of attributes, with a subject being one of those attributes, and a signature on top of the whole structure. And the signature comes from a signer, and the signer is essentially asserting all of these attributes on behalf of whoever sent the request. X.509 certificates are, for example, the core of our web authentication infrastructure. 
When you go to the bank online, it uses an X.509 certificate to prove to you that it is the bank that you intended to visit; that's the basis of this. CMS and PKCS#7 are structures that allow the X.509 standard to be serialized - there are the Distinguished Encoding Rules that are used underneath PKCS#7 and CMS - and CMS essentially was designed, at least in part, for mail transmission. So how is it that you indicate the certificate, the subject name, the content of the message? All of this information had to be formally described, and it had to be done in a way that is scalable. And the nice thing about X.509, as compared to, say, just using naked public keys, is that with naked public keys the verifier or the recipient has to have each individual public key, whereas X.509 uses the certificate hierarchy such that you only need to have the top of the chain, if you will, in order to validate a certificate. So X.509 scales amazingly well; we see that success all throughout the web. And that's what CMS and PKCS#7 help support. Derick Rethans 11:24 Like I said, I've never really done enough research into this, but I think it is something that many web developers should really know the workings of, because it comes back not only with mail, but also with HTTPS. Eliot Lear 11:35 It's another part of the code, right? So CMS isn't directly used for supporting TLS connections; there's a whole set of code inside of PHP for that. Derick Rethans 11:44 Would you have anything else to add? Eliot Lear 11:46 I would say a couple of things. The basis of this work was that I was attempting to create signatures for something called manufacturer usage descriptions. The reason I got involved with PHP is that I'm doing tooling that supports an IoT protection project, and these manufacturer usage descriptions essentially describe what an IoT device needs in terms of network access. The purpose of using PHP and adding the code that I added was so that those descriptions could be signed, and that's why Cisco, my employer, supported my activity. Now, Cisco loves giving back to the community; this was one way we could do so, and it's something I'm very proud of when it comes to our company. And so we're very happy to participate with the PHP project. I really enjoyed working with the team. Derick Rethans 12:33 Glad to hear it. I'm looking forward to some other API improvements, because I agree that the interfaces that the OpenSSL extension has aren't always the easiest to use, and I think it's important that encryption is easy to use, because then more people will use it, right? Eliot Lear 12:49 I have to say, in my opinion, the encryption interfaces that we have today are still relatively immature. And not just CMS - the code that I wrote, which is really, you know, fresh, it just got committed - but the whole category of interfaces is something that will evolve over time, and it's important that it do so, because the threats are evolving over time and people need to be able to use these interfaces. And we can't all be cryptographic experts; I'm not. I just use the code, but I needed to write some in order to use it in my case. As we go on, I think we'll enjoy richer and easier-to-use interfaces that normal developers can use without being experts. 
Derick Rethans 13:38 PHP has been going that way already a little bit, because we started having a simple random interface, and a simple way of doing hashes and verifying hashes, to make these things a lot easier - because we saw that lots of people were implementing their own ways in PHP code and pretty much messing it up, because, as you say, not everybody's a cryptographer. Eliot Lear 13:56 That's right. And that's a really good thing that PHP did, because, as you pointed out, it eliminates all the people who are going onto the net looking for the little snippet of code that they're going to include in PHP - whether that snippet is correct or not is a big issue. Derick Rethans 14:11 Absolutely. And cryptography is not something that you want to get wrong. Eliot Lear 14:15 That's right, because for every line of code that you've written in this space, there's going to be somebody who's going to want to attack it, maybe several. Derick Rethans 14:23 Absolutely. Thank you, Eliot, for taking the time this morning to talk to me about CMS support. Eliot Lear 14:28 It's been my pleasure, Derick, and thanks for having me on. And again, it was really enjoyable to work with the PHP team, and I'm looking forward to doing more. Derick Rethans 14:38 Thanks for listening to this instalment of PHP Internals News, the weekly podcast dedicated to demystifying the development of the PHP language. I maintain a Patreon account for supporters of this podcast, as well as the Xdebug debugging tool; you can sign up for Patreon at https://drck.me/patreon. If you have comments or suggestions, feel free to email them to derick@phpinternals.news. Thank you for listening, and I'll see you next week. Show Notes RFC: Add CMS Support Credits Music: Chipper Doodle v2 — Kevin MacLeod (incompetech.com) — Creative Commons: By Attribution 3.0
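As a footnote to the interview above: the contribution later shipped as PHP's openssl_cms_sign(), openssl_cms_verify(), openssl_cms_encrypt(), and openssl_cms_decrypt() functions, which take file names in exactly the files-in/files-out style discussed. For readers without a recent PHP build handy, here is a minimal Python sketch of the same operations driven through the OpenSSL command line that the PHP interface mirrors; all file names are hypothetical placeholders, and a working OpenSSL installation is assumed.

import subprocess

# Sketch: CMS signing and verification via the OpenSSL CLI, mirroring the
# files-in/files-out interface discussed in the episode. The file names
# (message.txt, cert.pem, key.pem, ca.pem) are hypothetical placeholders.
def cms_sign(infile, signer_cert, private_key, outfile):
    subprocess.run(
        ["openssl", "cms", "-sign", "-in", infile,
         "-signer", signer_cert, "-inkey", private_key, "-out", outfile],
        check=True)

def cms_verify(infile, ca_bundle):
    subprocess.run(
        ["openssl", "cms", "-verify", "-in", infile, "-CAfile", ca_bundle],
        check=True)

cms_sign("message.txt", "cert.pem", "key.pem", "signed.msg")
cms_verify("signed.msg", "ca.pem")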

The History of Computing
IETF: Guardians of the Internet

The History of Computing

Play Episode Listen Later Jan 3, 2020 9:13


Today we're going to look at what it really means to be a standard on the Internet and at the IETF, the governing body that sets those standards.  When you open a web browser and visit a page on the Internet, there are rules that govern how that page is interpreted. When traffic sent from your computer over the Internet gets broken into packets and encapsulated, other brands of devices can interpret the traffic and react, provided that the device is compliant in how it handles the protocol being used. Those rules are set in what are known as RFCs. It's a wild concept. You write rules down and then everyone follows them. Well, in theory. It doesn't always work out that way, but by and large the industry that sprang up around the Internet has been pretty good at following the guidelines defined in RFCs.  The Requests for Comments give the Internet industry an opportunity to collaborate in a non-competitive environment. We engineers often compete on engineering topics, like what's more efficient or stable, and so we're just as likely to disagree with people at our own organization as we are to disagree with people at another company. But if we can all meet and hash out our differences, we're able to get emerging or maturing technology standards defined in great detail, leaving as small a room for error in implementing the tech as possible. This standardization process can be lengthy and slows down innovation, but it ends up creating more innovation and adoption once processes and technologies become standardized.  The concept of standardizing advancements in technologies is nothing new. Alexander Graham Bell saw this when he started the American Institute of Electrical Engineers in 1884 to help standardize the new electrical inventions coming out of Bell Labs and others. That would merge with the Institute of Radio Engineers in 1963 and now boasts half a million members spread throughout nearly every company in the world. And the International Organization for Standardization was founded in 1947, as a merger of sorts between the International Federation of the National Standardizing Associations, which had been founded in 1928, and the newly formed United Nations Standards Coordinating Committee. Based in Geneva, they've now set over 20,000 standards across a number of industries.   I'll over-simplify this next piece and revisit it in a dedicated episode. The Internet began life as a number of US government funded research projects inspired by J.C.R. Licklider around 1962, out of ARPA's Information Processing Techniques Office, or IPTO. The packet switching network would evolve into ARPANET based on a number of projects he and his successor Bob Taylor at IPTO would fund straight out of the Pentagon. It took a few years, but eventually they brought in Larry Roberts, and by late 1968 they'd awarded an RFQ to a company called Bolt Beranek and Newman (BBN) to build Interface Message Processors, or IMPs, to connect a number of sites and route traffic. The first one went online at UCLA in 1969, with additional sites coming on frequently over the next few years.    Given that UCLA was the first site to come online, Steve Crocker started organizing notes about protocols in what they called RFCs, or Requests for Comments. That series of notes would then be managed by Jon Postel until his death 28 years later.    They were also funding a number of projects to build tools to enable the sharing of data, like file sharing, and by 1971 we also had email. 
Bob Kahn was brought in in 1972, and he would team up with Vinton Cerf from Stanford, who came up with encapsulation, and so they would define TCP/IP. In 1972, ARPA became DARPA; by 1982, TCP/IP became the standard for the US DOD, and in 1983, ARPANET moved over to TCP/IP.  NSFNET would be launched by the National Science Foundation in 1986.   And so it was in 1986 that the Internet Engineering Task Force, or IETF, was formed to do something similar to what the IEEE and ISO had done before them. By now, the inventors, coders, engineers, computer scientists, and thinkers had seen other standards organizations - they were able to take much of what worked and what didn't, and they were able to start defining standards.    They wanted an open architecture. The first meeting was attended by 21 researchers who were all funded by the US government. By the fourth meeting later that year they were inviting people from outside the hallowed halls of the research community. And it grew, with 4 meetings a year that continue on to today, open to anyone.   Because of the rigor practiced by Postel and early Internet pioneers, you can still read those notes from the working groups and RFCs from the 60s, 70s, and on. The RFCs were funded by DARPA grants until 1998 and then moved to the Internet Society, which runs the IETF, and the RFCs are discussed and sometimes ratified at those IETF meetings. You can dig into those RFCs and find the origins and specs for NTP, SMTP, POP, IMAP, TCP/IP, DNS, BGP, CardDAV and pretty much anything you can think of that's become an Internet Standard. A lot of companies claim to be "the" standard in something. And if they wrote the RFC, I might agree with them.    At those first dozen IETF meetings, we got up to about 120 people showing up. It grew with the advancements in routing, application protocols, other networks, and file standards, peaking in Y2K with 2,810 attendees. Now, it averages around 1,200. It's also now split into a number of working groups with steering committees. While the IETF was initially funded by the US government, it's now funded by the Public Interest Registry, or PIR, which was sold to Ethos Capital in November of 2019.    Here's the thing about the Internet Society and the IETF. They're mostly researchers. They have stayed true to the mission since they took over from the Pentagon: a decentralized Internet. The IETF is full of super-smart people who are always trying to be independent and non-partisan. That independence and non-partisanship is the true Internet, the reason that we can type www.google.com and have a page load, and work, no matter the browser. The reason mail can flow if you know an email address. The reason the Internet continues to grow and prosper and, for better or worse, take over our lives. The RFCs they maintain, the standards they set, and everything else they do is not easy work. They iterate and often don't get credit individually for their work, other than a first initial and a last name as the authors of papers.  And so thank you to the IETF and the men and women who put themselves out there through the lens of the standards they write. Without you, none of this would work nearly as well as it all does. And thank you, listeners, for tuning in for this episode of the History of Computing Podcast. We are so lucky to have you.

IT Manager Podcast (DE, german) - IT-Begriffe einfach und verständlich erklärt

The abbreviation FTP stands for File Transfer Protocol. More precisely, it is a network protocol that enables the transfer of data between a server and a client in an IP network. The original specification of the File Transfer Protocol was published on April 16, 1971 as RFC 114. RFC stands for Request for Comments and refers to a formal document of the Internet Engineering Task Force. In October 1985, RFC 959 introduced the specification of the File Transfer Protocol that is still valid today. This makes the File Transfer Protocol one of the oldest protocols associated with the Internet.  The File Transfer Protocol primarily serves to exchange files between a client and a server, or to transfer them between two servers. Several constellations are conceivable here: from server to client, from client to server, and from one server to another server - the latter usually being called a file exchange by means of the File Exchange Protocol. Once an FTP connection has been established, FTP users can not only upload and download files, but also create, modify, read, and delete directories. They can also rename, move, and delete files. In addition, the File Transfer Protocol enables permission management for files. That is, you can specify whether stored files may be read, changed, or executed only by the owner, by a particular group, or by the public. But let me explain file transfer by means of the File Transfer Protocol in a bit more detail. To reach an FTP server, a connection must first be established through user authentication and an FTP client. The FTP client is a piece of software that is integrated by default into most operating systems and uses the FTP protocol to transfer files. An FTP connection setup involves FTP establishing two separate connections between client and server. One connection is the control channel over TCP port 21. This channel serves exclusively to transmit FTP commands and their responses. The second connection is the data channel over TCP port 20. This channel serves exclusively to transfer data. In the first step, the control channel is established from the FTP client to the FTP server. Once the control channel is up, commands are sent from the client to the server, and the server's responses are transmitted back to the client. In the second step, a data connection is initiated from the FTP server to the FTP client in order to exchange the data as specified in the commands. As soon as the file transfers are complete, the connections are terminated by the user or by the server (timeout). Fundamentally, there are two different approaches to initiating a file transfer between client and server: the active and the passive connection mode. Both have in common that a control connection is established first, over which FTP commands are sent, and a data connection is subsequently established for the data transfer. The difference lies in who establishes these connections - client or server. In detail, it works as follows: In active connection mode, the client reserves 2 TCP ports. 
Over the first port, it establishes the control connection to port 21 of the server and tells the server the second port number, on which the client expects the data. In passive connection mode, the client reserves 2 TCP ports for its own use and establishes the control connection over the first port to port 21 of the server. Since a passive connection is desired, the client sends the PASV command from the FTP command set mentioned above. The server then knows that a passive connection is desired, whereupon it reserves a TCP port of its own for the data transfer and communicates this port to the client. Besides the active and the passive connection mode, FTP knows two different transfer modes: ASCII mode and binary mode, the two modes differing in the type of encoding. ASCII mode is used to transfer plain text files. Here, the line structure of the text has to be re-encoded; in this process, the character set of the file is adapted to the target system. Binary mode, on the other hand, transfers files byte by byte without changing the data. This mode is used most often, preferably of course for binary files. In addition, many FTP servers, above all servers at universities, offer so-called anonymous FTP. Here, in addition to the real user accounts, a special user account, typically "anonymous" and/or "ftp", is provided for logging in, for which no (or an arbitrary) password has to be given. It used to be considered good form to give your own valid e-mail address as the password for anonymous FTP. Most web browsers no longer do this today, since it is not advisable for spam-protection reasons. Before we come to the end of today's podcast, I would like to briefly explain the terms FTPS and SFTP, and above all why they are used. First of all, you need to know that although an FTP user always has to enter a username and password to establish an FTP connection, the transfer of files between client and server is always unencrypted. This means that with a little know-how and the right tools, the entire communication can easily be eavesdropped on and even altered. Therefore, an FTP connection can guarantee neither data privacy nor data integrity. To get around FTP's security problems, there are two options: the use of SFTP or FTPS. FTPS extends the FTP protocol with SSL encryption. But since FTPS also allows insecure FTP-based connections, it is not necessarily the most secure protocol. SFTP, in contrast, is based on a newer protocol: SSH (Secure Shell). Here, all connections are transmitted encrypted over a reliable channel; an unencrypted connection is thus ruled out with SSH. Contact: Ingo Lücker, ingo.luecker@itleague.de
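To make the control-channel and data-channel split concrete, here is a short, hedged sketch using Python's standard ftplib module. The host name is a placeholder, the server is assumed to permit anonymous login, and the PASV command described above is issued behind the scenes by the transfer calls.

from ftplib import FTP

# Sketch of an anonymous FTP session; ftp.example.org is a placeholder host.
with FTP("ftp.example.org") as ftp:   # control connection to TCP port 21
    ftp.login()                       # anonymous login (user "anonymous")
    ftp.set_pasv(True)                # passive mode: client opens the data connection
    ftp.cwd("/pub")
    ftp.retrlines("LIST")             # directory listing sent over the data channel
    with open("readme.txt", "wb") as f:
        ftp.retrbinary("RETR readme.txt", f.write)   # binary-mode download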

The History of Computing
The History Of DNS

The History of Computing

Play Episode Listen Later Jul 31, 2019 8:22


Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we're able to be prepared for the innovations of the future! Today's episode is on the history of the Domain Name System, or DNS for short.  You know when you go to www.google.com. Imagine if you had to go to 172.217.4.196, the IP address, instead. DNS is the service that resolves that name to that IP address. Let's start this story back in 1966. The Beatles released Yellow Submarine. The Rolling Stones were all over the radio with Paint It Black. Indira Gandhi was elected the Prime Minister of India. US planes were bombing Hanoi smack dab in the middle of the Vietnam War. The US and USSR agreed not to fill space with nukes. The Beach Boys had just released Good Vibrations. I certainly feel the good vibrations when I think that quietly, when no one was watching, the US created ARPANET, or the Advanced Research Projects Agency Network.  ARPANET would evolve into the Internet as we know it today. As with many great innovations in technology, it took awhile to catch on. Late into the 1980s there were just over 300 computers on the Internet, most doing research. Sure, there were 254 to the 4th addresses that were just waiting to be used, but the idea of keeping the address of all 300 computers you wanted to talk to seemed cumbersome, and it was slow to take hold. To get an address in the 70s you needed to contact Jon Postel at USC to get put on what was called the Assigned Numbers List. You could call or mail them.  Stanford Research Institute (now called SRI) had a file they hosted called hosts.txt. This file mapped the name of one of these hosts on the network to an IP address, making a table of computer names and the IP addresses those matched with, or a table of hosts. Many computers still maintain this file. Elizabeth Feinler maintained this directory of systems. She would go on to lead and operate the Network Information Center, or NIC for short, for ARPANET and see the evolution to the Defense Data Network, or DDN for short, and later the Internet. She wrote what was then called the Resource Handbook.  By 1982, Ken Harrenstien and Vic White in Feinler's group at Stanford created a service called Whois, defined in RFC 812, which was an online directory. You can still use the whois command on Windows, Mac and Linux computers today. But by 1982 it was clear that the host table was getting slower and harder to maintain as more systems were coming online. This meant more people to do that maintenance. But Postel from USC then started reviewing proposals for maintaining this thing, a task he handed off to Paul Mockapetris. That's when Mockapetris did something that he wasn't asked to do and created DNS.  Mockapetris had been working on some ideas for filesystems at the time and jumped at the chance to apply those ideas to something different. So Jon Postel and Zaw-Sing Su helped him complete his thoughts, which were published in November 1983 as RFC 882 for the concepts and facilities and RFC 883 for the implementation and specification. You can google those and read them today. And most of it is still used.  Here, he introduced the concept that a NAME of a TYPE points to an address, or RDATA, and lives for a specified amount of time, or TTL, short for Time To Live. He also mapped IP addresses to names in the specifications, creating PTR records. All names had a TLD, or Top Level Domain name, of ARPANET.  
Designing a protocol isn't the same thing as implementing a protocol. In 1984, four students from the University of California, Berkeley - Douglas Terry, Mark Painter, David Riggle, and Songnian Zhou - wrote the first version of BIND, short for Berkeley Internet Name Domain, for BSD 4.3, using funds from a DARPA grant. In 1988 Paul Vixie from Digital Equipment Corporation then gave it a little update and maintained it until he founded the Internet Systems Consortium to take it over.  BIND is still the primary distribution of DNS, although there are other distributions now. For example, Microsoft added DNS in 1995 with the release of NT 3.51.  But back to the 80s real quick. In 1985 came the introduction of the .mil, .gov, .edu, .org, and .com TLDs. Remember Jon Postel from USC? He and Joyce K. Reynolds started an organization called IANA to assign numbers for use on the Internet. DNS servers are hierarchical, and so there's a set of root DNS servers, with a root zone controlled by the US Dept of Commerce. 10 of the 13 original servers were operated in the US and 3 outside, each assigned a letter of A through M. You can still ping a.root-servers.net. These host the root zone database from IANA and handle the hierarchy of the TLD they're authoritative for, with additional servers hosted for .gov, .com, etc. There are now over 1,000 TLDs! And remember how USC was handling the addressing (which became IANA) and Stanford was handling the names? Well, Feinler's group turned over naming to Network Solutions in 1991, and they handled it until 1998, when Postel died and ICANN was formed. ICANN, or the Internet Corporation for Assigned Names and Numbers, merged the responsibilities under one umbrella. Each region of the world is allowed to manage its own IP addresses, and so ARIN was formed in 1998 to manage the distribution of IP addresses in America.  The collaboration between Feinler and Postel fostered the innovations that would follow. They also didn't try to take everything on. Postel instigated TCP/IP and DNS. Postel co-wrote many of the RFCs that define the Internet and DNS to this day. And Feinler showed great leadership in administering how much of that was implemented. One can only aspire to find such a collaboration in life, and to do so with results like the Internet, which is worth tens of trillions of dollars but, more importantly, has reshaped the world, disrupted practically every industry, and touched the lives of nearly every human on earth.  Thank you for joining us for this episode of the History Of Computing Podcast. We hope you had an easy time finding thehistoryofcomputing.libsyn.com thanks to the hard work of all those who came before us. 
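As a small companion to the episode above, here is a sketch of what that name-to-RDATA resolution looks like from application code. It leans on the operating system's resolver rather than speaking the DNS wire protocol directly, and 8.8.8.8 is just a well-known address used for illustration.

import socket

# Forward lookup: a NAME resolves to address RDATA (A/AAAA records).
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo("www.google.com", 443):
    print(family, sockaddr[0])

# Reverse lookup: an address maps back to a NAME via a PTR record.
hostname, _aliases, _addresses = socket.gethostbyaddr("8.8.8.8")
print(hostname)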

That's Genius!
3. The 2 Questions You Must Ask Before Launching Any AI Product w/ Jonathan Rosenberg

That's Genius!

Play Episode Listen Later May 28, 2019 16:54 Transcription Available


“What’s the strategy?” That’s where every product has its beginning. In this episode of That’s Genius!, Jonathan Rosenberg brings us lessons from his first 100 days as CTO and Head of AI at Five9, where he’s helping roll out a new AI component to the software. He discusses the first questions every company must ask before launching any new AI product (or any product really). Jonathan has been a Technology Leader on the Internet Engineering Task Force for the past 24 years. He also worked at Cisco as VP & CTO of Cloud Collaboration, and has vast experience with several other technology firms, such as Microsoft, Skype, and Nokia.

TechNow with Tom Lyon
DriveScale TechNow Podcast with Bob Hinden

TechNow with Tom Lyon

Play Episode Listen Later Jul 25, 2018 23:32


In this TechNow podcast, Tom and Bob Hinden, the IPv6 co-inventor, Check Point Fellow and IEEE Internet Award winner, confer about pioneering work on the Internet Engineering Task Force and Internet security.

Spectrum
Internet Pioneer is both Optimistic & Cautious about New Cyber Developments

Spectrum

Play Episode Listen Later Apr 18, 2018 43:00


Dr. Steve Crocker was there for the birth of the Internet. In the late 1960s and early 1970s, he was part of the group that developed the protocols for the ARPANET. That was the foundation for today's Internet. It was originally designed to share data and scientific research; however, it quickly morphed into a system used by millions of people for both productive and nefarious reasons. He helped formulate the Network Working Group, the forerunner of the modern Internet Engineering Task Force. He also helped initiate the Requests for Comment (RFC) series, through which protocol designs are shared and changes made to systems for upgrades. Dr. Crocker still remains optimistic about the thousands of positive uses of the Internet. He doesn't think that we have even come close to maximizing the use of the Internet. However, he also cautions that security breaches remain a problem that needs to be addressed with some urgency. He, most recently, has been the CEO and co-founder of Shinkuro, Inc., a start-up company focused on dynamic sharing of information across the Internet and the deployment of improved security protocols. Dr. Crocker also is extremely optimistic about the uses of Artificial Intelligence to enhance our way of living, especially in medical fields. He, however, does not want us to turn our lives over to being totally dominated by someone else's algorithms. For his lifetime work, Dr. Crocker has been admitted to the Internet Hall of Fame.

Opposable Thumbs
Episode 14: In Reverse Dance Poem

Opposable Thumbs

Play Episode Listen Later Sep 12, 2017 67:09


Jon Satrom is our guest this week! Chicago (x2) in tha house! We talk about being punk rockers, dropping projectors and being deep in the hippie game. There are many things to buy when the invisible hand of disaster capitalism pays full retail price for your surplus gas masks. Internet Engineering Task Force... take notice! Taylor has created the haikulink transport protocol (Is port 575 taken?). Jon squeezes Taylor and Rob through a neural network and a whole bunch of like... awesome weird stuff comes out, you know? Rob talks trash on solenoids but they came through in the end so he probably shouldn't get too puffy about it. Check out our project photos, videos and more at http://projects.opposablepodcast.com Thanks to Nik, Luke and Kelly (http://kellymariemartin.com)! They're our top Patreon supporters! And props to Mike and Jen as well! Ya'll are great too! Join 'em at: https://www.patreon.com/opposablethumbs Special Guest: Jon Satrom.

Show IP Protocols
Increase iPhones’ battery life by removing unnecessary IPv6 multicast Router Advertisements

Show IP Protocols

Play Episode Listen Later Feb 23, 2016


I came across a new RFC 7772: "Reducing Energy Consumption of Router Advertisements". I want to share my learnings after reading this RFC. I intentionally mentioned "iPhone" in the subject to get your attention. Actually, the whole discussion applies to any mobile device with limited battery capacity, such as smart phones and tablet computers. It is quite obvious that mobile devices consume more power while awake than asleep. The question is: how serious is this problem?
The problem
Although the authors of this RFC did not mention how they got these numbers, I believe the numbers must be typical and derived from actual lab measurements. While asleep, a mobile device would consume 5 mA of current. While awake, on the other hand, it would consume 40 times more: 200 mA. A single Router Advertisement (RA) will wake up the target mobile device. A single multicast RA to all hosts will wake up ALL the mobile devices attached to the same subnet. Remember, the power capacity of mobile devices is very limited. The more power consumption we can save, the more battery time we will have for every mobile device attached to the same IPv6 subnet.
Reasonable RA frequency: 7 RAs per hour
Here I want to emphasize the word "reasonable". To keep IPv6 working, we do need RAs to push and refresh network information on mobile devices. But if nothing changed on the network, why keep sending so many unnecessary RAs just to wake mobile devices up and waste battery capacity? Here is a reasonable goal: 2% of idle power consumption. Assume we want to achieve that goal: we do not want RAs to consume more than 2% of the idle (sleeping) power consumption of every mobile device. After some calculations, we know the reasonable frequency is no more than 7 RAs per hour. Here is the calculation. A typical wakeup power consumption surge mentioned in this RFC would last for 250 ms. That is, the battery capacity consumed by a single RA wakeup is:
{battery capacity consumed per RA wakeup, in mAh} = 200 mA x 250 ms / 1 hour = 200 x 250 / 3,600,000 = 0.0138888… ~= 0.014 mAh
To calculate the idle (asleep) power consumption, I assume the device stays asleep for the whole hour. This is the total budget to meet:
{2% of idle (asleep) battery consumption for an hour, in mAh} = 2% x 5 mA x 1 hour = 0.02 x 5 x 1 mAh = 0.1 mAh
{reasonable number of wakeups without exceeding the budget} = 0.1 / 0.014 ~= 7
I have to be honest: I did not expect this number to be this small. The default IPv6 RA interval is 200 seconds on Cisco IOS routers. That is equivalent to 18 RAs per hour. I believe configuring the interval to roughly 600 seconds would be a better idea. http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6/command/ipv6-cr-book/ipv6-i3.html#wp3911380069 The default interval between IPv6 RA transmissions is 200 seconds. Note: the lifetime of each RA should be 5 to 10 times this interval. This is also mentioned in this RFC as roughly 45~90 minutes.
Recommendations at the network side
I will just focus on the network side. To implement the recommendations of Sections 5.1.1 and 5.1.2 of this RFC, I found one interesting command on Cisco's web site:
interface E0/0
 ipv6 nd ra solicited unicast
http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6/command/ipv6-cr-book/ipv6-i3.html#wp5031733970 Large networks with a high concentration of mobile devices might experience issues like battery depletion when solicited Router Advertisement messages are multicast. 
Use the ipv6 nd ra solicited unicast command to unicast solicited Router Advertisement messages and extend the battery life of mobile devices in the network. Most IPv6 end devices can send out a Router Solicitation even when their own link-local addresses are not determined yet. In that case, the RAs replying to such Router Solicitations would be destined to the all-hosts multicast address. After enabling this feature, the router will ignore all such Router Solicitations. End devices can still get their global IPv6 prefix because, after determining their own link-local addresses, they can send out an RS again, and at that point the router will respond to them, because these RSs are sourced from unicast addresses. For a stable network, we should keep the RA interval as large as possible to save more power on mobile devices. Here is a sample configuration on Cisco IOS routers:
interface ethernet 0/0
 ipv6 nd ra interval 600
 ipv6 nd ra lifetime 2700
Here I use 45 minutes (2700 seconds) as a reasonable RA lifetime. We should consider increasing the frequency ONLY when we are changing network topology or renumbering addresses. Most of the time, we should keep below 7 RAs per hour as a reasonable configuration.
One more thing…
Increasing the RA frequency indeed helps to push network changes much faster to all end devices. For devices without battery capacity concerns, such as desktop computers, this advantage would outweigh the power consumption. My personal suggestion is to put battery-limited mobile devices in separate IPv6 subnets, and to enable the recommendations discussed in this post only on such subnets.
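The arithmetic above is easy to double-check; here is a tiny Python sketch that reproduces the numbers, using only the figures quoted from RFC 7772 in this post.

# Reproducing the RA power-budget arithmetic from the post above.
awake_ma = 200.0                   # current draw while awake, in mA
asleep_ma = 5.0                    # current draw while asleep, in mA
wakeup_hours = 250 / 3_600_000     # one 250 ms wakeup, expressed in hours

per_ra_mah = awake_ma * wakeup_hours    # ~0.014 mAh consumed per RA wakeup
budget_mah = 0.02 * asleep_ma * 1.0     # 2% of one idle hour = 0.1 mAh

print(round(per_ra_mah, 4))             # 0.0139
print(budget_mah / per_ra_mah)          # ~7.2, hence "7 RAs per hour"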

Tech Talk Radio Podcast
January 16, 2016 Tech Talk Radio Show

Tech Talk Radio Podcast

Play Episode Listen Later Jan 16, 2016 58:03


Converting PDF tables (importing into Excel), Presidential candidates and technology (a sorry sight), creating a CD with an ISO file (ImgBurn, Active@ISO Burner), Profiles in IT (Ian Murdock, founder of Debian Linux distribution), Wikipedia turns 15 (Wikimedia politics is troubling), Internet Engineering Task Force (30 years old, rough consensus and running code), developer declares Bitcoin a failure (BitcoinXT fork undermines the future), and Bitcoin blockchain is technology of choice (Open Ledger Project, digital transaction tracking embraced by Wall Street). This show originally aired on Saturday, January 16, 2016, at 9:00 AM EST on WFED (1500 AM).

BSD Now
105: Virginia BSD Assembly

BSD Now

Play Episode Listen Later Sep 2, 2015 66:09


It's already our two-year anniversary! This time on the show, we'll be chatting with Scott Courtney, vice president of infrastructure engineering at Verisign, about this year's vBSDCon. What's it have to offer in an already-crowded BSD conference space? We'll find out. This episode was brought to you by Headlines OpenBSD hypervisor coming soon (https://www.marc.info/?l=openbsd-tech&m=144104398132541&w=2) Our buddy Mike Larkin never rests, and he posted some very tight-lipped console output (http://pastebin.com/raw.php?i=F2Qbgdde) on Twitter recently From what little he revealed at the time (https://twitter.com/mlarkin2012/status/638265767864070144), it appeared to be a new hypervisor (https://en.wikipedia.org/wiki/Hypervisor) (that is, X86 hardware virtualization) running on OpenBSD -current, tentatively titled "vmm" Later on, he provided a much longer explanation on the mailing list, detailing a bit about what the overall plan for the code is Originally started around the time of the Australia hackathon, the work has since picked up more steam, and has gotten a funding boost from the OpenBSD foundation One thing to note: this isn't just a port of something like Xen or Bhyve; it's all-new code, and Mike explains why he chose to go that route He also answered some basic questions about the requirements, when it'll be available, what OSes it can run, what's left to do, how to get involved and so on *** Why FreeBSD should not adopt launchd (http://blog.darknedgy.net/technology/2015/08/26/0/) Last week (http://www.bsdnow.tv/episodes/2015_08_26-beverly_hills_25519) we mentioned a talk Jordan Hubbard gave about integrating various parts of Mac OS X into FreeBSD One of the changes, perhaps the most controversial item on the list, was the adoption of launchd to replace the init system (replacing init systems seems to cause backlash, we've learned) In this article, the author talks about why he thinks this is a bad idea He doesn't oppose the integration into FreeBSD-derived projects, like FreeNAS and PC-BSD, only vanilla FreeBSD itself - this is also explained in more detail The post includes both high-level descriptions and low-level technical details, and provides an interesting outlook on the situation and possibilities Reddit had quite a bit (https://www.reddit.com/r/BSD/comments/3ilhpk) to say (https://www.reddit.com/r/freebsd/comments/3ilj4i) about this one, some in agreement and some not *** DragonFly graphics improvements (http://lists.dragonflybsd.org/pipermail/commits/2015-August/458108.html) The DragonFlyBSD guys are at it again, merging newer support and fixes into their i915 (Intel) graphics stack This latest update brings them in sync with Linux 3.17, and includes Haswell fixes, DisplayPort fixes, improvements for Broadwell and even Cherryview GPUs You should also see some power management improvements, longer battery life and various other bug fixes If you're running DragonFly, especially on a laptop, you'll want to get this stuff on your machine quick - big improvements all around *** OpenBSD tames the userland (https://www.marc.info/?l=openbsd-tech&m=144070638327053&w=2) Last week we mentioned OpenBSD's tame framework getting support for file whitelists, and said that the userland integration was next - well, now here we are Theo posted a mega diff of nearly 100 smaller diffs, adding tame support to many areas of the userland tools It's still a work-in-progress version; there's still more to be added (including the file path whitelist stuff) Some classic utilities are even being 
reworked to make taming them easier - the "w" command (https://www.marc.info/?l=openbsd-cvs&m=144103945031253&w=2), for example The diff provides some good insight on exactly how to restrict different types of utilities, as well as how easy it is to actually do so (and en masse) More discussion can be found on HN (https://news.ycombinator.com/item?id=10135901), as one might expect If you're a software developer, and especially if your software is in ports already, consider adding some more fine-grained tame support in your next release *** Interview - Scott Courtney - vbsdcon@verisign.com (mailto:vbsdcon@verisign.com) / @verisign (https://twitter.com/verisign) vBSDCon (http://vbsdcon.com/) 2015 News Roundup OPNsense, beyond the fork (https://opnsense.org/opnsense-beyond-the-fork) We first heard about (http://www.bsdnow.tv/episodes/2015_01_14-common_sense_approach) OPNsense back in January, and they've since released nearly 40 versions, spanning over 5,000 commits This is their first big status update, covering some of the things that've happened since the project was born There's been a lot of community growth and participation, mass bug fixing, new features added, experimental builds with ASLR and much more - the report touches on a little of everything *** LibreSSL nukes SSLv3 (http://undeadly.org/cgi?action=article&sid=20150827112006) With their latest release, LibreSSL began to turn off SSLv3 (http://disablessl3.com) support, starting with the "openssl" command At the time, SSLv3 wasn't disabled entirely because of some things in the OpenBSD ports tree requiring it (apache being one odd example) They've now flipped the switch, and the process of complete removal has started From the Undeadly summary, "This is an important step for the security of the LibreSSL library and, by extension, the ports tree. It does, however, require lots of testing of the resulting packages, as some of the fallout may be at runtime (so not detected during the build). That is part of why this is committed at this point during the release cycle: it gives the community more time to test packages and report issues so that these can be fixed. When these fixes are then pushed upstream, the entire software ecosystem will benefit. In short: you know what to do!" With this change and a few more to follow shortly, LibreSSL won't actually support SSL anymore - time to rename it "LibreTLS" *** FreeBSD MPTCP updated (http://caia.swin.edu.au/urp/newtcp/mptcp/tools/v05/mptcp-readme-v0.5.txt) For anyone unaware, Multipath TCP (https://en.wikipedia.org/wiki/Multipath_TCP) is "an ongoing effort of the Internet Engineering Task Force's (IETF) Multipath TCP working group, that aims at allowing a Transmission Control Protocol (TCP) connection to use multiple paths to maximize resource usage and increase redundancy." 
There's been work out of an Australian university to add support for it to the FreeBSD kernel, and the patchset was recently updated. Included in this latest version is an overview of the protocol, how to get it compiled in, current features and limitations, and some info about the routing requirements. Some big performance gains can be had with MPTCP, but only if both the client and server systems support it - getting it into the FreeBSD kernel would be a good start *** UEFI and GPT in OpenBSD (https://www.marc.info/?l=openbsd-cvs&m=144092912907778&w=2) There hasn't been much fanfare about it yet, but some initial UEFI and GPT-related commits have been creeping into OpenBSD recently. Some support (https://github.com/yasuoka/openbsd-uefi) for UEFI booting has landed in the kernel, and more bits are being slowly enabled after review. This comes along with a number (https://www.marc.info/?l=openbsd-cvs&m=143732984925140&w=2) of (https://www.marc.info/?l=openbsd-cvs&m=144088136200753&w=2) other (https://www.marc.info/?l=openbsd-cvs&m=144046793225230&w=2) commits (https://www.marc.info/?l=openbsd-cvs&m=144045760723039&w=2) related to GPT, much of which is being refactored and slowly reintroduced. Currently, you have to do some disklabel wizardry to bypass the MBR limit and access more than 2TB of space on a single drive, but it should "just work" with GPT (once everything's in). The UEFI bootloader support has been committed (https://www.marc.info/?l=openbsd-cvs&m=144115942223734&w=2), so stay tuned for more updates (http://undeadly.org/cgi?action=article&sid=20150902074526&mode=flat) as further (https://twitter.com/kotatsu_mi/status/638909417761562624) progress (https://twitter.com/yojiro/status/638189353601097728) is made *** Feedback/Questions John writes in (http://slexy.org/view/s2sIWfb3Qh) Mason writes in (http://slexy.org/view/s2Ybrx00KI) Earl writes in (http://slexy.org/view/s20FpmR7ZW) ***
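For a sense of what application code looks like once a kernel exposes MPTCP, here is a hedged sketch against the Linux socket API (the FreeBSD patchset above is configured differently; see its README). Python 3.10+ exposes socket.IPPROTO_MPTCP, the numeric fallback of 262 is the Linux protocol number, and example.com is a placeholder host.

import socket

# Open an MPTCP socket where the kernel supports it; otherwise fall back to TCP.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
except OSError:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # plain TCP fallback
s.connect(("example.com", 80))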

CERIAS Security Seminar Podcast
Scott Hollenbeck, Provisioning Protocol Challenges in an Era of gTLD Expansion

CERIAS Security Seminar Podcast

Play Episode Listen Later Aug 24, 2011 58:49


The number of generic top-level domains in the Internet's Domain Name System has been increasing slowly since 2000. In July 2011 the Internet Corporation for Assigned Names and Numbers (ICANN) approved a long-awaited plan to significantly increase the number of generic top-level domain names. With a specific focus on users of the Extensible Provisioning Protocol (EPP), this presentation will describe the practical challenges faced by participants in the domain name provisioning ecosystem in the face of evolving domain name management requirements. About the speaker: Scott Hollenbeck is the Director of Applied Research for Verisign. In this capacity he manages the company's efforts to explore and investigate strategic technology areas in collaboration with university partners. Mr. Hollenbeck is the author of the Extensible Provisioning Protocol (EPP), a standard protocol for the registration and management of Internet infrastructure data including domain names. He has served as a member of the Internet Engineering Steering Group of the Internet Engineering Task Force, where he was the responsible area director for several working groups developing application protocol standards. He received a Bachelor's degree in Computer Science from the Pennsylvania State University and a Master's degree in Computer Science complemented by a graduate certificate in Software Engineering from George Mason University.

ISTS: Institute for Security, Technology, and Society
Cloud Computing: Finding the Silver Lining

ISTS: Institute for Security, Technology, and Society

Play Episode Listen Later Mar 5, 2009 89:40


The concept of Cloud Computing has raised many hopes and just as many concerns. Steve Hanna, Distinguished Engineer at Juniper Networks, spoke on March 5, 2009 about the risks and rewards of sharing computing resources over the Internet. He is co-chair of the Trusted Network Connect Work Group in the Trusted Computing Group, co-chair of the Network Endpoint Assessment Working Group in the Internet Engineering Task Force and is active in other networking and security standards groups such as the Open Group and OASIS.
