Scott Parker joins Tony to discuss the progress getting applications running on Aurora supercomputer nodes. Since the project's start in 2022, significant progress has been made using a variety of programming models. Kokkos, OpenMP, and oneAPI have enabled scientific and AI applications to be migrated to the Intel CPUs and GPUs that Aurora is built on. Guest: Scott Parker is the Lead for Performance Tools and Programming Models at the ALCF. He received his B.S. in Mechanical Engineering from Lehigh University, and a Ph.D. in Mechanical Engineering from the University of Illinois at Urbana-Champaign. Prior to joining Argonne, he worked at the National Center for Supercomputing Applications, where he focused on high-performance computing and scientific applications. At Argonne since 2008, he works on performance tools, performance optimization, and spectral element computational fluid dynamics solvers. Resources: Aurora Supercomputer - https://www.alcf.anl.gov/aurora Intel Data Center Max GPU - https://www.intel.com/content/www/us/en/products/details/discrete-gpus/data-center-gpu/max-series.html Intel Xeon Processors - https://www.intel.com/content/www/us/en/products/details/processors/xeon.html Intel oneAPI Base Toolkit - https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html
Dr. Kathryn Huff, Ph.D. ( https://www.energy.gov/ne/person/dr-kathryn-huff ) is Assistant Secretary, Office of Nuclear Energy, U.S. Department of Energy, where she leads their strategic mission to advance nuclear energy science and technology to meet U.S. energy, environmental, and economic needs, both realizing the potential of advanced technology and leveraging the unique role of the government in spurring innovation. Prior to her current role, Dr. Huff served as a Senior Advisor in the Office of the Secretary and also led the office as the Principal Deputy Assistant Secretary for Nuclear Energy. Before joining the Department of Energy, Dr. Huff was an Assistant Professor in the Department of Nuclear, Plasma, and Radiological Engineering at the University of Illinois at Urbana-Champaign, where she led the Advanced Reactors and Fuel Cycles Research Group. She was also a Blue Waters Assistant Professor with the National Center for Supercomputing Applications. Dr. Huff was previously a Postdoctoral Fellow in both the Nuclear Science and Security Consortium and the Berkeley Institute for Data Science at the University of California, Berkeley. She received her PhD in Nuclear Engineering from the University of Wisconsin-Madison and her undergraduate degree in Physics from the University of Chicago. Her research focused on modeling and simulation of advanced nuclear reactors and fuel cycles. Dr. Huff is an active member of the American Nuclear Society, a past Chair of both the Nuclear Nonproliferation and Policy Division and the Fuel Cycle and Waste Management Division, and a recipient of both the Young Member Excellence and Mary Jane Oestmann Professional Women's Achievement awards. Through leadership within Software Carpentry, SciPy, the Hacker Within, and the Journal of Open Source Software, she also advocates for best practices in open, reproducible scientific computing. Dr. Huff's book "Effective Computation in Physics: Field Guide to Research with Python" can be found at all major booksellers.
We talk about nutrition, the human microbiome, and big data with Hannah Holscher, PhD, RD, director of the Nutrition and the Human Microbiome Laboratory, associate professor of Nutrition, Department of Food Science and Human Nutrition, Division of Nutritional Sciences, at the University of Illinois. Dr. Holscher's laboratory uses clinical interventions and computational approaches to study the interactions of nutrition, the gastrointestinal microbiome, and health. In addition to publishing in top nutrition journals, she also actively disseminates research findings in formats ranging from scientific presentations and webinars to podcasts, Twitter chats, blogs, and popular press articles. She also has affiliate appointments with the Institute of Genomic Biology, the National Center for Supercomputing Applications, and the Family Resiliency Center. Her research team aims to enhance human health through dietary modulation of the gastrointestinal microbiome. Her research on nutrition and the microbiome has been recognized by both local and national organizations with several awards, including the 2020 National Academy of Medicine Emerging Leader, and the 2021 American Society for Nutrition's Mead Johnson Young Investigator Award. She currently serves on The Journal of Nutrition editorial board and as an associate editor for Nutrition Research and Gut Microbiome. ◘ Related Links Dr. Holscher's Faculty Page bit.ly/41DH9Pe Dr. Holscher's Nutrition and Human Microbiome Laboratory bit.ly/3J3s6qJ Fecal and soil microbiota composition of gardening and non-gardening families bit.ly/3ZtMQgL Comparison of Microbiota Analytic Techniques bit.ly/3KLsBXM Diet Quality and the Fecal Microbiota in Healthy Adults in the American Gut Project bit.ly/3SBAvVK ◘ Transcript bit.ly/3Zxq8Ew ◘ This podcast features the song “Follow Your Dreams” (freemusicarchive.org/music/Scott_Ho…ur_Dreams_1918) by Scott Holmes, available under a Creative Commons Attribution-Noncommercial (https://creativecommons.org/licenses/by-nc/4.0/) license. ◘ Disclaimer: The content and information shared in GW Integrative Medicine is for educational purposes only and should not be taken as medical advice. The views and opinions expressed in GW Integrative Medicine represent the opinions of the host(s) and their guest(s). For medical advice, diagnosis, and/or treatment, please consult a medical professional.
Historian Ben Baumann and Dr. Wendy K. Tam Cho discuss how technology has increased the ability to gerrymander, but also empowers us to combat it. (Wendy K. Tam Cho is Professor in the Departments of Political Science, Statistics, Mathematics, Computer Science, Asian American Studies, and the College of Law, Senior Research Scientist at the National Center for Supercomputing Applications, Faculty in the Illinois Informatics Institute, and Affiliate of the Cline Center for Advanced Social Research, the CyberGIS Center for Advanced Digital and Spatial Studies, the Computational Science and Engineering Program, and the Program on Law, Behavior, and Social Science at the University of Illinois at Urbana-Champaign. She is also a Fellow of the John Simon Guggenheim Memorial Foundation, the Society for Political Methodology, and the Center for Advanced Study in the Behavioral Sciences at Stanford University, and a Visiting Fellow at the Hoover Institution at Stanford University.) For more on Dr. Wendy K. Tam Cho, check out the following links: Website- http://cho.pol.illinois.edu/wendy/ (The memories, comments, and viewpoints shared by guests in the interviews do not represent the viewpoints of, or speak for, Roots of Reality)
PLATO (Programmed Logic for Automatic Teaching Operations) was an educational computer system that began at the University of Illinois at Urbana-Champaign in 1960 and ran into the 2010s in various flavors. Wait, that's an oversimplification. PLATO seemed to develop on an island in the corn fields of Champaign, Illinois, and sometimes precedes, sometimes symbolizes, and sometimes fast-follows what was happening in computing around the world in those decades. To put this in perspective - PLATO began on ILLIAC in 1960 - a large classic vacuum tube mainframe. Short for the Illinois Automatic Computer, ILLIAC was built in 1952, around 7 years after ENIAC was first put into production. As with many early mainframe projects, PLATO I began in response to a military need. We were looking for new ways to educate the masses of veterans using the GI Bill. We had to stretch the reach of college campuses beyond their existing infrastructures. Computerized testing started with mechanical computing, got automated with IBM's test scoring machine in 1935, and a number of researchers were looking to improve the consistency of education and bring in new technology to help with quality teaching at scale. The post-World War II boom did this for industry as well. Problem is, following the launch of Sputnik by the USSR in 1957, many felt the US had begun lagging behind in education. So grant money to explore solutions flowed, and the project was able to capitalize on grants from the US Army, Navy, and Air Force. By 1959, physicists at Illinois began thinking of using that big ILLIAC machine they had access to. Daniel Alpert recruited Don Bitzer to run a project, after false starts with educators around the campus. Bitzer shipped the first instance of PLATO I in 1960. They used a television to show images, stored images in Raytheon tubes, and built a makeshift keyboard designed for PLATO so users could provide input in interactive menus and navigate. They experimented with slide projectors when they realized the tubes weren't all that reliable, and figured out how to do rudimentary time sharing, expanding to a second concurrent terminal with the release of PLATO II in 1961. Bitzer was a classic Midwestern tinkerer. He solicited help from local clubs, faculty, and high school students, and wherever he could cut a corner to build more cool stuff, he was happy to move money and resources to other important parts of the system. This was the age of hackers and they hacked away. He inspired but also allowed people to follow their own passions. Innovation must be decentralized to succeed. They created an organization to support PLATO in 1966 as part of the Graduate College: the Computer-based Education Research Laboratory, or CERL. Based on early successes, CERL got more and more funding. Now that they were beyond a 1:1 ratio of users to computers and officially into time sharing, it was time for PLATO III. There were a number of enhancements in PLATO III. For starters, the system was moved to a CDC 1604 that Control Data CEO William Norris donated to the cause, and expanded to allow for 20 terminals. But it was complicated to create new content, and the team realized that content would be what drove adoption. This was true with applications during the personal computer revolution and then apps in the era of the App Store as well. One of many lessons learned first on PLATO. Content was in the form of applications that they referred to as lessons. It was a teaching environment, after all.
They emulated the ILLIAC for existing content but needed more. People were compiling applications in a complicated language. Professors had day jobs and needed a simpler way to build content. So Paul Tenczar on the team came up with a language specifically tailored to creating lessons. Similar in some ways to BASIC, it was called TUTOR. Tenczar released the manual for TUTOR in 1969, and with an easier way of getting content out there was an explosion in new lessons, and new features and ideas would flourish. We would see simulations, games, and courseware that would lead to a revolution in ideas. In a revolutionary time. The number of hours logged by students and course authors steadily increased. The team became ever more ambitious. And they met that ambition with lots of impressive achievements. Now that they were comfortable with the CDC 1604, they knew that the new content needed more firepower. CERL negotiated a contract with Control Data Corporation (CDC) in 1970 to provide equipment and financial support for PLATO. Here they ended up with a CDC Cyber 6400 mainframe, which became the foundation of the next iteration of PLATO, PLATO IV. PLATO IV was a huge leap forward on many levels. They had TUTOR, but with more resources they could produce even more interactive content and capabilities. The terminals were expensive and not so scalable. So in preparation for potentially thousands of terminals in PLATO IV they decided to develop their own. This might seem a bit space age for the early 1970s, but what they developed was a touch-capable flat panel plasma display. It was 512x512 and rendered 60 lines per second at 1260 baud. The plasma had memory in it, which was made possible by the fact that they weren't converting digital signals to analog, as is done on CRTs. Instead, it was a fully digital experience. The flat panel used infrared to see where a user was touching, allowing users some of their first exposure to touch screens. The touch grid was 16 by 16 rather than 512 by 512, but that was more than enough to take them through the next decade. The system could render basic bitmaps, but some lessons needed richer content - what we might today call multimedia. The Raytheon tubes used in previous systems proved to be more of a CRT technology and had plenty of drawbacks. So for newer machines they also included a microfiche machine that projected images onto the back of the screen. The terminals were a leap forward. There were other programs going on at about the same time during the innovative bursts of PLATO, like the Dartmouth Time Sharing System, or DTSS, project that gave us BASIC instead of TUTOR. Some of these systems also had rudimentary forms of forums, such as EIES, and the BBS and Usenet cultures that would emerge a few years later. But PLATO represented a unique look into the splintered networks of the time sharing age. Combined with the innovative lessons and newfound collaborative capabilities, the PLATO team was about to bring about something special. Or lots of somethings that culminated in more. One of those was Notes, written by David R. Woolley in 1973. Tenczar asked the 17-year-old Woolley to write a tool that would allow users to report bugs with the system. The existing bug-report file was something anyone could overwrite or delete. So Woolley built a tool that stored notes permanently and automatically tagged each one with its author and the date. He expanded it to allow for 63 responses per note, and when opened, it showed the most recent notes.
People came up with other features, and so a menu-driven interface emerged, providing access to System Announcements, Help Notes, and General Notes. But the notes were just the start. In 1973, seeing the need for even more ways to communicate with other people using the system, Doug Brown wrote a prototype for Talkomatic. Talkomatic was a chat program that showed when people were typing. Woolley helped Brown, and they added channels with up to five people per channel. Others could watch the chat as well. It would be expanded and officially supported as a tool called Term-Talk. That was entered by using the TERM key on a console, which allowed for a conversation between two people. You could TERM, or chat, a person, and then they could respond or mark themselves as busy. Because the people writing this stuff were also the ones supporting users, they added another feature, the ability to monitor another user, or view their screen. And so programmers, or consultants, could respond to help requests and help get even more lessons going. And some at PLATO were using ARPANET, so it was only a matter of time before word of Ray Tomlinson's work on electronic mail leaked over, leading to the 1974 addition of personal notes, a way to send private mail engineered by Kim Mast. As PLATO grew, the amount of content exploded. They added categories to Notes in 1975, which led to Group Notes in 1976, and comments and linked notes and the ability to control access. But one of the most important innovations PLATO will be remembered for is games. Anyone that has played an educational game will note that school lessons and games aren't always all that different. Since Rick Blomme had ported Spacewar! to PLATO in 1969 and added a two-player option, multi-player games had been on the rise. They made leaderboards for games like Dogfight so players could get early forms of game rankings. Games like Airfight and Airace and Galactic Attack would follow. MUDs were another form of games that came to PLATO. Colossal Cave Adventure had come in 1975 for the PDP-10, so again these things were happening in parallel - where there were influences and where innovations arose independently is hard to say. But the dungeon crawlers exploded on PLATO. We got Moria, Oubliette by Jim Schwaiger, pedit5, crypt, dungeon, avatar, and drygulch. We saw the rise of intense storytelling and different game mechanics, mostly inspired by Dungeons and Dragons. As PLATO terminals found their way into high schools and other universities, the number of games and the amount of time spent on them exploded, with estimates of 20% of time on PLATO being spent playing games. PLATO IV would grow to support thousands of terminals around the world in the 1970s. It was a utility. Schools (and even some parents) leased lines back to Champaign-Urbana, and many in computing thought that these timesharing systems would become the basis for a utility model in computing, similar to the cloud model we have today. But we had to go into the era of the microcomputer to boomerang back to timesharing first. That microcomputer revolution would catch off guard many who didn't see how Moore's Law, the growing number of factories, and standardization would lead to microcomputers. Control Data had bet big on the mainframe market - and PLATO. CDC would sell mainframes to other schools to host their own PLATO instance. This is where it went from a timesharing system to a network of computers that did timesharing. Like a star topology.
Control Data looked to PLATO as one form of what the future of the company would be. Norris saw this mainframe with thousands of connections as a way to lease time on the computers. CDC took PLATO to market as CDC PLATO. Here, schools and companies alike could benefit from distance education. And for a while it seemed to be working. Financial companies and airlines bought systems and the commercialization was on the rise, with over a hundred PLATO systems in use as we made our way to the middle of the 1980s. Even government agencies like the Department of Defense used them for training. But this just happened to coincide with the advent of the microcomputer. CDC made their own terminals that were often built with the same components that would be found in microcomputers, but failed to capitalize on that market. Corporations didn't embrace the collaboration features and often had them turned off. Social computing would move to bulletin boards. And CDC would release versions of PLATO as Micro-PLATO for the TRS-80, the Texas Instruments TI-99, and even Atari computers. But the bureaucracy at CDC had slowed things down to the point that they couldn't capitalize on the rapidly evolving PC industry. And prices were too high in a time when home computers were just moving from a hobbyist market to the mainstream. The University of Illinois spun PLATO out into its own organization called University Communications, Inc (or UCI for short) and closed CERL in 1994. That was the same year Marc Andreessen co-founded Mosaic Communications Corporation, makers of Netscape, the successor to NCSA Mosaic. Because NCSA, or The National Center for Supercomputing Applications, had also benefited from National Science Foundation grants when it was started in 1986. And all those students who flocked to the University of Illinois because of programs like PLATO had brought with them more expertise. UCI continued PLATO as NovaNet, which was acquired by National Computer Systems and then Pearson, finally getting shut down in 2015 - 55 years after those original days on ILLIAC. It evolved from a vacuum tube-driven mainframe in a research institute with one terminal, then two, to a transistorized mainframe with hundreds and then over a thousand terminals connected from research and educational institutions around the world. It represented new ideas in programming and programming languages and inspired generations of innovations. That aftermath includes: The ideas. PLATO developers met with people from Xerox PARC starting in the 70s and inspired some of the work done at Xerox. Yes, they seemed isolated at times, but they were far from it. They also cross-pollinated ideas to Control Data. One way they did this was by trading some commercialization rights for more mainframe hardware. One of the easiest connections to draw from PLATO to the modern era is how the notes files evolved. Ray Ozzie graduated from Illinois in 1979 and went to work for Data General and then Software Arts, makers of VisiCalc. The corporate world had nothing like the culture that had evolved out of the notes files in PLATO Notes. Today we take collaboration tools for granted, but when Ozzie was recruited by Lotus, the makers of 1-2-3, he joined only when they agreed to fund a project to capture the collaborative spirit that still seemed stuck in the splintered PLATO network. The Internet and networked computing in companies were growing, and he knew he could improve on the notes files in a way that companies could make use of.
He started Iris Associates in 1984 and shipped a tool in 1989. That would evolve into what would be called Lotus Notes when the company was acquired by Lotus in 1994, and then, when Lotus was acquired by IBM, it would evolve into Domino - surviving to today as HCL Domino. Ozzie would go on to become a CTO and then the Chief Software Architect at Microsoft, helping spearhead the Microsoft Azure project. Collaboration. Those notes files were also some of the earliest newsgroups. But they went further. Talkomatic introduced real-time text chats. The very concept of a digital community, with its norms and boundaries, was being tested, and challenges we still face, like discrimination, were already manifesting themselves then. But it was inspiring, and between stints at Microsoft, Ray Ozzie founded Talko in 2012 based on what he learned in the 70s working with Talkomatic. That company was acquired by Microsoft and some of the features ported into Skype. Another way Microsoft benefited from the work done on PLATO was with Microsoft Flight Simulator. That was originally written by Bruce Artwick after leaving the university, based on the flight games he'd played on PLATO. Mordor: The Depths of Dejenol was cloned from Avatar. Silas Warner was connected to PLATO from terminals at Indiana University. During and after school, he wrote software for companies, but he also wrote Robot War for PLATO and then co-founded Muse Software, where he wrote Escape!, a precursor for lots of other maze runners, and then Castle Wolfenstein. The name would get bought for $5,000 after his company went bankrupt, and it became one of the early blockbuster first-person shooters when released as Wolfenstein 3D. Then John Carmack and John Romero created Doom. But Warner would go on to work with some of the best in gaming, including Sid Meier. Paul Alfille built the game FreeCell for PLATO and Control Data released it for all PLATO systems. Jim Horne played it from the PLATO terminals at the University of Alberta and eventually released it for DOS in 1988. Horne went to work for Microsoft, which included it in the Microsoft Entertainment Pack, making it one of the most popular software titles played on early versions of Windows. He got 10 shares of Microsoft stock in return, and it's still part of Windows 10 as part of the Microsoft Solitaire Collection. Robert Woodhead and Andrew Greenberg got onto PLATO from their terminals at Cornell University, where they were able to play games like Oubliette and Empire. They would write a game called Wizardry that took some of the best that the multi-player dungeon crawlers had to offer and brought it to a single-player computer, and later console, game. I spent countless hours playing Wizardry on the Nintendo NES and have played many of the spin-offs, which came as late as 2014. Not only did the game inspire generations of developers to write dungeon games, but some of the mechanics inspired features in the Ultima series, Dragon Quest, Might and Magic, The Bard's Tale, Dragon Warrior, and countless manga. Greenberg would go on to help with Q*bert and other games before going on to work with the IEEE. Woodhead would go on to work on other games like Star Maze. I met Woodhead shortly after he wrote Virex, an early anti-virus program for the Mac that would later become McAfee VirusScan for the Mac. Paul Tenczar was in charge of the software developers for PLATO. After that he founded Computer Teaching Corporation and introduced EnCORE, which was changed to Tencore. They grew to 56 employees by 1990 and ran until 2000.
He returned to the University of Illinois to put RFID tags on bees, contributing to computing for nearly 5 decades and counting. Michael Allen used PLATO at Ohio State University before looking to create a new language. He was hired at CDC, where he became a director in charge of Research and Development for education systems. There, he developed the ideas for a new computer language and authoring system, which became Authorware, one of the most popular authoring packages for the Mac. That company would merge with MacroMind to become Macromedia, where bits and pieces got put into Dreamweaver and Shockwave as they released those. After Adobe acquired Macromedia, he would write a number of books and create even more e-learning software authoring tools. So PLATO gave us multi-player games, new programming languages, instant messaging, online and multiple choice testing, collaboration forums, message boards, multi-person chat rooms, early rudimentary remote screen sharing, their own brand of plasma display and all the research behind printing circuits on glass for that, and early research into touch-sensitive displays. And as we've shown with just a few of the many people who contributed to computing afterwards, they helped inspire an early generation of programmers and innovators. If you like this episode I strongly suggest checking out The Friendly Orange Glow from Brian Dear. It's a lovely work with just the right mix of dry history and flourishes of prose. A short history like this can't hold a candle to a detailed anthology like Dear's book. Another well researched telling of the story can be found in a couple of chapters of A People's History Of Computing In The United States, from Joy Rankin. She does a great job drawing a parallel (and sometimes a direct line) from the Dartmouth Time Sharing System and others as early networks. And yes, terminals dialing into a mainframe and using resources over telephone and leased lines were certainly a form of bridging infrastructures and seemed like a network at the time. But no mainframe could have scaled to become a utility in the sense that all of humanity could access what was hosted on it. Instead, the ARPANET was put online in 1969 and grew through 1990, and working out the hard scientific and engineering principles behind networking protocols gave us TCP/IP. In her book, Rankin makes great points about how the BASIC and TUTOR applications helped shape more of our modern world by inspiring how we would use personal devices once connected to a network. The scientists behind ARPANET, then NSFnet and the Internet, did the work to connect us. You see, those dial-up connections were expensive over long distances. By 1974 there were 47 computers connected to the ARPANET, and by 1983 we had TCP/IPv4. And much like Bitzer allowing games, they didn't seem to care too much how people would use the technology but wanted to build the foundation - a playground for whatever people wanted to build on top of it. So the administrative and programming team at CERL deserve a lot of credit. The people who wrote the system, the generations who built features and code only to see it become obsolete, came and went - but the compounding impact of their contributions can be felt across the technology landscape today. Some of that is people rediscovering work done at CERL, some is directly inspired, and some has been lost, only to probably be rediscovered in the future.
One thing is for certain: their contributions to e-learning are unmatched by any other system out there. And their technical contributions, both those that were patented and those that were unpatentable or that they simply never thought to patent, are immense. Bitzer and the first high schoolers and then graduate students across the world helped to shape the digital world we live in today - perhaps more from a sociological aspect than a technical one. And the deep thought applied to the system lives on today in so many aspects of our modern world. Sometimes that's a straight line and other times it's dotted or curved. Looking around, most universities have licensing offices now, to capitalize on the research done. Check out a university near you and see what they have available for license. You might be surprised. As I'm sure many in Champaign were after all those years. Just because CDC couldn't capitalize on some great research doesn't mean we can't.
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to cover one of the most important and widely distributed server platforms ever: the Apache Web Server. Today, Apache servers account for around 44% of the 1.7 billion web sites on the Internet. But at one point it was zero. And this is crazy: it's down from over 70% in 2010. Tim Berners-Lee had put the first website up in 1991 and what we now know as the web was slowly growing. The Apache story begins in 1994 with the National Center for Supercomputing Applications at the University of Illinois, Urbana-Champaign. Yup, NCSA is also the organization that gave us NCSA Telnet and Mosaic, the web browser that would evolve into Netscape. NCSA's web server, HTTPd, had been written in C by Robert McCool. You can't make that name up. I'd always pictured him as a cheetah wearing sunglasses. Who knew that he'd build a tool that would host half of the web sites in the world. A tool that would go on to be built into plenty of computers so they can spin up sharing services. After McCool left NCSA, HTTPd development went a little, um, dormant. The code had forked, and the extensions and bug fixes needed to get merged into a common distribution. That common distribution became Apache, a free and open source web server, first released in 1995. Times have changed since 1995. Originally the name was supposedly a cute pun on "a patchy server", given that it was based on lots of existing patches to the, well, craptastic code from NCSA. So it was initially based on NCSA HTTPd, and that lineage is still alive and well, all the way down to the configuration files. For example, on a Mac these are stored at /private/etc/apache2/httpd.conf. The original Apache Group consisted of Brian Behlendorf, Roy T. Fielding, Rob Hartill, David Robinson, Cliff Skolnick, Randy Terbush, Robert S. Thau, and Andrew Wilson, with additional contributions from Eric Hagberg, Frank Peters, and Nicolas Pioch. Within a year of that first shipping, Apache had become the most popular web server on the internet. The distributions and sites continued to grow to the point that, in 1999, they formed the Apache Software Foundation to give financial, legal, and organizational support for Apache and to make sure the project outlived its original participants. They even started bringing other open source projects under that umbrella - projects like Tomcat. And the distributions of Apache grew. Mod_ssl, which brought SSL functionality to Apache, was released in 1998. And it grew. The first conference, ApacheCon, came in 2000. Douglas Adams was there. I was not. There were 17 million web sites at the time. The number of web sites hosted on Apache servers continued to rise. Apache 2 was released in 2002. The number of web sites hosted on Apache servers continued to rise. By 2009, Apache was hosting over 100 million websites. By 2013 Apache had added that it was named "out of a respect for the Native American Indian tribe of Apache". The history isn't the only thing that was rewritten. Apache itself was rewritten and is now distributed as Apache 2.0. There were over 670 million web sites by then. And we hit 1 billion sites in 2014. I can't help but wonder what percentage are collections of fart jokes. Probably not nearly enough. But an estimated 75% are inactive sites.
The job of a web server is to serve web pages on the internet. Those were initially flat HTML files but have gone on to include CGI, PHP, Python, Java, JavaScript, and others. A web browser is then used to interpret those files. It requests the .html or .htm file (or one of the many other file types that now exist), opens the page, loads the text, images, and included files, and processes any scripts. Both use the HTTP protocol; thus the URL begins with http, or https if the site is being hosted over SSL. Apache is responsible for providing the access to those pages over that protocol. The way the scripts are interpreted is through mods. These include mod_php, mod_python, mod_perl, etc. The modular nature of Apache makes it infinitely extensible. OK, maybe not infinitely. Nothing's really infinite. But the loadable dynamic modules do make the system more extensible. For example, you can easily get TLS/SSL using mod_ssl. The great thing about Apache and its mods is that anyone can adapt the server for generic uses, and they also let you address some pretty specific needs. And the server, as well as each of those mods, has its source code available on the Interwebs. So if it doesn't do exactly what you want, you can conform the server to your specific needs. For example, if you wanna' hate life, there's a mod for FTP. Out of the box, Apache logs connections, includes a generic expression parser, supports WebDAV and CGI, can support embedded Perl, PHP, and Lua scripting, can be configured for per-user public_html web pages, supports .htaccess files to limit access to various directories as one of a few authorization access controls, and allows for very in-depth custom logging and log rotation. Those logs include things like the name and IP address of a host as well as geolocations. It can rewrite headers, URLs, and content. It's also simple to enable proxies. Apache, along with Linux, MySQL, and PHP, became so popular that the term LAMP was coined, short for those products. The prevalence allowed the web development community to build hundreds or thousands of tools on top of Apache through the 90s and 2000s, including popular Content Management Systems, or CMS for short, such as WordPress, Mambo, and Joomla. Other built-in features include auto-indexing and content negotiation, reverse proxying with caching, multiple load balancing mechanisms, fault tolerance and failover with automatic recovery, WebSocket, FastCGI, SCGI, AJP, and uWSGI support with caching, dynamic configuration, name- and IP address-based virtual servers, gzip compression and decompression, Server Side Includes, user and session tracking, a generic expression parser, real-time status views, and XML support. Today we have several web servers to choose from. Engine-X, spelled Nginx, is a newer web server that was initially released in 2004. Apache uses a thread per connection and so can only process as many connections as it has threads available - by default 10,000 on Linux and macOS. NGINX uses an event-driven model rather than a thread per connection, so it can scale differently, and it is used by companies like AirBNB, Hulu, Netflix, and Pinterest. That 10,000 limit is easily controlled using concurrent connection limiting, request processing rate limiting, or bandwidth throttling. You can also scale with some serious load balancing and in-band health checks or with one of the many load balancing options. Having said that, Baidu.com, Apple.com, Adobe.com, and PayPal.com - all Apache. We also have other web servers provided by cloud services like Cloudflare and Google slowly increasing in popularity.
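To make that request/response job concrete, here is a minimal sketch of a web server using Python's standard library rather than Apache itself; the port number and the choice of the current directory are arbitrary assumptions for illustration.

```python
# A toy web server: listen for HTTP requests and serve flat files
# (index.html, images, ...) from the current directory, which is the same
# basic request/response job described above. This is Python's standard
# library, not Apache - just an illustration of the idea.
from http.server import HTTPServer, SimpleHTTPRequestHandler

# SimpleHTTPRequestHandler maps request paths to files in the current
# working directory and answers GET/HEAD requests over HTTP.
server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
print("Serving the current directory at http://localhost:8080/")
server.serve_forever()
```

Apache does the same basic job, but layers on the modules, virtual hosts, access controls, and logging described above.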
Tomcat is another web server, but Tomcat is almost exclusively used to run various Java servers, servlets, EL, WebSockets, etc. Today, each of the open source projects under the Apache Foundation has a Project Management Committee. These provide direction and management of the projects. New members are added when someone who contributes a lot to the project gets nominated to be a contributor, and then a vote is held requiring unanimous support. Commits require three yes votes with no no votes. It's all ridiculously efficient in a very open source hacker kinda' way. The Apache server's impact on the open-source software community has been profound. It is partly explained by the unique license from the Apache Software Foundation. The license was in fact written to protect the creators of Apache while giving access to the source code for others to hack away at it. The Apache License 1.1 was approved in 2000 and removed the requirement to attribute the use of the license in advertisements of software. Version two of the license came in 2004, which made the license easier to use for projects that weren't from the Apache Foundation, improved GPL compatibility, and allowed a single reference for the whole project rather than attributing the software in every file. The open source nature of Apache was critical to the growth of the web as we know it today. There were other projects to build web servers for sure. Heck, there were other protocols, like Gopher. But many died because of stringent licensing policies. Gopher did great until the University of Minnesota decided to charge for it. Then everyone realized it didn't have nearly as good graphics as the web. Today the web is one of the single largest growth engines of the global economy. And much of that is owed to Apache. So thanks, Apache, for helping us to alleviate a little of the suffering of the human condition for all creatures of the world. By the way, did you know you can buy hamster wheels on the web? Or cat food. Or flea meds for the dog. Speaking of which, I better get back to my chores. Thanks for taking time out of your busy schedule to listen! You should probably get to your chores as well, though. Sorry if I got you in trouble. But hey, thanks for tuning in to another episode of the History of Computing Podcast. We're lucky to have you. Have a great day!
Welcome to the History of Computing Podcast, where we explore the history of information technology. Because by understanding the past, we're able to be prepared for the innovations of the future! Today we're going to look at the emergence of the web through the lens of Netscape, the browser that pushed everything forward into the mainstream. The Netscape story starts back at the University of Illinois at Urbana-Champaign, where the National Center for Supercomputing Applications (or NCSA) inspired Marc Andreessen and Eric Bina to write Mosaic, which was originally called xmosaic and built for X11, or the X Window System. In 1992 there were only 26 websites in the world. But that was up from the 1 that Internet pioneer Tim Berners-Lee built at info.cern.ch in 1991. The web had really only been born a few years earlier, in 1989. But funded by the Gore Bill, Andreessen and a team of developers released the alpha version of the NCSA Mosaic browser in 1993 and ported it to Windows, Mac, and of course the Amiga. At this point there were about 130 websites. Version two of Mosaic came later that year, and then the National Science Foundation picked up the tab to maintain Mosaic from 94 to 97. James Clark, a co-founder of Silicon Graphics and a legend in Silicon Valley, took notice. He recruited some of the Mosaic team, led by Marc Andreessen, to start Mosaic Communications Corporation, which released Netscape Navigator in 1994, the same year Andreessen graduated from college. By then there were over 2,700 websites, and a lot of other people were taking notice after two years of four-digit growth. Yahoo! and Excite were released in 1994 and enjoyed an explosion in popularity, entering a field with 25 million people accessing such a small number of sites. Justin Hall was posting personal stuff on links.net, one of the earliest forms of what we now call blogging. Someone else couldn't help but notice: Bill Gates from Microsoft. He considered cross-platform web pages and the commoditization of the operating system to be a huge problem for his maturing startup called Microsoft, and famously sent The Internet Tidal Wave memo to his direct reports, laying out a vision for how Microsoft would respond to this threat. We got Netscape for free at the university, but I remember when I went to the professional world we had to pay for it. The look and feel of Navigator then can still be seen in modern browsers today. There was an address bar, a customizable home page, a status bar, and you could write little JavaScript to do cutesy things like make a message scroll here and there or make things blink. 1995 also brought us HTML frames, fonts on pages, the ability to change the background color, the ability to embed various forms of media, and image maps. Building sites back then was a breeze. And with an 80% market share for browsers, testing was simple: just open Netscape and view your page! Netscape was a press darling. They had insane fans that loved them. And while they hadn't made money yet, they did something that a lot of companies do now, but few did then: they went IPO early and raked in $600 million in their first day, turning poster child Marc Andreessen into an overnight sensation. They even started to say that the PC would live on the web - and it would do so using Netscape. Andreessen then committed the cardinal sin that put many in tech out of a job: he went after Microsoft, claiming they'd reduce Microsoft to a set of “poorly debugged device drivers.” Microsoft finally responded.
They had a meeting with Netscape and offered to acquire the company or they would put them out of business. Netscape lawyered up, claiming Microsoft offered to split the market up so that they owned Windows and left the rest to Netscape. Internet Explorer 1 was released by Microsoft in 1995 - a fork of Mosaic, which had been indirectly licensed from the code Andreessen had written while still working with the NCSA in college. And so began the “Browser Wars,” with Netscape 2 and Internet Explorer 2 released the same year. 1995 saw the web shoot up to over 23,000 sites. Netscape 2 added Netscape Mail, an email program with about as simple a name as Microsoft Mail, which had been in Windows since 1991. In 1995, Brendan Eich, a developer at Netscape, wrote SpiderMonkey, the original JavaScript engine, for a language many web apps still use today (just look for the .js extension). I was managing labs at the University of Georgia at the time and remember the fast pace at which we were upgrading these browsers. NCSA Telnet hadn't been updated in years, but it had never been as cool as this Netscape thing. Geocities popped up and I can still remember my first time building a website there and accessing the incredible amounts of content being built - and maybe even learning a thing or two while dinking around in those neighborhoods. 1995 had been a huge and eventful year, with nearly 45 million people now “on the web,” and Amazon, early search engines AltaVista and Lycos, and eBay launching as well. The search engine space sure was heating up… Then came 1996. Things got fun. Point releases of browsers came monthly. New features dropped with each release. Plugins for Internet Explorer leveraged API hooks into the Windows operating system that made pages only work on IE. Those of us working on pages had to update for both, and test for both. By the end of 1996 there were over a quarter million web pages and over 77 million people were using the web. Apple, The New York Times, and Dell.com appeared on the web, but 41 percent of people checked AOL regularly and other popular sites would be from ISPs for years to come. Finally, after a lot of talk and a lot of point releases, Netscape 3 was released in 1996. JavaScript got a rev, a lot of styling elements some still use today, like tables and frames, came out, and forms could be filled out automatically. There was also a Gold version of Netscape 3 that allowed editing pages. But Dreamweaver gave us a nice WYSIWYG to build web pages that was far more feature rich. Netscape got buggier; they bit off more and more, spreading developers thin. They just couldn't keep up. And Internet Explorer was made free in Windows as of IE 3 and had become equal to Netscape. The Browser Wars ended when Netscape decided to open source their code in 1998, creating the Mozilla project from the Netscape browser suite source code. This led to Waterfox, Pale Moon, SeaMonkey, Iceweasel, Wyzo, and of course Tor Browser, Swiftfox, Swift Weasel, Timberwolf, TenFourFox, Comodo IceDragon, CometBird, Basilisk, Cliqz, AT&T Pogo, IceCat, and Flock. But most importantly, Mozilla released Firefox themselves, which still maintains between 8 and 10 percent market share for browser usage, depending on who you ask. Of course, ultimately everyone lost the browser wars now that Chrome owns a 67% market share!
Netscape was sold to AOL in 1999 for $4.2 billion, the first year they dropped out of the website popularity contest called the top 10. At this point, Microsoft controlled the market with an 80% market share. That was the first year Amazon showed up on the top list of websites. The Netscape problems continued. AOL released Netscape 6 in 2000, which was buggy, and I remember a concerted effort at the time to start removing Netscape from computers. In 2003, after being acquired by Time Warner, AOL finally killed off Netscape. This was the same year Apple released Safari. They released 7.2 in 2004 after outsourcing some of the development. Netscape 9, a port of Firefox, was released in 2007. The next year Google Chrome was released. Today, Mozilla is a half-billion dollar a year not-for-profit. They ship the Firefox browser, the Firefox OS mobile OS, the online file sharing service Firefox Send, the Bugzilla bug tracking tool, the Rust programming language, the Thunderbird email client, and other tools like SpiderMonkey, which is still the JavaScript engine embedded into Firefox and Thunderbird. If the later stage of Netscape's code, in the form of the open source Mozilla projects, appeals to you, consider becoming a Mozilla Rep. You can help contribute, promote, document, and build the community with other passionate and knowledgeable humans that are on the forefront of pushing the web into new and beautiful places. For more on that, go to reps.mozilla.org. Andreessen went on to build Opsware with Ben Horowitz (who's not a bad author) and others. He sold the hosting business, and in 2009 he and Horowitz founded Andreessen Horowitz, which was an early investor in Facebook, Foursquare, GitHub, Groupon, LinkedIn, Pinterest, Twitter, Jawbone, Zynga, Skype, and many, many others. He didn't win the browser wars, but he has been at the center of helping to shape the Internet as we know it today, and due to the open sourcing of the source code many other browsers popped up. The advent of the cloud has also validated many of his early arguments about the web making computer operating systems more of a commodity. Anyone who's used Office 365 online or Google apps can back that up. Ultimately, the story of Netscape could be looked at as yet another “Bill Gates screwed us” story. But I'm not sure that does it justice. Netscape did as much to shape the Internet in those early days as anything else. Many of those early contributions, like the open nature of the Internet, various languages and techniques, and of course the code in the form of Mozilla, live on today. There were other browsers, and the Internet might have grown to what it is today. But we might not have had as much of the velocity without Andreessen and Netscape, and specifically the heated competition that led to so much innovation in such a short period of time - so we certainly owe them our gratitude that we've come as far as we have. And I owe you my gratitude. Thank you so very much for tuning into another episode of the History of Computing Podcast. We're lucky to have you. Have a great day!
We are all familiar with gravity's effects on Earth, but many of us have yet to learn what gravitational waves mean in the context of the Universe. Dr. Eliu Huerta, head of the Gravity Group at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, joins us today to answer our simple and complex questions about gravitational wave analysis. A theoretical astrophysicist by training, Dr. Huerta focuses his research on developing artificial intelligence that aids scientists in detecting and analyzing gravitational wave observations. These observations provide information about black holes and neutron stars in the Universe. Dr. Huerta explains how we quantify gravitational waves and answers questions varying in complexity, from “What is gravity?” to “What is gravity in the context of space-time?” Tune in to hear about the previous, current, and future fascinating technologies of astrophysics. For more information, visit http://gravity.ncsa.illinois.edu/ and http://www.ncsa.illinois.edu/.
Juhan Sonin, designer, researcher, and MIT lecturer. Juhan specializes in software design and systems engineering. He has worked at Apple, the National Center for Supercomputing Applications, Massachusetts Institute of Technology (MIT), and MITRE. I had the opportunity to record this episode in Juhan’s GoInvo studio office, where he is the company’s Creative Director. Website: https://www.goinvo.com/ WE MUST SET HEALTHCARE FREE: Opensourcehealthcare.org Udemy Blockchain/Healthcare Course ($125 off with HEALTHUNCHAINED coupon): https://www.udemy.com/blockchain-and-healthcare/?couponCOde=HEALTHUNCHAINED Show Notes •Software Design and System Engineering •Asynchronous telemedicine •People don’t really care about their health until we are unwell •Blockchain use case to access medical records and proxy it from anywhere with internet •Location of conception will be part of your life (health) data •Ownership and co-ownership models for health data •Data Use Agreements •Open Genome Project •You’ve put your data out on the internet and your genetic data is open-sourced. Have you had any unexpected consequences from that decision? •Health Data Standards •Open-source Standard Health Record: http://standardhealthrecord.org/ •Data exchange problems are not only business and technology issues but generally human issues •Determinants of Health •Robot doctors and the future of healthcare •Black-box healthcare algorithms should be •Open source is the only way for Medicine https://medium.com/@marcus_baw/open-source-is-the-only-way-for-medicine-9e698de0447e •Primary Care Manifesto •Patients’ interests in owning their own health •Favorite books: The Elements of Style by William Strunk Jr; Automating Inequality by Virginia Eubanks; Democracy in Chains by Nancy MacLean; The Color of Law by Richard Rothstein News Corner: https://hitinfrastructure.com/news/aetna-ascension-sign-on-to-healthcare-blockchain-alliance On Dec 3rd, two new organizations announced that they will be joining the Alliance to be part of its first pilot project, which seeks to determine if applying blockchain technology can help ensure the most current information about healthcare providers is available in the provider directories maintained by health insurers. The two organizations are Aetna, one of the top 3 health insurance companies in the US with $60 billion in revenue in 2017, AND Ascension, the largest Catholic health system in the world and the largest non-profit health system in the US. To me this is really exciting news because Aetna recently merged with CVS Health, making the combined provider directory information from these organizations huge.
Welcome Tech News Pivotal IPO Facebook Revenue Surges Hotel Master Key Hack Sports News NFL - Draft NBA - playoffs NHL - playoffs Placing Bets Who will be the most worn out in Vegas? Big Topic Infinity Wars!!! This week/day in tech history April 25, 1990: Hubble Space Telescope deployed April 23, 2005: 1st YouTube video uploaded. Me at the zoo. April 22, 1993: Mosaic released by National Center for Supercomputing Applications. 1st web GUI, lead dev was Marc Andreessen. Final Word
On today’s episode of “The Interview” with The Next Platform we talk about the use of petascale supercomputers for training deep learning algorithms. More specifically, how this is happening in astronomy to enable real-time analysis of LIGO detector data. We are joined by Daniel George, a researcher in the Gravity Group at the National Center for Supercomputing Applications, or NCSA. His team garnered a great deal of attention at the annual supercomputing conference in November with work blending traditional HPC simulation data and deep learning.
What is spacetime? (Hold on tight!) What is a spacetime continuum? (Testing Einstein's Universe, Stanford University) What is spacetime? (Wikipedia) What is spacetime, really? (Stephen Wolfram) CERN scientists simplify spacetime in 3 short videos (Ted-Ed) Golden syrup (CSR) What is at the edge of the universe? (Futurism) Scientists glimpse 'dark flow' lurking beyond the edge of the universe (The Telegraph) What lies beyond the edge of the observable universe (The Daily Galaxy) How far can we travel in space?...turns out we'll only ever see 0.00000000001% of the universe (Devour) Warning: take with a grain of salt - the balloon analogy of the expanding universe (Physics Forums) Brian Cox (Wikipedia) The Big Bang theory (ESA kids) The Big Bang theory (BBC) The universe's photo album: Chronology of the universe (Wikipedia) Everything in the universe came out of the Big Bang (Why-Sci) The initial singularity is proposed to have contained all the mass & spacetime of the universe...then BOOM! (Wikipedia) So what was there before the Big Bang?...There's no such thing as nothing (Jon Kaufman) What is nothing? Physics debate (livescience) Why is there something rather than nothing? (BBC) The beginning of time (Stephen Hawking) The illusion of time: What's real? (Space.com) At the third stroke: George the talking clock now on atomic time (SMH) What is redshift? (BBC) Red shift & the expanding universe (Exploratorium, Hubble) Cosmological red shift (Cosmos, Swinburne University) The Doppler Effect - animations (UNSW, School of Physics) Redshift occurs when an object goes further away; blueshift when it's coming closer (Space.com) What is gravity, really? (NASA Space Place) Space as a rubber sheet (University of Winnipeg) Gravity visualised - the rubber sheet in action (YouTube) Objects with mass bend spacetime - even you! (American Museum of Natural History) Gravity is still a mystery (livescience) Brian Cox explains gravity & all things General Relativity (The Infinite Monkey Cage, podcast) What is a gravitational well? (Qualitative Reasoning Group, Northwestern University) What is a Higgs Boson? Explained by a Fermilab scientist (YouTube) The Higgs Boson & mass: Universe doomsday? (livescience) Newton & his apple (New Scientist) As the earth rotates, we're moving at about 1,000 miles/hr or 1,600 km/hr (Scientific American) How fast are you moving when you're sitting still? (Astronomical Society of the Pacific) Circumference of a circle: 2πr, where r = radius (BBC) We're travelling at ~1.6 million miles/day around the sun (Physics & Astronomy Online) Boy Meets Girl wines Naked Wines How do we know this is all true? 
Putting relativity to the test (National Center for Supercomputing Applications) The Mercury transit of the sun test (Wikipedia) The Mercury transit of the sun test (National Center for Supercomputing Applications) The bending of star light around the sun test (Wikipedia) The bending of star light around the sun test (National Center for Supercomputing Applications) Original newspaper clipping from Arthur Eddington's 1919 light bending experiment (Testing Einstein's Universe, Stanford University) An original photo from 1919 of light bending around the sun (Wikipedia) May 29, 1919: A major eclipse, relatively speaking (Wired) Space & time warps (Stephen Hawking) Picture: bending of spacetime around Earth (The Conversation) Picture: bending of spacetime around the sun (Wikipedia) The 3D-spacetime episode of the Simpsons - audio a bit crackly, but whatever (YouTube) Special relativity came first in 1905 - then general relativity was developed in 1907-1915 (Wikipedia) Time isn't constant throughout the universe - it's aaall relative (Physics for Idiots) Newsflash: Time may not exist (Discover) Einstein reckons 'time travel' is possible (NASA) How the Star Trek transporter works (Wikia) The Star Trek warp drive lets them travel faster than light speed (Wikipedia) Warp drives & transporters: How Star Trek technology works (Space.com) Gravitational waves are 'ripples' in the fabric of spacetime (LIGO) LIGO can detect gravitational waves (LIGO) Why should we care about gravitational waves? (LIGO) Gravitational waves are proof that space & time are getting stretched (ABC, Australia) Light behaves as both a particle & a wave - here's the first ever photo of that (Phys.org) The real reason nothing can ever go faster than light (BBC) Gary Lineker (Wikipedia) There's something called 'spacetime foam' ... mad! (Wikipedia) Corrections Johnny may have got the Mercury transit & light bending tests mixed up: The light bending was the 'great exciting newspaper front page' (Testing Einstein's Universe, Stanford University) Can't find support for Johnny's 'light travels on a crisp' theory, but here's some smart people debating 'What stops photons from traveling faster than the speed of light' (Quora) Cheeky review? (If we may be so bold) It'd be amazing if you gave us a short review...it'll make us easier to find in iTunes: Click here for instructions. You're the best! We owe you a free hug and/or a glass of wine from our cellar
Brian O’Shea is an associate professor with a joint appointment to Lyman Briggs College and the Department of Physics and Astronomy in the College of Natural Science. Brian is a theoretical astrophysicist whose research focuses on galaxy formation and evolution. He uses supercomputers to perform large-scale numerical simulations of the formation of cosmological structure, starting from the first stars that form in the universe and continuing to the present day. He is particularly interested in the properties of clusters of galaxies, which have the potential to be useful probes of the fundamental properties of our universe. Brian teaches physics and astronomy courses at all levels, from introductory mechanics to graduate-level astrophysics courses, and is one of the instructors in the Lyman Briggs 271/272 introductory physics course sequence. Brian has also collaborated with the Advanced Visualization Laboratory at the National Center for Supercomputing Applications to make movies for PBS Nova, Discovery Channel, and National Geographic television documentaries, as well as for the Denver Planetarium, the Hayden Planetarium in New York, and the Adler Planetarium in Chicago. He earned his doctorate in physics from the University of Illinois at Urbana-Champaign.
Bro is a passive, open-source network traffic analyzer and was originally developed by Vern Paxson, who continues to lead the project now jointly with a core team of researchers and developers at the International Computer Science Institute in Berkeley, CA; and the National Center for Supercomputing Applications in Urbana-Champaign, IL. Liam Randall and Seth Hall are on to give us additional insight into how Bro IDS is used.
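In practice, Bro's analyzers write their findings to tab-separated ASCII logs (conn.log, http.log, and so on), which analysts routinely post-process with small scripts. Here is a minimal sketch assuming that standard log layout, where lines starting with "#" are metadata and the "#fields" line names the columns; the file name and the fields picked out are just illustrative, not a specific recommendation from the episode.

```python
import sys

def parse_bro_log(path):
    """Yield one dict per record from a Bro-style tab-separated log file."""
    fields = []
    with open(path) as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("#fields"):
                # Header line naming the columns, e.g. "#fields<TAB>ts<TAB>uid<TAB>id.orig_h..."
                fields = line.split("\t")[1:]
            elif line.startswith("#") or not line:
                continue  # other metadata lines (#separator, #types, #close, ...)
            else:
                yield dict(zip(fields, line.split("\t")))

if __name__ == "__main__":
    # Example: print originator -> responder pairs and the detected service from a connection log
    for rec in parse_bro_log(sys.argv[1] if len(sys.argv) > 1 else "conn.log"):
        print(rec.get("id.orig_h"), "->", rec.get("id.resp_h"), rec.get("service"))
```

This kind of lightweight log parsing is one common way Bro IDS output gets folded into ad hoc incident analysis alongside the scripting framework itself.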
Considering the hazards that humans face due to (a) natural disasters, (b) failures of human attention to hazards, or (c) intentional harmful behavior, we address the problem of building hazard aware spaces (HAS) to alert people at risk. We have researched and developed components of a prototype HAS system for detecting fire using wireless "smart" micro-electro-mechanical systems (MEMS) sensors, such as the MICA sensors, and spectral cameras, for instance thermal infrared (IR), visible-spectrum, and multi-spectral cameras. Within this context, my presentation overviews technical challenges and prototype scientific solutions to (1) robotic sensor deployment, (2) localization of sensors and objects, (3) synchronization of sensors and cameras, (4) calibration of spectral cameras and sensors, (5) proactive camera control, (6) hazard detection, (7) human alert, (8) hazard confirmation, and (9) hazard understanding and containment. The work presented will also cover theoretical and practical limitations that must be understood when working with novel technologies. http://www.ncsa.uiuc.edu/people/pbajcsy/ About the speaker: Peter Bajcsy earned his Ph.D. from the Electrical and Computer Engineering Department, University of Illinois at Urbana-Champaign, IL, in 1997, his M.S. from the Electrical Engineering Department, University of Pennsylvania, Philadelphia, PA, in 1994, and his Diploma Engineer degree from the Electrical Engineering Department, Slovak Technical University, Bratislava, Slovakia, in 1987. He is currently a research scientist with the Automated Learning Group at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (UIUC), and offers seminars and advises students as an adjunct assistant professor in the CS and ECE Departments at UIUC. Dr. Bajcsy
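To make steps (6)-(8) of that pipeline concrete, here is a minimal, hypothetical sketch of the kind of fusion rule a hazard aware space might apply: only raise a fire alert when a MEMS temperature reading and a thermal-camera hotspot agree. The thresholds, data structures, and function names are illustrative assumptions, not the prototype described in the talk.

```python
from dataclasses import dataclass

# Illustrative thresholds -- assumptions for this sketch, not values from the HAS prototype
TEMP_ALARM_C = 60.0        # MEMS temperature reading considered suspicious
HOT_PIXEL_C = 150.0        # thermal-IR pixel temperature treated as a hotspot
MIN_HOT_PIXELS = 25        # hotspot size needed to confirm the sensor reading

@dataclass
class SensorReading:
    sensor_id: str
    temperature_c: float

def hotspot_pixels(ir_frame):
    """Count pixels in a thermal-IR frame (2D list of degrees C) above the hotspot threshold."""
    return sum(1 for row in ir_frame for px in row if px >= HOT_PIXEL_C)

def fire_alert(reading: SensorReading, ir_frame) -> bool:
    """Hazard detection + confirmation: require both modalities to agree before alerting."""
    return reading.temperature_c >= TEMP_ALARM_C and hotspot_pixels(ir_frame) >= MIN_HOT_PIXELS

# Example: a hot MEMS reading confirmed by a 10x10 IR frame containing a 5x5 hot patch
frame = [[20.0] * 10 for _ in range(10)]
for r in range(5):
    for c in range(5):
        frame[r][c] = 200.0
print(fire_alert(SensorReading("mica-07", 72.5), frame))  # True
```

Requiring agreement between two independent modalities is one simple way to reduce false alarms before escalating to the human-alert stage.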
Large-scale collaborative applications are characterized by a large number of users and other processing end entities that are distributed over geographically disparate locations. Therefore, these applications use messaging infrastructures that scale to the application's needs and enable users to process messages without concern for message transmission and delivery. Widespread use of these infrastructures is hindered by the need for scalable security services; viz., services for confidentiality, integrity, and authentication. Current solutions for providing security for these systems use trusted servers (or a network of servers), which consequently bear the significant trust liability of maintaining confidentiality, integrity, and authentication for the messages and keys they process. In this talk we look at current approaches for secure messaging in three commonly used messaging infrastructures: email, group communication, and publish/subscribe. We then show how novel encryption techniques can be used to minimize trust liabilities in these infrastructures in a scalable manner. We are in the process of developing prototypes of our solutions. We will discuss the prototype designs and present some initial experimentation results. About the speaker: Dr. Himanshu Khurana received his MS in 1999 and his PhD in 2002, both from the University of Maryland. He worked as a postdoctoral researcher at the Institute for Systems Research, University of Maryland, from 2002 to 2003. Dr. Khurana is currently a Senior Security Engineer at the National Center for Supercomputing Applications. His research interests are in network and distributed system security, and he is currently working on projects in secure messaging, dynamic coalitions, web services, and wireless sensor networks. While at the University of Maryland he led the prototype development of tools for secure dynamic coalitions, which were selected for the Joint Warrior Integration Demonstration (JWID) in 2004.
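As a rough illustration of the "minimize trust in the server" idea (not the specific encryption techniques from the talk), the sketch below encrypts a publish/subscribe message with a key shared only by group members, so the broker merely relays ciphertext it can neither read nor undetectably modify. It assumes the third-party `cryptography` package; the `broker_queue` name is made up for the example.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Key distributed out-of-band to group members only; the broker never sees it.
group_key = Fernet.generate_key()
publisher_cipher = Fernet(group_key)

# Publisher side: encrypt (and authenticate) the payload before handing it to the broker.
broker_queue = []  # stand-in for an untrusted messaging server
broker_queue.append(publisher_cipher.encrypt(b"turbine 4 pressure anomaly"))

# Broker side: stores and forwards opaque tokens, so it bears no confidentiality liability.
print(broker_queue[0][:16], b"...")

# Subscriber side: decryption also verifies the token's integrity, i.e. that it was
# produced by a holder of the group key and not altered in transit.
plaintext = Fernet(group_key).decrypt(broker_queue[0])
print(plaintext.decode())
```

The remaining trust problem, which the talk's techniques target more directly, is how to distribute and manage such keys scalably without reintroducing a fully trusted server.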