This week's Start Eldorado features a special on Smart City Expo 2025 in Curitiba, the largest smart-city event in the Americas. Bringing together national and international authorities and experts, the expo showcased innovation and technology solutions that are shaping a new era of digital municipalities, driving urban solutions that are more sustainable, connected, and human. In this program you'll hear Ana Carolina Benelli, consultant to UNESCO and the World Bank, researcher in digital government, and director of Linked Data, winner of the Smart City Brazilian Awards; Evisson Lucena, Secretary of Innovation of Recife; and José Renato Gonçalves, president of NEC in Brazil. Hosted by Daniel Gonzales, the program airs every Wednesday at 9 p.m. on FM 107.3 across Greater São Paulo, as well as on the station's website, app, digital channels, and voice assistants.
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop welcomes Jessica Talisman, a senior information architect deeply immersed in the worlds of taxonomy, ontology, and knowledge management. The conversation spans the evolution of libraries, the shifting nature of public and private access to knowledge, and the role of institutions like the Internet Archive in preserving digital history. They also explore the fragility of information in the digital age, the ongoing battle over access to knowledge, and how AI is shaping, and being shaped by, structured data and knowledge graphs. To connect with Jessica Talisman, you can reach her via LinkedIn. Check out this GPT we trained on the conversation!

Timestamps
00:05 – Libraries, Democracy, Public vs. Private Knowledge: Jessica explains how libraries have historically shifted between public and private control, shaping access to knowledge and democracy.
00:10 – Internet Archive, Cyberattacks, Digital Preservation: Stewart describes visiting the Internet Archive post-cyberattack, sparking a discussion on threats to digital preservation and free information.
00:15 – AI, Structured Data, Ontologies, NIH, PubMed: Jessica breaks down how AI trains on structured data from sources like NIH and PubMed but often lacks alignment with authoritative knowledge.
00:20 – Linked Data, Knowledge Graphs, Semantic Web, Tim Berners-Lee: They explore how linked data enables machines to understand connections between knowledge, referencing the vision behind the semantic web.
00:25 – Entity Management, Cataloging, Provenance, Authority: Jessica explains how libraries are transitioning from cataloging books to managing entities, ensuring provenance and verifiable knowledge.
00:30 – Digital Dark Ages, Knowledge Loss, Corporate Control: Stewart compares today's deletion of digital content to historical knowledge loss, warning about the fragility of digital memory.
00:35 – War on Truth, Book Bans, Algorithmic Bias, Censorship: They discuss how knowledge suppression, from book bans to algorithmic censorship, threatens free access to information.
00:40 – AI, Search Engines, Metadata, Schema.org, RDF: Jessica highlights how AI and search engines depend on structured metadata but often fail to prioritize authoritative sources.
00:45 – Power Over Knowledge, Open vs. Closed Systems, AI Ethics: They debate the battle between corporations, governments, and open-source efforts to control how knowledge is structured and accessed.
00:50 – Librarians, AI Misinformation, Knowledge Organization: Jessica emphasizes that librarians and structured knowledge systems are essential in combating misinformation in AI.
00:55 – Future of Digital Memory, AI, Ethics, Information Access: They reflect on whether AI and linked data will expand knowledge access or accelerate digital decay and misinformation.

Key Insights
The Evolution of Libraries Reflects Power Struggles Over Knowledge: Libraries have historically oscillated between being public and private institutions, reflecting broader societal shifts in who controls access to knowledge. Jessica Talisman highlights how figures like Andrew Carnegie helped establish the modern public library system, reinforcing libraries as democratic spaces where information is accessible to all. However, she also notes that as knowledge becomes digitized, new battles emerge over who owns and controls digital information.
The Internet Archive Faces Systematic Attacks on Knowledge: Stewart Alsop shares his firsthand experience visiting the Internet Archive just after it had suffered a major cyberattack. This incident is part of a larger trend in which libraries and knowledge repositories worldwide, including those in Canada, have been targeted. The conversation raises concerns that these attacks are not random but part of a broader, well-funded effort to undermine access to information.
AI and Knowledge Graphs Are Deeply Intertwined: AI systems, particularly large language models (LLMs), rely on structured data sources such as knowledge graphs, ontologies, and linked data. Talisman explains how institutions like the NIH and PubMed provide openly available, structured knowledge that AI systems train on. Yet she points out a critical gap: AI often lacks alignment with real-world, authoritative sources, which leads to inaccuracies in machine-generated knowledge.
Libraries Are Moving From Cataloging to Entity Management: Traditional library systems were built around cataloging books and documents, but modern libraries are transitioning toward entity management, which organizes knowledge in a way that allows for more dynamic connections. Linked data and knowledge graphs enable this shift, making it easier to navigate vast repositories of information while maintaining provenance and authority.
The War on Truth and Information Is Accelerating: The episode touches on the increasing threats to truth and reliable information, from book bans to algorithmic suppression of knowledge. Talisman underscores the crucial role librarians play in preserving access to primary sources and maintaining records of historical truth. As AI becomes more prominent in knowledge dissemination, the need for robust, verifiable sources becomes even more urgent.
Linked Data Is the Foundation of Digital Knowledge: The conversation explores how linked data protocols, such as those championed by Tim Berners-Lee, allow machines and AI to interpret and connect information across the web. Talisman explains that institutions like NIH publish their taxonomies in RDF format, making them accessible as structured, authoritative sources. However, many organizations fail to leverage this interconnected data, leading to inefficiencies in knowledge management.
Preserving Digital Memory Is a Civilization-Defining Challenge: In the digital age, the loss of information is more severe than ever. Alsop compares the current state of digital impermanence to the Dark Ages, where crucial knowledge risks disappearing due to corporate decisions, cyberattacks, and a lack of preservation infrastructure. Talisman agrees, emphasizing that digital archives like the Internet Archive, WorldCat, and Wikimedia are foundational to maintaining a collective human memory.
Five Business Models for Linked Data Hello, this is Hall T. Martin with the Startup Funding Espresso -- your daily shot of startup funding and investing. Linked data is structured data that is linked to other data, making it more useful for searches and queries. A wiki site often holds linked data sets. Here's a list of five business models around the use of linked data. Subscription revenue, such as monetizing the data directly. This works well for users who need a continuous feed of new data. Advertising revenue, by placing ads around the data. This works well for non-subscription applications, such as one-time-use data. Authority revenue, such as providing validation or compliance on the data. This works well for users who need to verify the validity of the data. Affiliate revenue, such as including links in the data. This can be used for ecommerce applications. Value-add data revenue, by combining multiple sources of data to create a new data set. An example is adding demographics and purchase history on prospective customers. This gives insight into what products the prospect may be interested in. You can also combine one or more of these models into one application. Consider these five monetization models for your linked data set. Thank you for joining us for the Startup Funding Espresso, where we help startups and investors connect for funding. Let's go startup something today. For feedback, please contact info@tencapital.group
In this episode we take a closer look at the SOLID project. The project comes from the inventor of the World Wide Web, Tim Berners-Lee, and points the way toward Web 3.0. We examine the current problems of the internet and explore how SOLID answers questions of data protection and privacy. With compelling excerpts from Tim Berners-Lee's interviews, we discuss the feasibility of, and the urgent need for, its principles. We explain SOLID's core principle, which is based on Linked Data, and focus on pods: decentralized data containers that give users back control over their information. How is data stored in SOLID? What decisive advantage does this offer compared to the current internet landscape? Learn how SOLID breaks up traditional data storage and gives users full control over their data. We also discuss questions around implementing SOLID's requirements, including performance aspects and possible challenges when data disappears from pods. Join us on our social media channels to discuss how SOLID is already being used in practice today and which key questions still need to be answered to advance this pioneering technology!

CHAPTERS
(00:00) Intro
(01:10) The problem with today's internet
(06:00) Tim Berners-Lee on Linked Data
(07:59) SOLID's core principle
(10:08) What are pods?
(11:26) Data storage
(13:35) Security providers for authentication
(14:52) The advantage over the current world
(16:12) Challenges and skepticism
(19:30) Is there a future for SOLID?
(32:59) How to get involved

LINKS
Clip on "Raw Data Now" from the TED Talk: https://youtube.com/clip/UgkxgbZjp56AOHcPttfFJ_YuFcZj-ZSt6Egw
Clip on "Power to the people" from the 2023 interview: https://www.youtube.com/clip/UgkxSC6aFWT2LQByzxRHuQBwtIdoaAI1gM4Z
https://solidproject.org/
https://solid.mit.edu/
Existing apps that implement SOLID: https://solidproject.org/apps
SOLID tools and libraries: https://solidproject.org/developers/tools/
SOLID getting-started tutorials: https://solidproject.org/developers/tutorials/getting-started
What is Social Linked Data: https://www.csail.mit.edu/research/solid-social-linked-data
GitHub repo and contributors: https://github.com/solid/solid/blob/main/CONTRIBUTORS.md
Solid Forum: https://forum.solidproject.org/
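The pod idea discussed in the episode can be sketched in a few lines of Python. This is a toy model of the core principle only (user-controlled storage plus per-resource access grants), not the actual Solid protocol, which works over HTTP with Linked Data resources and WebID-based access control lists; the class and names below are invented for illustration.

```python
# Toy sketch of a Solid-style pod: data lives in a store the user
# controls, and each application must be granted access per resource.
# NOT the real Solid protocol; illustrative only.

class Pod:
    def __init__(self, owner):
        self.owner = owner
        self._resources = {}   # path -> stored data
        self._acl = {}         # path -> set of agents allowed to read

    def write(self, path, data):
        self._resources[path] = data
        # By default only the owner may read a new resource.
        self._acl.setdefault(path, {self.owner})

    def grant(self, path, agent):
        # The path must already exist; grants extend its ACL.
        self._acl[path].add(agent)

    def read(self, path, agent):
        if agent not in self._acl.get(path, set()):
            raise PermissionError(f"{agent} may not read {path}")
        return self._resources[path]

pod = Pod(owner="alice")
pod.write("/profile/card", {"name": "Alice"})
pod.grant("/profile/card", "photo-app")
print(pod.read("/profile/card", "photo-app"))  # {'name': 'Alice'}
```

The inversion this illustrates is the one the episode emphasizes: the application comes to the user's data store and asks permission, rather than the data living inside the application.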
Summary A significant amount of time in data engineering is dedicated to building connections and semantic meaning around pieces of information. Linked data technologies provide a means of tightly coupling metadata with raw information. In this episode Brian Platz explains how JSON-LD can be used as a shared representation of linked data for building semantic data products. Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold) Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack) You shouldn't have to throw away the database to build with fast-changing data. 
You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It's the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it's real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free! If you're a data person, you probably have to jump between different tools to run queries, build visualizations, write Python, and send around a lot of spreadsheets and CSV files. Hex brings everything together. Its powerful notebook UI lets you analyze data in SQL, Python, or no-code, in any combination, and work together with live multiplayer and version control. And now, Hex's magical AI tools can generate queries and code, create visualizations, and even kickstart a whole analysis for you – all from natural language prompts. It's like having an analytics co-pilot built right into where you're already doing your work. Then, when you're ready to share, you can use Hex's drag-and-drop app builder to configure beautiful reports or dashboards that anyone can use. Join the hundreds of data teams like Notion, AllTrails, Loom, Mixpanel and Algolia using Hex every day to make their work more impactful. Sign up today at dataengineeringpodcast.com/hex (https://www.dataengineeringpodcast.com/hex) to get a 30-day free trial of the Hex Team plan! Your host is Tobias Macey and today I'm interviewing Brian Platz about using JSON-LD for building linked-data products Interview Introduction How did you get involved in the area of data management? 
Can you describe what the term "linked data product" means and some examples of when you might build one? What is the overlap between knowledge graphs and "linked data products"? What is JSON-LD? What are the domains in which it is typically used? How does it assist in developing linked data products? What are the characteristics that distinguish a knowledge graph from a "linked data product"? What are the layers/stages of applications and data that can/should incorporate JSON-LD as the representation for records and events? What is the level of native support/compatibility that you see for JSON-LD in data systems? What are the modeling exercises that are necessary to ensure useful and appropriate linkages of different records within and between products and organizations? Can you describe the workflow for building autonomous linkages across data assets that are modelled as JSON-LD? What are the most interesting, innovative, or unexpected ways that you have seen JSON-LD used for data workflows? What are the most interesting, unexpected, or challenging lessons that you have learned while working on linked data products? When is JSON-LD the wrong choice? What are the future directions that you would like to see for JSON-LD and linked data in the data ecosystem? Contact Info LinkedIn (https://www.linkedin.com/in/brianplatz/) Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. 
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story. To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers. Links Fluree (https://flur.ee/) JSON-LD (https://json-ld.org/) Knowledge Graph (https://en.wikipedia.org/wiki/Knowledge_graph) Adjacency List (https://en.wikipedia.org/wiki/Adjacency_list) RDF == Resource Description Framework (https://www.w3.org/RDF/) Semantic Web (https://en.wikipedia.org/wiki/Semantic_Web) Open Graph (https://ogp.me/) Schema.org (https://schema.org/) RDF Triple (https://en.wikipedia.org/wiki/Semantic_triple) IDMP == Identification of Medicinal Products (https://www.fda.gov/industry/fda-data-standards-advisory-board/identification-medicinal-products-idmp) FIBO == Financial Industry Business Ontology (https://spec.edmcouncil.org/fibo/) OWL Standard (https://www.w3.org/OWL/) NP-Hard (https://en.wikipedia.org/wiki/NP-hardness) Forward-Chaining Rules (https://en.wikipedia.org/wiki/Forward_chaining) SHACL == Shapes Constraint Language (https://www.w3.org/TR/shacl/) Zero Knowledge Cryptography (https://en.wikipedia.org/wiki/Zero-knowledge_proof) Turtle Serialization (https://www.w3.org/TR/turtle/) The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
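To make the episode's subject concrete, here is a minimal JSON-LD record whose @context maps plain JSON keys to schema.org IRIs, followed by a deliberately naive key-expansion step. The document and the helper function are invented for illustration; real processors (e.g. the pyld library) implement the full expansion algorithm from the JSON-LD specification, which also handles nesting, typed values, and language maps.

```python
import json

# A minimal JSON-LD document: @context is what turns ordinary JSON
# keys into globally unique IRIs, so independent datasets can link up.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "knows": {"@id": "http://schema.org/knows", "@type": "@id"},
    },
    "@id": "http://example.org/people/alice",
    "name": "Alice",
    "knows": "http://example.org/people/bob",
}

def expand_keys(doc):
    """Naive, flat-document sketch of expansion: replace each context
    term with its full IRI. Keywords like @id pass through unchanged."""
    ctx = doc.get("@context", {})
    out = {}
    for key, value in doc.items():
        if key == "@context":
            continue
        term = ctx.get(key)
        iri = term["@id"] if isinstance(term, dict) else term
        out[iri or key] = value
    return out

expanded = expand_keys(doc)
print(json.dumps(expanded, indent=2))
```

After expansion the record reads as globally identified statements about `http://example.org/people/alice`, which is exactly the property that lets JSON-LD serve as a shared representation across linked data products.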
Title "Hospital-service use in the last year of life by patients aged ⩾60 years who died of heart failure or cardiomyopathy: A retrospective linked data study" Description This episode features Dr Gursharan K Singh (Centre for Healthcare Transformation, Faculty of Health, Queensland University of Technology (QUT), Brisbane, QLD, Australia). What is already known about the topic? - Evidence-based guidelines recommend palliative care for individuals with advanced heart failure, yet there is limited data on acute hospital and palliative care service use prior to death to inform care planning and end-of-life services. What this paper adds - Patients were generally aged ⩾80 years and experienced repeat acute hospitalisations in the year preceding death. - Over half died in hospital, without a hospitalisation for palliative care recorded. - For those who had a hospitalisation for palliative care recorded, this mostly occurred in the last 7 days of life. Implications for practice, theory, or policy - Improving access to palliative care services in the outpatient or community setting is needed to reduce avoidable hospitalisations. - Improving access to timely palliative care, including end-of-life care, for patients with heart failure is needed. Full paper available from: https://journals.sagepub.com/doi/full/10.1177/02692163231180912 If you would like to record a podcast about your published (or accepted) Palliative Medicine paper, please contact Dr Amara Nwosu: a.nwosu@lancaster.ac.uk
Terhi is a Senior Lecturer in Digital Humanities at the Australian National University. Her research focuses on interdisciplinary experimentation into ways digital technologies can support and diversify research in the Humanities, Arts, and Social Sciences, and in relation to public culture (including Web Science, and the cultural heritage sector). Her book "Linked Data for the Digital Humanities" is currently under review with Routledge. She's currently working on Liberal Sydney, a project that investigates the development of liberalism as a movement in Australian politics. She will be taking up the Gale Asia-Pacific Digital Humanities Oxford Fellowship at the Bodleian Libraries at the University of Oxford, UK in early 2023. This episode is brought to you by N2N's Illuminate App, The iPaaS for Higher Education. Learn more at https://illuminateapp.com/web/higher-education/ Subscribe and listen to more episodes at IlluminateHigherEducation.com Contact Dr. Terhi Nurmikko-Fuller: https://www.linkedin.com/in/terhinurmikkofuller Dr. Terhi Nurmikko-Fuller Instagram: https://www.instagram.com/tmtn.f/ Learn more about The Australian National University: https://www.anu.edu.au
Artificial Intelligence in Construction Håkon helps me better understand how AI works, or how it should work, and its impact on the built environment. Håkon is a serial entrepreneur who aims to disrupt the built environment and make it more sustainable, using the most advanced technologies and paradigms (AI, AR, Linked Data, 5G, etc.). I4 Technology is a new type of consulting company in the Norwegian construction industry started by Håkon Reisvang. The primary focus is to help the industry by implementing new tools such as artificial intelligence and robot technology in construction projects. Subscribe Spotify | Apple | YouTube | Google | Stitcher | Player.fm | iHeart Resources Håkon's LinkedIn profile i4Technology Graphmetrix Nordic Building Room Inrupt/Solid Open AR Cloud
Voice recordings don't play a role in personalized advertising, Instagram says. But then how do the social network's algorithms figure out what we like or dislike?
Transforming Cities into Supercomputers Håkon explains how our cities are being transformed into supercomputers. Håkon is a serial entrepreneur who aims to disrupt the built environment and make it more sustainable, using the most advanced technologies and paradigms (AI, AR, Linked Data, 5G, etc.). I4 Technology is a new type of consulting company in the Norwegian construction industry started by Håkon Reisvang. The primary focus is to help the industry by implementing new tools such as artificial intelligence and robot technology in construction projects. Subscribe Spotify | Apple | YouTube | Google | Stitcher | Player.fm | iHeart Resources Håkon's LinkedIn profile i4Technology Nordic Building Room Inrupt/Solid Open AR Cloud
IPFS is a distributed storage network. Content is accessible through peers located anywhere in the world that might relay information, store it, or both, and IPFS finds data by its content address rather than its location. We discuss the main principles behind IPFS, the current use cases, and how it changes the basic unit economics of some businesses, as well as its interplanetary future.

Chapters:
00:00 Intros
02:25 What is IPFS?
13:58 Three Principles of IPFS
15:01 Content-addressable URIs
18:44 Content Linking in a DAG
21:27 Distributed Hash Table for Discovery
22:48 Pinning Content
30:58 Censorship-resistance
36:38 Used for NFTs
39:04 Used for Video and Music Streaming
42:56 Use for Package Manager
49:14 Use for Machine Learning
54:38 Interplanetary Linked Data
56:37 A Key Building Block
59:22 As a Public Good
01:06:36 Developer Tools on top of IPFS
01:10:07 Shifting Operational Burden
01:18:15 The Interplanetary Future

Links/Resources:
Content Addressing: https://simpleaswater.com/ipfs-cids/
Linked Data: https://ontola.io/what-is-linked-data/
Distributed Hash Tables: https://www.cs.cmu.edu/~dga/15-744/S07/lectures/16-dht.pdf
Napster, Kazaa, Gnutella: https://www.slideshare.net/uschmidt/peertopeer-systems/20-Comparison_Napster_Gnutella_KaZaAType_of
Juan Benet of IPFS: https://research.protocol.ai/authors/juan-benet/
ProtoSchool: https://proto.school/
Marc Andreessen's Blog Archive: https://pmarchive.com/
Left-Pad Debacle: https://www.davidhaney.io/npm-left-pad-have-we-forgotten-how-to-program/
NPM as a private company: https://www.youtube.com/watch?v=MO8hZlgK5zc&t=46s
Transfer Learning: https://builtin.com/data-science/transfer-learning
Deno Programming Language: https://deno.land/
IPLD: https://ipld.io/
Jack Dorsey Regrets Shutting Down the API: https://www.revyuh.com/news/software/developers/twitters-founder-admits-that-shutting-down-the-api-was-worst-thing-we-did-it-affected-users-and-developers/
Datomic: https://www.datomic.com/

===== About "The Technium" =====
The Technium is a weekly podcast discussing the edge of technology and what we can build with it. Each week, Sri and Wil introduce a big idea in the future of computing and extrapolate the effect it will have on the world. Follow us for new videos every week on web3, cryptocurrency, programming languages, machine learning, artificial intelligence, and more!

===== Socials =====
WEBSITE: https://technium.transistor.fm/
SPOTIFY: https://open.spotify.com/show/1ljTFMgTeRQJ69KRWAkBy7
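The content-addressing principle the episode opens with can be shown in miniature: the identifier of a piece of data is derived from its bytes, so any peer holding the same bytes serves the same address, and retrieval is self-verifying. Real IPFS CIDs add multihash and multibase prefixes, chunking, and Merkle-DAG linking; this stdlib-only sketch uses a raw SHA-256 hex digest and an in-memory dict standing in for the network.

```python
import hashlib

def content_address(data: bytes) -> str:
    # The "address" is a pure function of the content itself.
    return hashlib.sha256(data).hexdigest()

store = {}  # toy stand-in for the peer-to-peer network: address -> bytes

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data
    return addr

def get(addr: str) -> bytes:
    data = store[addr]
    # Retrieval is self-verifying: re-hash and compare to the address,
    # so a malicious peer cannot substitute different content.
    assert content_address(data) == addr, "content does not match address"
    return data

addr = put(b"hello interplanetary world")
print(addr)
print(get(addr))
```

Note the economic consequence discussed in the episode: because identical content always has the same address, deduplication and caching come for free, regardless of who originally published the bytes.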
Talk recorded at the 2017 Connected Data London Conference. Connected Data encompasses data acquisition and data management requirements from a range of areas including the Semantic Web, Linked Data, Knowledge Management, Knowledge Representation, and many others. Yet realising the true value of many of these visions, both within the public domain and within organisations, requires the assembly of often huge datasets. Thus far this has proven problematic for humans to achieve within acceptable timeframes, budgets, and quality levels. This panel discussion by Paul Groth, Spyros Kotoulas, Tara Rafaat, Freddy Lecue, and moderator Szymon Klarman tackles these issues.
Information is knowledge, says Rafael Espino in this IT Talks episode. His talk covers knowledge graphs, semantic web, machine learning, intelligent multi-agent systems, RDF storage, and LGP. By listening to this episode you will get the answer to questions like: When does data become information? What is an intelligent agent? How does RDF storage compare to knowledge graphs? And much more.
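The episode's question of how RDF storage compares to knowledge graphs comes down to the triple: RDF reduces every statement to a (subject, predicate, object) tuple, and a knowledge graph is the graph those triples form. The following is a toy sketch with invented data, no SPARQL, indexing, or inference, just to make the data model tangible; production RDF stores build all of those on top of this same shape.

```python
# Toy triple store: every fact is a (subject, predicate, object) tuple.
# The data and "ex:" prefix are invented for the example.
triples = {
    ("ex:alice", "ex:worksFor", "ex:acme"),
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:bob", "ex:worksFor", "ex:acme"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    much like a variable in a SPARQL basic graph pattern."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Who works for acme?
print(sorted(t[0] for t in match(p="ex:worksFor", o="ex:acme")))
# ['ex:alice', 'ex:bob']
```

The point of the exercise: once facts are triples, "data" becomes "information" by traversal, since any answer is just a pattern over the shared graph rather than a join across ad-hoc tables.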
As the interest in, and hype around, Knowledge Graphs is growing, there is also a growing need for sharing experience and best practices around them. Let's talk about definitions, best practices, hype, and reality. What is a Knowledge Graph? How can I use a Knowledge Graph, and how do I start building one? This panel is an opportunity to hear from industry experts using these technologies and approaches to discuss best practices, common pitfalls, and where this space is headed next. --- Katariina Kari Research Engineer, Zalando Tech-Hub Katariina Kari (née Nyberg) is a research engineer at the Zalando Tech-Hub in Helsinki. Katariina holds a Master of Science and a Master of Music and specialises in the semantic web and in guiding the art business into the digital age. At Zalando she is modelling the Fashion Knowledge Graph, a common vocabulary for fashion with which Zalando improves its customer experience. Katariina also consults art institutions on embracing the digital age in their business and seeing its opportunities. Panos Alexopoulos Head of Ontology, Textkernel BV Panos Alexopoulos has been working at the intersection of data, semantics, language, and software for years, and is leading a team at Textkernel developing a large cross-lingual Knowledge Graph for HR and Recruitment. Alexopoulos holds a PhD in Knowledge Engineering and Management from the National Technical University of Athens, and has published 60 papers in international conferences, journals, and books. Sebastian Hellman dbpedia.org Sebastian is a senior member of the "Agile Knowledge Engineering and Semantic Web" (AKSW) research center, focusing on semantic technology research, often in combination with other areas such as machine learning, databases, and natural language processing. Sebastian is head of the "Knowledge Integration and Language Technologies (KILT)" Competence Center at InfAI. He is also the executive director and a board member of the non-profit DBpedia Association. 
Sebastian is also a contributor to various open-source projects and communities such as DBpedia, NLP2RDF, DL-Learner, and OWLG, and has been involved in numerous EU research projects. Natasa Varitimou Information Architect, Thomson Reuters Natasa has been working as a Linked Data architect in banking, life sciences, consumer goods, oil & gas, and EU projects. She believes data will eventually become the strongest asset in any organization, and works with Semantic Web technologies, which she finds excellent for describing the meaning of data, integrating data, and making it interoperable and of high quality. Natasa combines, links, expands, and builds upon vocabularies from various sources to create flexible, lightweight information models easily adaptable to different use cases. She queries these models and their data directly with SPARQL, guarantees data quality based on business rules, creates new information, and defines services that bring together diverse data from different applications easily, with the semantics of the data directly accessible.
This episode is the last of Season Two of our podcast series 'The Edge: Accelerating Higher Education', supported by Salesforce.org. The Edge takes a fresh look at higher education and digital transformation, and this episode is all about recruitment and admissions. What's in this episode? In this episode, Sophie Bailey is in conversation with Dr. Gerd Kortemeyer, Director of Educational Development and Technology at ETH Zurich - one of the world's most prestigious science and technology universities - talking about the concept of Social Linked Data, otherwise known as SOLID. In the context of Higher Education, we talk about students taking ownership of their data and having a learner identity that permeates their whole learning lives. We also cover other systems questions Higher Education leaders need to consider when thinking about evolving their University offering. Continue the conversation online at @podcastedtech and @SalesforceOrg with #EdtechEdge and #edtechpod. People Dr. Gerd Kortemeyer, Director of Educational Development and Technology at Eidgenössische Technische Hochschule Zürich (ETH Zürich) | Twitter: @ETH; @ETH_en Sophie Bailey is the Founder and Presenter of The Edtech Podcast | Twitter: @podcastedtech A huge thank you to everyone for listening in, to our series guests, and to Salesforce.org for making it all possible. We look forward to planning our next series over the Summer to land with you in the Autumn or Fall of 2021. Show Notes and References Check out https://theedtechpodcast.com/edtechpodcast for the full show notes Tell us your story We'd love to hear your thoughts. Record a quick free voicemail via SpeakPipe for inclusion in the next episode. Or you can post your thoughts or follow-on links via Twitter @podcastedtech or via The Edtech Podcast Facebook page or Instagram.
Computational linguistics deals with the automatic processing of language: computers automatically analyze linguistic data. It is the discipline behind the autocorrection of the messages we type on our phones, our searches on Google, and our voice interactions with our cars. Marco Passarotti, professor of Computational Linguistics, discusses the field.
Whether we call it Semantic Web or Linked Data, Tim Berners-Lee's vision never really caught on among users and developers. Although part of this vision is about decentralization, which a few people are working on, Semantic Web technology remains largely underutilized even by them. In this panel, we will explore how the Semantic Web and decentralization can benefit each other, bringing together people from both communities and exploring questions such as: Is the Semantic Web technological stack really as complex as it is perceived to be? How can it be made more accessible, and align better with today's realities in software development? What are the issues facing people working in decentralization, and how could Semantic Web technology provide solutions? What about sustainability? How can efforts aiming to provide services to the public at large find a way to sustain themselves, navigating a challenging business landscape? Andre Garzia from Mozilla, Sebastian Hellmann from DBpedia, Ruben Verborgh from Ghent University, moderated by Jonathan Holtby from Hub of All Things.
Linked Data and the Semantic Web - Interview with Gabriele Antonelli (Spazio Dati, Gruppo Cerved) - 15 by Web Radio Activa
Phil Ritchie is the Chief Technology Officer at Vistatec, where he directs all Language Technology and Research and Development activities. Phil has a Bachelor of Science degree and 20 years of industry experience. He is a frequent speaker at industry conferences and events and a founding Industrial Partner of the ADAPT Research Centre, where he serves as Chairman of its Industrial Advisory Board. Phil has been a partner in two highly successful European Commission projects and a member of several Technical Committees and Standards Groups, including the W3C; he has chaired the ITS 2.0 Interest Group, contributed to Multilingual Web - Language Technologies and Linked Data for Language Technologies, co-chaired the W3C RAX Group, and serves on the XLIFF and XLIFF OMOS Technical Committees. Phil is the lead architect of the open-source Ocelot XLIFF Editor and the JLIFF Graph Tools library, and has been known to write the odd line of code when he has the opportunity!
Linked data is full of promises, but its benefits have mostly been reserved for experts. If more people and organisations are able to link their data, it will be more visible, discoverable and easier to combine with other data across the web - for example, combining weather data with transport data when the weather affects our journey time. ODI Members and software company OpenDataSoft will describe why it is important that more people can link their data together, and the impact this can have on the usage of their data. They will also introduce their new tool that helps data producers with little to no background knowledge to link data more easily, improving its discoverability and use. The talk may be of interest to anybody working with data who wants to improve its quality, visibility or discoverability. About the speaker: Benjamin Moreau is a PhD student at the LS2N Digital Sciences Lab in Nantes while also working in industry at OpenDataSoft. His field of research is the semantic web and, more particularly, controlling usage in the web of data.
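The weather-plus-transport example above can be sketched in a few lines: two independently published datasets become combinable the moment they share identifiers (here, a station URI). All data and names below are made up for illustration.

```python
# Two independent "published" datasets, linked by a shared station URI.
weather = {
    "ex:station/paddington": {"condition": "heavy rain"},
    "ex:station/reading": {"condition": "clear"},
}

journeys = [
    {"from": "ex:station/paddington", "to": "ex:station/reading",
     "scheduled_minutes": 25},
]

def enrich(journey):
    """Attach departure-station weather to a journey via the shared URI."""
    j = dict(journey)
    j["departure_weather"] = weather[journey["from"]]["condition"]
    return j

print(enrich(journeys[0])["departure_weather"])
```

Without the shared identifier, combining these datasets would require fuzzy matching on station names; with it, the join is trivial - which is the practical promise of linked data.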
Simpleton says, "We go a bit geeky on this one, but in a way that should be very accessible to most anyone." Our guests on this episode are Josh Wilson and Gabriel Viganotti. Both are front-end developers for MaidSafe, working with Resource Description Framework (RDF) in a number of forms, to enhance the uses of the SAFE Network. We really get down to what RDF is, how it relates to Linked Data and the Socially Linked Data (SOLID) project, and other use cases that are creating a real revolution in how we manage data, making it both more useful as well as returning control of users' data to users. Your host learned a lot. Hope you find it useful. MUSIC Music for this episode: *Safe Crossroads Beta*, an original piece composed and performed by Nicholas Koteskey of Two Faced Heroes LINKS safenetwork.tech - https://safenetwork.tech MaidSafe - http://maidsafe.net SAFE Network School series - https://safecrossroads.net/safe-network-school/ SAFE Network Forum - https://safenetforum.org Proposal for Public Name System using RDF on the SAFE Network - https://github.com/joshuef/rfcs/blob/PnsAndResolveableMap/text/0000-RDF-for-public-name-resolution/0000-RDF-for-public-name-resolution.md
Semantic technologies - based on advanced statistics, data mining, machine learning and knowledge management - have existed for years, but are gaining much more momentum given the growing need for efficient information management (IM). Better application of semantic technologies to linked data, combined with the urgent need for better data-sharing, may well be the next disruptive wave for efficient information handling in healthcare and life sciences. Drawing on the experience of three respected KOLs (key opinion leaders) in the exciting field of Semantic Technology, this webinar episode will present some dynamic and forward-thinking perspectives on harnessing the true value of Big Data within the Life Sciences industry and, ultimately, how we can translate this knowledge into effective new therapeutics for mankind. Sponsored by Ontoforce. If you'd like to view the original webinar recording then you can do so following the link below: https://www.ddw-online.com/webinars/p321000-disruptive-change-by-linked-data-and-semantic-technologies-in-healthcare-and-life-sciences.html If you'd like to view our other webinars then you can do so below: http://www.ddw-online.com/webinars/
Is linked data an appropriate technology for implementing an archive’s catalogue? Dr Jean-Luc Cochard from the Swiss Federal Archives presents the results of two studies conducted to explore the potential of linked data in supporting archival information systems. The Big Ideas talks series is supported by the Friends of The National Archives.
6/14/18: Evil Twin; DNS-Linked Data Theft; Foscam Vulnerabilities; Internet Weather | AT&T ThreatTraq
Teodora Petkova, a philologist fascinated by the metamorphoses of text on the web, discusses the Semantic Web and Linked Data. What is the Semantic Web, is it here already, and how should authors begin working with it?
We often associate open data with the publication of data by governments and businesses. But openly licensed data is also being collected, curated and maintained by a variety of communities across the web. Whether you're interested in music or maps, comics or comets, there are ways that we can all contribute directly to the creation of some unique open datasets. In this talk I'll be exploring some of those projects to highlight their work, how they're being used, and what they can teach us about building a sustainable open data commons. About Leigh Dodds: Leigh is an experienced software engineer, product manager and technical consultant. He spent 10 years working in the publishing industry dealing with data integration and management issues. As CTO of Publishing Technology’s scholarly division, Leigh was responsible for designing an innovative publishing framework based on semantic web technology. On moving to Talis Group in 2011, Leigh looked after their “Data as a Service” products, overseeing product development and launching their Linked Data consulting business.
As he did last year, Konrad visited the WikiCite 2017 event, which seeks "...to design a central bibliographic repository, as well as tools and strategies to improve information quality and verifiability in Wikimedia projects." This year he took the opportunity to do a couple of interviews with participants. This interview is with Jodi Schneider, a researcher in Linked Data, Argumentation, Scholarly Communication and Computer-Supported Cooperative Work (CSCW), currently working at the School of Information Sciences of the University of Illinois at Urbana-Champaign. She brings quite a bit of experience from biomedical citation networks to the WikiCite table. Enjoy!
In this episode we feature an interview with Steve Baskauf, senior lecturer in biological sciences at Vanderbilt University. Steve coordinates the introductory biological sciences labs, trains and mentors the undergraduate and graduate student teaching assistants for those labs, and designs and assesses inquiry-based lab curricula. However, this interview focuses on another aspect of his work at Vanderbilt: biodiversity informatics. Steve has developed Bioimages, an online image database with over 10,000 annotated plant and ecosystem images, and he has created mobile-friendly tree tours of the Vanderbilt campus. We talked with Steve about the semantic web, linked data, and the challenges and opportunities of creating and using machine-readable datasets. Links: • Steve Baskauf’s website, https://my.vanderbilt.edu/baskauf/ • @baskaufs on Twitter, https://twitter.com/baskaufs • @vutrees on Instagram, https://www.instagram.com/vutrees/ • Vanderbilt Arboretum tree tours, http://vanderbilt.edu/trees/tours/ • Bioimages website, http://bioimages.vanderbilt.edu/ • Linked Data and Semantic Web working group at the Vanderbilt Library, https://heardlibrary.github.io/semantic-web/
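A "machine-readable dataset" of the kind discussed here often takes the shape of JSON-LD: ordinary JSON plus an "@context" that maps short keys to vocabulary URIs, so other tools know what the fields mean. The sketch below is illustrative only; the field names and URIs are hypothetical and not Bioimages' actual schema.

```python
import json

# A toy image record in the spirit of JSON-LD. The "@context" maps the
# short keys "name" and "depicts" to vocabulary URIs, making the record
# interpretable by machines that have never seen this particular schema.
record = """
{
  "@context": {
    "name": "http://purl.org/dc/terms/title",
    "depicts": "http://example.org/vocab/depicts"
  },
  "@id": "http://example.org/image/1234",
  "name": "Quercus alba, Vanderbilt campus",
  "depicts": "http://example.org/taxon/Quercus_alba"
}
"""

data = json.loads(record)
print(data["@id"])
```

Because the record is plain JSON, any tool can parse it; because of the context, a linked-data-aware tool can additionally treat "name" as a Dublin Core title and follow the "depicts" URI to a taxon description.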
Donna Kurtz talks about working with Sebastian over a number of years on various Linked Cultural Heritage projects.
In this episode, we feature two interviews conducted by Cliff Anderson, the Director of Scholarly Communications at the Vanderbilt library. Both interviews focus on Neo4j, an open source platform that can be used to visualize and analyze data and connections among data. Cliff interviews his Vanderbilt library colleague Suellen Stringer-Hye, Linked Data and Semantic Web Coordinator. Suellen has worked with a number of faculty members and students here at Vanderbilt, helping them use Neo4j in their research. In the interview, she talks about some of those projects and how a database tool like Neo4j can be easier to use than one might think. In the second interview, Cliff interviews Michael Hunger, who handles developer relations for Neo Technology, the company that has developed Neo4j. Michael shares a few more examples of how Neo4j has been used and how it supports collaborative data visualization and analysis. Links: • Suellen Stringer-Hye’s staff page: http://www.library.vanderbilt.edu/staffmember.php?staff_id=140 • @suellenshye on Twitter: https://twitter.com/suellenshye • Getting Started with GraphGists: http://heardlibrary.github.io/workshops/edtech/2016/06/01/graphgists.html • Michael Hunger on the Neo4j community: https://neo4j.com/blog/contributor/michael-hunger/ • @mesirii on Twitter: https://twitter.com/mesirii • Neo4j GraphGist site: https://neo4j.com/graphgists/
Mike Atherton. This is not your usual UX talk. It’s a little about information architecture, a bit about content strategy, but mostly it’s about how we can use information floating around the web to build content-rich products, and how we should try harder to weave our work into the fabric of the network. Linked Open Data lets us expose, share, and connect knowledge. A single, extensible network of information stretching across an increasingly semantic web, making our products more findable, and our content more reusable. It’s happening now. Used by the BBC to build their Music pages, the New York Times to build topic pages, and by Google and Facebook to build their knowledge graphs. Where once we linked up pages, now we need to link up the underlying information and tell connected stories with the whole web.
Carolin Rindfleisch, (Faculty of Music, University of Oxford), gives a talk for the 2016 Digital Humanities at Oxford Summer School. Richard Wagner’s music, and particularly his composition with ‘leitmotifs’ (musical entities with a characteristic identity, that are used to construct musical form and to convey musical meaning) have been interpreted differently in a wide variety of academic as well as audience-aimed introductory literature. A comprehensive analysis of these interpretations can help us find out how Wagner’s music-dramas have been heard, seen and understood in different historical and cultural environments. Using this example, the lecture presents how methods and techniques of Linked Data and Semantic Web can facilitate a large-scale reception study that can deal with a wide range of source material and still compare interpretations in detail. It will discuss different ways of digitally enhancing the study of the reception and interpretation of artworks, and address the question of how we can reconcile these methods with more traditional methodologies in the Humanities. It will focus particularly on presenting the design of an ontology that not only enables the linking and structuring of digitised source material, but also enables the systematic representation and comparison of the interpretations contained in the sources.
Oct. 23, 2015. A discussion on the different aspects of and issues surrounding linked data from theory to practical applications at the annual meeting of the Potomac Technical Processing Librarians. For transcript, captions, and more information, visit http://www.loc.gov/today/cyberlc/feature_wdesc.php?rec=7255
A triple set of interviews with Nicole Fenton, Alastair Somerville and Mike Atherton recorded at UXLx 2015. Nicole is an editor, writer, content strategist. Alastair is a specialist in sensory cognition and accessibility. Mike is a user experience designer and information architect.
Case Study Radio is a podcast series produced by SemanticWeb.com that focuses on case studies and successful implementations of Semantic Web standards, Linked Data, and other Semantic Technologies. Episode 1 focuses on Linked Data and is sponsored by Zepheira.
Ian Holt, Head of Developer Outreach at Ordnance Survey, explained and demonstrated what difference the addition of linked data sets has made to the organisation’s open map data. Using examples and case studies, he showed what linked data makes possible for publishers and app developers, and what it might mean for those of us who use maps every day.
Faculty of Mathematics, Computer Science and Statistics - Digital Dissertations of the LMU - Part 01/02
Relational learning is concerned with learning from data where information is primarily represented in the form of relations between entities. In recent years, this branch of machine learning has become increasingly important, as relational data is generated in an unprecedented amount and has become ubiquitous in many fields of application such as bioinformatics, artificial intelligence and social network analysis. However, relational learning is a very challenging task, due to the network structure and the high dimensionality of relational data. In this thesis we propose that tensor factorization can be the basis for scalable solutions for learning from relational data and present novel tensor factorization algorithms that are particularly suited for this task. In the first part of the thesis, we present the RESCAL model -- a novel tensor factorization for relational learning -- and discuss its capabilities for exploiting the idiosyncratic properties of relational data. In particular, we show that, unlike existing tensor factorizations, our proposed method is capable of exploiting contextual information that is more distant in the relational graph. Furthermore, we present an efficient algorithm for computing the factorization. We show that our method achieves better or on-par results on common benchmark data sets, when compared to current state-of-the-art relational learning methods, while being significantly faster to compute. In the second part of the thesis, we focus on large-scale relational learning and its applications to Linked Data. By exploiting the inherent sparsity of relational data, an efficient computation of RESCAL can scale up to the size of large knowledge bases, consisting of millions of entities, hundreds of relations and billions of known facts.
We show this analytically via a thorough analysis of the runtime and memory complexity of the algorithm as well as experimentally via the factorization of the YAGO2 core ontology and the prediction of relationships in this large knowledge base on a single desktop computer. Furthermore, we derive a new procedure to reduce the runtime complexity for regularized factorizations from O(r^5) to O(r^3) -- where r denotes the number of latent components of the factorization -- by exploiting special properties of the factorization. We also present an efficient method for including attributes of entities in the factorization through a novel coupled tensor-matrix factorization. Experimentally, we show that RESCAL allows us to approach several relational learning tasks that are important to Linked Data. In the third part of this thesis, we focus on the theoretical analysis of learning with tensor factorizations. Although tensor factorizations have become increasingly popular for solving machine learning tasks on various forms of structured data, there exist only very few theoretical results on the generalization abilities of these methods. Here, we present the first known generalization error bounds for tensor factorizations. To derive these bounds, we extend known bounds for matrix factorizations to the tensor case. Furthermore, we analyze how these bounds behave for learning on over- and understructured representations, for instance, when matrix factorizations are applied to tensor data. In the course of deriving generalization bounds, we also discuss the tensor product as a principled way to represent structured data in vector spaces for machine learning tasks. In addition, we evaluate our theoretical discussion with experiments on synthetic data, which support our analysis.
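The bilinear structure at the heart of RESCAL can be illustrated in a few lines. Each entity i is a latent vector a_i (a row of the factor matrix A), each relation k is a small r x r matrix R_k, and the plausibility of the triple (i, k, j) is the bilinear form a_i^T R_k a_j. The toy numbers below are hand-picked for illustration, not the output of the thesis's actual factorization algorithms, which learn A and the R_k by minimizing reconstruction error over the tensor.

```python
# Pure-Python sketch of RESCAL's scoring function with r = 2 latent
# components. A holds one embedding row per entity; R is one relation's
# latent interaction matrix. Toy values, not a trained model.

A = [
    [1.0, 0.0],   # entity 0
    [0.0, 1.0],   # entity 1
]
R = [
    [0.0, 2.0],
    [0.0, 0.0],
]

def score(i, k_matrix, j):
    """Bilinear score a_i^T R_k a_j for the triple (i, k, j)."""
    r = len(A[i])
    return sum(A[i][p] * k_matrix[p][q] * A[j][q]
               for p in range(r) for q in range(r))

print(score(0, R, 1))  # entity 0 plausibly relates to entity 1
print(score(1, R, 0))  # but not the other way round
```

Note that because R_k need not be symmetric, the model naturally captures directed relations, which is one of the idiosyncratic properties of relational data the thesis emphasizes.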
To learn all about Linked Data, you can consult this work: http://www.uni-koblenz-landau.de/campus-koblenz/fb4/west/teaching/ws1213/seminar-web-science/linked-data.pdf
Unsupported Operation Episode 82
Misc
- JavaBlogs.com closed by Atlassian.
- Latest Dart VM beats Java 64-bit VM in DeltaBlue benchmark
- Java version number scheme officially updated to make managing all these security fixes easier
- Newly minted just in time for more unmaintainable build messes: Gradle 1.6? Progress being made on the Android build system; Facebook builds their own, Buck
- IntelliJ IDEA 12.1.3 - moar upgrades!
- YourKit Java Profiler 12.0.5
- Typesafe release Typesafe Activator - part of the Typesafe Platform
- JaCoCoverage: Java 7 code coverage for NetBeans
- Before Google I/O, Square announce Seven Days of Open Source and release (thus far) OkHttp, Dagger, MimeCraft, ProtoParser, JavaWriter, Robolectric 2.0, and IntelliJ plugins - a lot focused around Android.
- Java gets a REPL - scary, and awesome, but very, very scary.
- Sonar Groovy
- Cassaforte - Clojure client API for Apache Cassandra 1.2+
Apache
- Apache Camel 2.11 - new Camel CMIS module for integration with CMS systems, new camel-couchdb and camel-elasticsearch modules.
- Three major defects found in Tomcat (groan, yes again) - chunked transfer encoding extension size is not limited, session fixation with FORM authenticator, request mix-up if an AsyncListener method throws a RuntimeException. Tomcat 6.0.37 released. 7.something too.
- Apache Curator 2.0.0-incubator - a ZooKeeper keeper - utilities making ZooKeeper easier to use.
- Apache Gora 0.3 - in-memory and persistent Big Data storage for Hadoop
- Apache HttpComponents HttpCore 4.3-beta2
- Apache Buildr 1.4.12
- Apache Giraph 1.0 - first release out of incubation - Apache Giraph is a scalable and distributed iterative graph processing system inspired by BSP (bulk synchronous parallel) and Google's Pregel. Giraph distinguishes itself from those projects by being open-source, running on Hadoop infrastructure, and going beyond the Pregel model with features such as master computation, sharded aggregators, out-of-core support, a no-single-point-of-failure design, and more.
- Lucene and Solr 4.3
- Commons Codec 1.8
- Apache Marmotta 3.0 - incubating - Apache Marmotta is an extensible and modular implementation of a Linked Data Platform and contains a server component as well as many useful libraries for accessing Linked Data resources. Apache Marmotta is a continuation of the Linked Media Framework (LMF) originally developed by Salzburg Research, but considerably improved with respect to modularity, reliability, performance and also licensing. Since the last LMF release was the 2.x series, Apache Marmotta starts with the version number 3.0.0-incubating.
- OpenJPA 2.2.2
Other news
- Greg's favourite NoSQL database gets incremental backup
This Week on developerWorks has a new home page at: http://ibm.com/developerworks/blog/twodw
Barb Wetmore introduces the new Big Data zone: http://ibm.com/developerworks/bigdata
Linked Data Interfaces by Arthur Ryman: http://ibm.com/developerworks/rational/library/linked-data-oslc-resource-shapes/
Simplify backup and recovery with IBM DB2 Merge Backup for Linux, UNIX, and Windows by Anson Kokkat and Soid Quintero: http://www.ibm.com/developerworks/data/library/techarticle/dm-1303merge/
A flexible and scalable WebSphere MQ topology pattern: Part 1: Building a multi-purpose WebSphere MQ infrastructure with scalability and high availability by Peter Broadhurst: http://ibm.com/developerworks/websphere/library/techarticles/1303_broadhurst/1303_broadhurst.html
Another book on the Semantic Web and Linked Data! It is available in PDF format: http://www.markwatson.com/opencontent_data/book_java.pdf (CC BY-NC-ND)
Martin Malmsten is a developer/architect/technical lead at the National Library of Sweden. He works with many aspects of LIBRIS[1], the Swedish union catalog. He created the linked data interface to LIBRIS[2] to experiment with RDF (Resource Description Framework) and linking to other datasets. Malmsten is interested in how libraries continually re-link vast datasets. He believes that making information available to anyone is what libraries are about, and he supports linking to information using technologies created/used outside the library community. Miriam Safstrom is the metadata coordinator for the National Library of Sweden and works closely with Malmsten. For captions, transcripts, and more information visit http://www.loc.gov/today/cyberlc/feature_wdesc.php?rec=5521.
Linked Data and RDA: Looking at Next-Generation Cataloging will discuss how the Resource Description Framework (RDF) and Linked Data, two hot topics in the field of information modeling, have the potential to change how library cataloging is done and how discovery systems operate. The new Resource Description and Access (RDA) standard that has been the subject of much discussion in the library cataloging community in the last few years will be used as an example of modeling, sharing, and consuming information as Linked Data. Presentation slides and more information may be found at http://digitaldiscussions.web.unc.edu/session-august-2012-linked-data-and-rda-looking-at-next-generation-cataloging/.
The Australian War Memorial is connecting and enriching online archives and collections toward building a platform for telling history. Through Drupal 7 and Linked Data, the Memorial intends to develop tools that designers, researchers and historians can use to help find new ways of building historical narratives. During this session we will demonstrate some early prototypes and experiments, key uses of Linked Data, and practical publishing tools, and discuss how this work is unfolding inside one of Australia’s major collecting institutions. Adam Bell leads the web production team at the Australian War Memorial, where he works with curators and historians to publish the Memorial’s vast archives and collections online. He has a background as an artist, cultural worker, teacher and printmaker and plays in a rock n roll band. Follow Adam on Twitter: @bumphead David Peterson has been pushing at the boundaries of Web development since 1995; that combined with a background in wildlife cinematography brings fresh insight into what can be a geeky sort of space. He has built a number of high profile sites for the ABC, Australian science groups and many others. David works as a consultant with PreviousNext and lives way down south in cool Tasmania - regularly breathing on his fingers just to tap away at the keyboard. He is busy building Web apps with Open Source toolkits utilising Java, PHP, Python, Linked Data and the almighty Drupal. He enjoys exploring deep into the guts of the Semantic Web and Linked Data to discover new connections and visualisations that help empower story tellers. Follow David on Twitter: @davidseth Licensed as Creative Commons Attribution NonCommercial ShareAlike 3.0 Unported license (http://creativecommons.org/licenses/by-nc-sa/3.0/).
OCLC Senior Research Scientist Ralph LeVan dives deeper beneath the surface to describe the open source technologies he uses to expose records in text databases as Linked Data. He also talks about Java, Servlet Filters, XSLT, and SRU, plus explains how anyone can use his framework to make their database content available as Linked Data.
Jon Voss will discuss Civil War Data 150 ("CWD150"), a collaborative project of the Archives of Michigan, the Internet Archive, and Freebase. CWD150 seeks to link Civil War archives and data from separate state and national sources in an open community-maintained database (Freebase), and create interactive web applications to help crowdsource the data linking. The project presents research questions of particular interest to archivists regarding the use of strong identifiers and shared ontologies, as well as uses for shared metadata in the context of the Semantic Web.
In this webinar, OCLC Senior Research Scientist Ralph LeVan explains what Linked Data is about and how OCLC produces it, using examples from VIAF (The Virtual International Authority File). He also talks about topics such as Real World Objects, Generic Documents, Content Negotiation and RDF.
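Content negotiation, one of the topics in this webinar, is the mechanism that lets a single Linked Data URI serve a human-readable page to a browser and RDF to a machine, chosen from the HTTP Accept header. The sketch below illustrates the idea only: the media types are real, but the dispatch logic is deliberately simplified (real negotiation also weighs q-values) and the example data is made up.

```python
# One URI, two representations: HTML for people, Turtle (RDF) for
# machines. The server picks whichever offered media type the client
# lists first in its Accept header. Simplified for illustration.

REPRESENTATIONS = {
    "text/html": "<html>...human-readable page...</html>",
    "text/turtle": "<http://example.org/id/x> a <http://example.org/Thing> .",
}

def negotiate(accept_header):
    """Return (media type, body) for the first acceptable offered type."""
    accepted = [part.split(";")[0].strip()
                for part in accept_header.split(",")]
    for media_type in accepted:
        if media_type in REPRESENTATIONS:
            return media_type, REPRESENTATIONS[media_type]
    return "text/html", REPRESENTATIONS["text/html"]  # default

mt, body = negotiate("text/turtle, */*;q=0.1")
print(mt)  # a semantic-web client gets Turtle back
```

This is how "Real World Objects" and "Generic Documents" coexist at one identifier: the URI names the thing, and negotiation decides which document about it to return.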
Hear how Drupal, Semantic MediaWiki and other bleeding edge tech were enlisted along with pixie dust, FOAF, RDF, OWL, SPARQL, Linked Data (basically all the Semantic Web stuff) to build a distributed social network. The focus will be not on evangelism (I don’t really care about that) but how disparate open source platforms can talk and work together. This stuff actually works and makes development more fluid. These technologies make local development easier, but when it is time to broaden your scope, classic search is still king. How can you leverage this? Newcomers such as Yahoo Searchmonkey can play an important role in the creation of a truly distributed information system. David Peterson has been a web developer since 1995. He works way up north in the tropics of Townsville, about as far from any tech as possible. Currently he is Head of Research at BoaB interactive and is working hard to kickstart the Semantic Web down under. Not only that, but he is an Advisory Committee representative to the W3C. Wow. His wonderful family, making lovely photographs and searching for the perfect espresso keeps him happy. Licensed as Creative Commons Attribution-Share Alike 3.0 (http://creativecommons.org/licenses/by-sa/3.0/).