Welcome to the Connected Data Podcast, powered by Connected Data World, the leading conference for those who use the relationships, meaning and context in data to achieve great things. Stay tuned and dive into our diverse content. Learn & share!
#CDW21 kicks off with a live discussion among our top-notch contributors and Program Committee members, and you're all invited! Join us as we have a sneak peek through the CDW21 program, and discuss the Connected Data landscape. The CDW21 Program Committee members will go through the 50+ sessions and 70+ speakers, and talk about:

- The Connected Data landscape: Knowledge Graphs, Graph Databases, Graph Analytics, Graph Data Science & Semantic Technology
- Topics, speakers and talks that piqued our interest
- Our own work in the domain and how it cross-cuts #CDW21
- Community chat and Ask Me Anything
- More in-depth topics as time permits:
  - Hiring a team for building knowledge graphs: required roles and skills, what can be taught?
  - I want a knowledge graph! What next? The process of starting to build a knowledge graph for an organisation: assessment of need, use cases, support needed from management etc.
  - Triple Stores vs Labelled Property Graphs: it's not either-or, it's both and more!

With an all-star Program Committee and lineup, this will be a tour de force in Connected Data.
ABOUT THIS TALK

What does graph have to do with machine learning? A lot, actually. And it goes both ways: machine learning can help bootstrap and populate knowledge graphs, and the information contained in graphs can boost the efficiency of machine learning approaches.

Machine learning, and its deep learning subdomain, make a great match for graphs. Machine learning on graphs is still a nascent technology, but one which is full of promise. Amazon, Alibaba, Apple, Facebook and Twitter are just some of the organizations using this in production, and advancing the state of the art. More than 25% of the research published in top AI conferences is graph-related.

Domain knowledge can effectively help a deep learning system bootstrap its knowledge, by encoding primitives instead of forcing the model to learn these from scratch. Machine learning can effectively help the semantic modeling process needed to construct knowledge graphs, and consequently populate them with information.

Key Topics
- What can knowledge-based technologies do for Deep Learning?
- What is Graph AI, how does it work, what can it do?
- What's next? What are the roadblocks and opportunities?

Target Audience
- Machine Learning Practitioners
- Data Scientists
- Data Modelers
- CxOs
- Investors

Goals
- Explore the interplay between machine learning and knowledge-based technologies
- Answer questions that matter:
  - How can those approaches complement one another, and what would that unlock?
  - What is the current state of the art, and how and where is it used in the wild?
  - What are the next milestones / roadblocks?
  - Where are the opportunities for investment?

Session Outline
- Introduction
  - Meet and Greet
  - Setting the stage
- Knowledge Graphs, meet Machine Learning
  - How can machine learning help create and populate knowledge graphs?
  - What kind of problems can we solve by using it?
  - Where is this used in production?
  - What is the current state of the art in knowledge graph bootstrapping and population?
  - What are the major roadblocks / goals, how could we address them, and what would that enable?
  - Who are some key players to keep an eye on?
- Graph Machine Learning
  - What is special about Graph Machine Learning?
  - What kind of problems can we solve by using it?
  - Where is it used in production?
  - What is the current state of the art?
  - What are the major roadblocks / goals, how could we address them, and what would that enable?
  - Who are some key players to keep an eye on?

Format
- Extended panel
- Expert discussion, coordinated by moderator
- 2 hours running time, including modules of expert discussion interspersed with modules of audience Q&A / interaction

Level
Intermediate - Advanced

Prerequisite Knowledge
- Basic understanding of Knowledge Graphs
- Basic understanding of Machine Learning / Deep Learning
Abstract: Personalized medicine. Predictive call centers. Digital twins for IoT. Predictive supply chain management, and domain-specific Q&A applications. These are just a few of the AI-driven applications organizations across a broad range of industries are deploying. Graph databases and Knowledge Graphs are now viewed as a must-have by enterprises serious about leveraging AI and predictive analytics within their organization. See how Franz Inc. is helping organizations deploy novel Entity-Event Knowledge Graph solutions to gain a holistic view of customers, patients, students or other important entities, and the ability to discover deep connections, uncover new patterns and attain explainable results. Description: To support ubiquitous AI, a Knowledge Graph system will have to fuse and integrate data, not just in representation, but in context (ontologies, metadata, domain knowledge, terminology systems), and time (temporal relationships between components of data). Building from 'Entities' (e.g. Customers, Patients, Bill of Materials) requires a new data model approach that unifies typical enterprise data with knowledge bases such as industry terms and other domain knowledge. Entity-Event Knowledge Graphs are about connecting the many dots, from different contexts and throughout time, to support and recommend industry-specific solutions that can take into account all the subtle differences and nuances of entities and their relevant interactions to deliver insights and drive growth. The Entity-Event Data Model we present puts core entities of interest at the center and then collects several layers of knowledge related to the entity as 'Events'. Franz Inc. is working with organizations across a broad range of industries to deploy large-scale, high-performance Entity-Event Knowledge Graphs that serve as the foundation for AI-driven applications for personalized medicine, predictive call centers, digital twins for IoT, predictive supply chain management and domain-specific Q&A applications, just to name a few. During this presentation we will explain and demonstrate how Entity-Event Knowledge Graphs are the future of AI in the Enterprise.
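To make the shape of an entity-event model concrete, here is a minimal sketch (our illustration, not Franz Inc.'s code; the ex: vocabulary and property names are invented assumptions) of a core entity surrounded by time-anchored events, expressed as RDF triples with Python's rdflib:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")  # hypothetical vocabulary for illustration
g = Graph()

# The core entity of interest sits at the centre of the model
g.add((EX.patient1, RDF.type, EX.Patient))

# Knowledge about the entity is collected as 'Events', each anchored in time
g.add((EX.event42, RDF.type, EX.DiagnosisEvent))
g.add((EX.event42, EX.aboutEntity, EX.patient1))
g.add((EX.event42, EX.occurredAt,
       Literal("2021-03-01T09:30:00", datatype=XSD.dateTime)))
g.add((EX.event42, EX.diagnosis, Literal("type 2 diabetes")))

# Retrieve the entity's events in temporal order
query = """
PREFIX ex: <http://example.org/>
SELECT ?event ?when WHERE {
  ?event ex:aboutEntity ex:patient1 ;
         ex:occurredAt ?when .
} ORDER BY ?when
"""
for row in g.query(query):
    print(row.event, row.when)
```

The point of the layout is that every new layer of knowledge attaches to the entity as another event, so temporal queries across contexts fall out of the model naturally.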
Talk recorded at the 2017 Connected Data London Conference. Connected Data encompasses data acquisition and data management requirements from a range of areas including the Semantic Web, Linked Data, Knowledge Management, Knowledge Representation and many others. Yet realising the true value of many of these visions, both within the public domain and within organisations, requires the assembly of often huge datasets. Thus far this has proven problematic for humans to achieve within acceptable timeframes, budgets and quality levels. This panel discussion by Paul Groth, Spyros Kotoulas, Tara Raafat, Freddy Lecue and moderator Szymon Klarman tackles these issues.
"The most important contribution management needs to make in the 21st Century is to increase the productivity of knowledge work and the knowledge worker", said Peter F. Drucker in 1999, and time has proven him right. Even NASA is no exception, as it faces a number of challenges. NASA has hundreds of millions of documents, reports, project data, lessons learned, scientific research, medical analysis, geospatial data, IT logs, and all kinds of other data stored nation-wide. The data is growing in terms of variety, velocity, volume, value and veracity. NASA needs to provide accessibility to engineering data sources, whose visibility is currently limited. To convert data to knowledge a convergence of Knowledge Management, Information Architecture and Data Science is necessary. This is what David Meza, Acting Branch Chief - People Analytics, Sr. Data Scientist at NASA, calls "Knowledge Architecture": the people, processes, and technology of designing, implementing, and applying the intellectual infrastructure of organizations.
As the interest in, and hype around, Knowledge Graphs is growing, there is also a growing need for sharing experience and best practices around them. Let's talk about definitions, best practices, hype, and reality. What is a Knowledge Graph? How can I use a Knowledge Graph, and how do I start building one? This panel is an opportunity to hear from industry experts using these technologies and approaches to discuss best practices, common pitfalls and where this space is headed next.

---

Katariina Kari, Research Engineer, Zalando Tech-Hub

Katariina Kari (née Nyberg) is a research engineer at the Zalando Tech-Hub in Helsinki. Katariina holds a Master of Science and a Master of Music, and is specialised in the semantic web and in guiding the art business into the digital age. At Zalando she is modelling the Fashion Knowledge Graph, a common vocabulary for fashion with which Zalando improves its customer experience. Katariina also consults art institutions on embracing the digital age in their business and seeing its opportunities.

Panos Alexopoulos, Head of Ontology, Textkernel BV

Panos Alexopoulos has been working at the intersection of data, semantics, language and software for years, and is leading a team at Textkernel developing a large cross-lingual Knowledge Graph for HR and Recruitment. Alexopoulos holds a PhD in Knowledge Engineering and Management from the National Technical University of Athens, and has published 60 papers in international conferences, journals and books.

Sebastian Hellmann, dbpedia.org

Sebastian is a senior member of the "Agile Knowledge Engineering and Semantic Web" (AKSW) research center, focusing on semantic technology research, often in combination with other areas such as machine learning, databases, and natural language processing. Sebastian is head of the "Knowledge Integration and Language Technologies (KILT)" Competence Center at InfAI. He is also the executive director and a board member of the non-profit DBpedia Association, a contributor to various open-source projects and communities such as DBpedia, NLP2RDF, DL-Learner and OWLG, and has been involved in numerous EU research projects.

Natasa Varitimou, Information Architect, Thomson Reuters

Natasa has been working as a Linked Data architect in banking, life science, consumer goods, oil & gas and EU projects. She believes data will eventually become the strongest asset in any organization, and works with Semantic Web technologies, which she finds great at describing the meaning of data, integrating data and making it interoperable and of high quality. Natasa combines, links, expands and builds upon vocabularies from various sources to create flexible and lightweight information models easily adaptable to different use cases. She queries these models with their data directly with SPARQL, guarantees data quality based on business rules, creates new information and defines services to bring together diverse data from different applications easily, with the semantics of the data directly accessible.
Can graph technology improve the deployment of humanitarian projects? The goal of what we call "Graphs for Good" at Action Against Hunger is to be more efficient and transparent, which can have a crucial impact on people's lives. Are there common behavioural factors between different projects? Can elements of different resources or projects be related? For example, security incidents in a city could influence the way other projects run there. The use case data comes from a project called Kit For Autonomous Cash Transfer in Humanitarian Emergencies (KACHE), whose goal is to deploy electronic cash transfers in emergency situations when no suitable infrastructure is available. It also offers the opportunity to track transactions in order to better recognise the behaviour of crisis-affected populations, understand the goods distribution network to improve recommendations, and identify the role of culture in transactional patterns, as well as the most required items for every place.
Big Data has transformed the world big time. It led many companies to focus strongly on data analytics, trying to collect and control gigantic amounts of data. After years in the rat race, several of them are slowly realizing that continuously striving to have more data than others is maybe not the most meaningful business objective for everyone. In fact, the data collection craze is steadily killing innovation. In this talk, I will discuss post-Big Data thinking in which data is controlled again by people, outlining the goals and ambitions of the Solid project. Check Ruben's presentation here: https://rubenverborgh.github.io/Connected-Data-London-2019/
AI is transforming the financial media industry, impacting everything from content creation to consumption trends. Childs shares insights into how Dow Jones is reimagining what the news looks like. Learn how Dow Jones' knowledge graph platform – powered by Stardog – enables the company to unify structured and unstructured data from a vast range of news sources and deliver cutting-edge insights for customers and partners globally. Featuring Clancy Childs, former general manager of Dow Jones' knowledge enablement unit, and Mike Grove, Stardog co-founder.
In Knowledge Connexions 2020, we had the honor and the privilege of hosting Hamlet Batista, alongside Dawn Anderson, David Amerland, Jason Barnard, and Andrea Volpini. This great group of people shared their insights on "Knowledge Graphs and SEO: The next chapter".

It is with deep sadness that we have learned that Hamlet Batista passed away in January 2021. Though our encounter was brief, we can only attest to the opinions of everyone who knew him: Hamlet was deeply knowledgeable and a pleasure to work with. We share the insights of this panel on the interplay between semantic technology, SEO, and knowledge graphs with the community, as a tribute to Hamlet Batista's memory.

--- Workshop Description ---

A talk by Dawn Anderson (Bertey), David Amerland (Davidamerland.com), Jason Barnard (Kalicube), Hamlet Batista (RankSense) & Andrea Volpini (Wordlift).

"Knowledge Graph" is an overloaded term. Today Knowledge Graphs are becoming mainstream, and as this happens, more and more people associate Knowledge Graphs with data models, semantics, knowledge management, and ontologies. For many other people, however, Knowledge Graphs still mean Google Search info boxes, panels, SERPs, and SEO (Search Engine Optimization). They are all right. The term Knowledge Graph was introduced by Google to signify the huge improvement that semantic technology brought to its search engine. Over time, the extended search capabilities and components enabled by semantic technology have become synonymous with the term Knowledge Graph. While the term Knowledge Graph has more meanings than this, it's useful to return to the source. The evolution of Knowledge Graph-powered Google search now extends to voice, assimilates information from JSON-LD markup beyond Wikipedia, and advances the state of the art in NLP (Natural Language Processing). Let's explore how this influences, and is influenced by, advances in semantic technology, where the evolution of SEO is headed, and what this means for knowledge graphs at large.
The Financial Industry Business Ontology, FIBO, is a business conceptual model of how all financial instruments, business entities and processes work in the financial industry. FIBO combines existing financial industry data standards with ontological approaches. It has been developed by the EDM Council, a non-profit global association created to advance data management best practices, standards and education. In this podcast, Mike Bennett shares a FIBO perspective on Data Model vs Ontology Development. Slides here: https://www.slideshare.net/ConnectedDataLondon/mike-bennett
Data integration, data interoperation and data quality are major challenges that continue to haunt enterprises. Every enterprise, either by choice or by chance, has created massive silos of data in different formats, with duplications and quality issues. Knowledge graphs have proven to be a viable solution to address the integration and interoperation problem. Semantic technologies in particular provide an intelligent way of creating an abstract layer for the enterprise data model and mapping siloed data to that model, allowing a smooth integration and a common view of the data. Technologies like OWL (Web Ontology Language) and RDF (Resource Description Framework) are the backbone of semantics for knowledge graph implementation. Enterprises use OWL to build an ontology model that creates a common definition for concepts and how they are connected to each other in their specific domain. They then use RDF to create a triple-format representation of their data by mapping it to the ontology. This approach makes their data smart and machine-understandable. But how can enterprises control and validate the quality of this mapped data? Furthermore, how can they use this one abstract representation of data to meet all their different business requirements? Different departments, different LoBs and different business branches all have their own data needs, creating a new challenge to be tackled by the enterprise. In this talk we will look at how SHACL (Shapes Constraint Language), a W3C standard for defining constraints over data, complements the two core semantic technologies OWL and RDF: what are the similarities, the overlaps and the differences. We will talk about how SHACL gives enterprises the power to reuse, customize and validate their data for various scenarios, use cases and business requirements, making the application of semantics even more practical.
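As a taster for the division of labour described above, here is a minimal sketch (our illustration, not from the talk; the ex: vocabulary and the shape are invented) of validating mapped RDF data against a SHACL shape using Python's rdflib and pySHACL libraries:

```python
from rdflib import Graph
from pyshacl import validate  # pip install pyshacl

# Mapped enterprise data in RDF: a customer record missing a required name
data = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:cust1 a ex:Customer ;
    ex:email "jane@example.org" .
""", format="turtle")

# A SHACL shape encoding a department-specific quality requirement:
# every Customer must have exactly one ex:name
shapes = Graph().parse(data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
ex:CustomerShape a sh:NodeShape ;
    sh:targetClass ex:Customer ;
    sh:property [
        sh:path ex:name ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
    ] .
""", format="turtle")

conforms, _, report_text = validate(data, shacl_graph=shapes)
print(conforms)     # False: the customer has no ex:name
print(report_text)  # human-readable validation report
```

Because shapes live outside the ontology, each department can maintain its own shape set over the same abstract data layer, which is exactly the reuse-and-customize angle the talk addresses.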
The generation of knowledge graphs is outpacing the ability to intelligently use the information they contain. Octavian's work is pioneering Graph Artificial Intelligence to provide the brains to make knowledge graphs useful. Our neural networks can take questions and knowledge graphs and return answers. Imagine:
- a Google assistant that reads your own knowledge graph (and actually works)
- a BI tool that reads your business' knowledge graph
- a legal assistant that reads the graph of your case

Taking a neural network approach is important because neural networks deal better with the noise in data and variety in schema. Using neural networks allows people to ask questions of the knowledge graph in their own words, not via code or query languages. Octavian's approach is to develop neural networks that can learn to manipulate graph knowledge into answers. This approach is radically different to using networks to generate graph embeddings. We believe this approach could transform how we interact with databases. Prior knowledge of neural networks is not required, and the talk will include a simple demonstration of how a neural network can use graph data.

About the speaker: Andy Jefferson believes that graphs have the potential to provide both a representation of the world and a technical interface that allows us to develop better AI and to turn it rapidly into useful products. Andy combines expertise in machine learning with experience building and operating distributed software systems and an understanding of the scientific process. Before he worked as a software engineer, Andy was a chemist, and he enjoys using the tensor algebra that he learned in quantum chemistry when working on neural networks.
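To give a flavour of how a neural network can consume graph data (a generic message-passing sketch under our own assumptions, not Octavian's actual architecture), here is a single graph-convolution-style step in plain numpy, where each node updates its representation by aggregating its neighbours' features:

```python
import numpy as np

# Tiny graph: 4 nodes, edges as a symmetric adjacency matrix
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

X = np.random.rand(4, 8)  # initial node features (e.g. learned embeddings)
W = np.random.rand(8, 8)  # learnable weight matrix

# Add self-loops and normalise by node degree, as in a basic GCN layer
A_hat = A + np.eye(4)
D_inv = np.diag(1.0 / A_hat.sum(axis=1))

# One message-passing step: aggregate neighbour features, transform, apply ReLU
H = np.maximum(D_inv @ A_hat @ X @ W, 0.0)
print(H.shape)  # (4, 8): an updated representation per node
```

Stacking several such steps lets information flow along multi-hop paths in the graph, which is what allows a network to "read" graph structure rather than just isolated node embeddings.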
In recent years graphs have been increasingly adopted in financial services for everything from fraud detection to Know Your Customer (KYC) to regulatory requirements. At the same time, Environmental Social Governance (ESG) investing has become the fastest growing segment of financial services. In this session James Phare, Neural Alpha CEO and Founder, discusses how many of these historical graph techniques are now being enhanced for the era of sustainable investing. Going beyond definitions, let's identify use cases, discuss news and trends, and wrap up with an ask me anything session. Graph databases have been rapidly adopted within financial services since the 2008 financial crisis, as regulatory drivers have mandated banks, asset managers and others to ramp up efforts to understand their customers better (KYC), report to regulators more precisely in areas such as data lineage, and prevent fraud and sanctions breaches. In recent years there has also been a trend towards a 'greening' of financial services, driven partly by regulators but also by a new generation of financial consumers demanding investments that no longer damage the environment or society at large. Environmental Social Governance (ESG) investing is now the fastest growing segment of the fund management industry, with the number of funds growing 80% since 2012 to now exceed $1.8tn in assets. In this talk James discusses what he's been up to recently in his day job as CEO of Neural Alpha, a sustainable fintech based in London. He will give an overview of why he sees graphs as an essential part of the technologist's toolkit for making the financial industry more sustainable, some of the challenges in large-scale, graph-centric data integration projects, and some of the unique analyses possible from leveraging the power of the graph. James will give a deep dive into how the team are using graph technology to develop tools for the financial industry to more thoroughly screen investments and develop new products. He will also show how these tools are being used by NGOs and researchers in the fight against deforestation in projects such as www.trase.finance, in the process exposing instances of fraud, political corruption, labor rights abuses and other barriers to sustainable investment. Slides available here: https://www.slideshare.net/ConnectedDataLondon/graphs-in-sustainable-finance
What is graph all about, and why should you care? Graphs come in many shapes and forms, and can be used for different applications: Graph Analytics, Graph AI, Knowledge Graphs, and Graph Databases. Up until the beginning of the 2010s, the world was mostly running on spreadsheets and relational databases. To a large extent, it still does. But the NoSQL wave of databases has largely succeeded in instilling the "best tool for the job" mindset. After relational, key-value, document, and columnar, the latest link in this evolutionary proliferation of data structures is graph. Graph analytics, Graph AI, Knowledge Graphs and Graph Databases have been making waves, and have been included in hype cycles for the last couple of years. The Year of the Graph marked the beginning of it all, before the Gartners of the world got in the game. The Year of the Graph is a term coined to convey the fact that the time has come for this technology to flourish. The eponymous article that set the tone was published in January 2018 on ZDNet by domain expert George Anadiotis. George has been working with, and keeping an eye on, all things Graph since the early 2000s. He was one of the first to note the continuing rise of Graph Databases, and to bring this technology in front of a mainstream audience. The Year of the Graph has been going strong since 2018. In August 2018, Gartner started including Graph in its hype cycles. Ever since, Graph has been riding the upward slope of the Hype Cycle. The need for knowledge on these technologies is constantly growing. To respond to that need, the Year of the Graph newsletter was released in April 2018. In addition, a constant flow of graph-related news and resources is being shared on social media. To help people make educated choices, the Year of the Graph Database Report was released. The report has been hailed as the most comprehensive of its kind in the market, consistently helping people choose the most appropriate solution for their use case since 2018. The report, articles, news stream, and the newsletter have been reaching thousands of people, helping them understand and navigate this landscape. We'll talk about the Year of the Graph, the different shapes, forms, and applications for graphs, the latest news and trends, and wrap up with an ask me anything session. Slides available here: https://www.slideshare.net/ConnectedDataLondon/the-years-of-the-graph-the-future-of-the-future-is-here
Knowledge graphs are all the rage these days, but for many they are still an exotic notion which is hard to come to terms with. In this panel, experts who have been working with knowledge graphs before it was cool will share their experience. More specifically, we'll be looking into the interplay between semantics, SEO, schema.org, JSON-LD, and knowledge graphs. Though it may not be obvious, if you are doing SEO today, you are working with knowledge graphs. Ever since Google popularized the notion of knowledge graphs, it's been things, not strings. The "things" that search engines can understand are all in schema.org, which is, you guessed it, a framework for building knowledge graphs. Semantic SEO experts Jono Alderson and Andrea Volpini, and expert knowledge graph builder Panos Alexopoulos, will share how to onboard yourself to knowledge graphs via schema.org and JSON-LD, as well as the specifics of working with these technologies, and how they can be used to kick-start your own knowledge graphs. We'll also look at the other direction in this equation: how you can use your knowledge graphs to boost your SEO. Last but not least, we will examine the evolution of schema.org. Moderated by David Amerland. With Jono Alderson (Yoast), Panos Alexopoulos (Textkernel), and Andrea Volpini (Wordlift).
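For a concrete taste of the onboarding angle, here is a minimal, hypothetical schema.org snippet of the kind you would embed in a page for SEO, loaded with Python's rdflib (which bundles a JSON-LD parser in recent versions) to show that the markup really is a small knowledge graph; the names are made up for illustration:

```python
from rdflib import Graph

# A minimal schema.org description, as you might embed in a page's
# <script type="application/ld+json"> tag; an inline @vocab context
# avoids fetching the remote schema.org context
jsonld = """
{
  "@context": {"@vocab": "https://schema.org/"},
  "@type": "Organization",
  "name": "Example Corp",
  "url": "https://example.org",
  "founder": {"@type": "Person", "name": "Jane Doe"}
}
"""

g = Graph().parse(data=jsonld, format="json-ld")  # rdflib >= 6 includes JSON-LD support
for s, p, o in g:
    print(s, p, o)  # the "things, not strings" a search engine sees
```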
David Amerland, George Anadiotis, Panos Alexopoulos, and Teodora Petkova have a few things in common. Besides being successful professionals, each in their own way, they also share a passion not many people share: the Semantic Web. As the Semantic Web is turning 20, they come together to talk about that passion. The first rule of the Semantic Web in the 2020s is, you don't talk about the Semantic Web. Most people don't. For most people, the Semantic Web is something they may be vaguely familiar with, and perhaps something that has been tried, and failed. Truth is, you may not know it, but you use the Semantic Web every day, and you love it. The Semantic Web, soon to celebrate its 20th anniversary, may not enjoy the kind of universal acclaim the WWW got on its 30th birthday, although both were kickstarted by Tim Berners-Lee. It does, however, underpin Knowledge Graphs, and in that sense, it is in its heyday. We have known, loved, and used the Semantic Web for a long time, and we'll share why we think you should, too.
Find the answers to:
- Why go semantic?
- Should I use RDF or OWL? What is the difference, what is the link?
- Did you say smart data?

In this podcast you can check RDF integration examples and learn about ontologies and OWL. Keynote by Tara Raafat, PhD, Chief Ontologist at Mphasis.
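A quick, hypothetical illustration of the RDF/OWL relationship the episode digs into: OWL axioms are themselves written as RDF triples, so ontology and data live in the same graph. The ex: names below are invented for illustration:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/")  # made-up vocabulary
g = Graph()

# RDF: a plain, instance-level fact
g.add((EX.alice, RDF.type, EX.Employee))

# OWL: schema-level axioms about the vocabulary, stored as RDF triples too
g.add((EX.Employee, RDF.type, OWL.Class))
g.add((EX.Employee, RDFS.subClassOf, EX.Person))

# A reasoner (e.g. the owlrl package) could now infer that ex:alice is an ex:Person
print(g.serialize(format="turtle"))
```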
Whether we call it Semantic Web or Linked Data, Tim Berners-Lee's vision never really caught on among users and developers. Although part of this vision is about decentralization, and this is something a few people are working on, Semantic Web technology remains largely underutilized by them. In this panel, we will explore how the Semantic Web and decentralization can benefit each other, getting together people from both communities and exploring questions such as: Is the Semantic Web technological stack really as complex as it is perceived to be? How can it be made more accessible, and align better with today's realities in software development? What are the issues facing people working in decentralization, and how could Semantic Web technology provide solutions? What about sustainability? How can efforts aiming to provide services to the public at large find a way to sustain themselves, navigating a challenging business landscape? Featuring Andre Garzia from Mozilla, Sebastian Hellmann from DBpedia, and Ruben Verborgh from Ghent University, moderated by Jonathan Holtby from Hub of All Things.
Scientists, health researchers and policymakers are using all the tools they can get their hands on to try and beat the current global pandemic. Germany's National Centre for Diabetes Research (DZD) is one of the organizations turning to Artificial Intelligence, advanced visualisation techniques and other tools to aid the search for a vaccine and effective treatments. Graph technology is key in this effort. DZD is integrating data from various sources and linking them in a dedicated COVID-19 Knowledge Graph to help researchers and scientists quickly and efficiently find their way through the more than 40,000 publications out there on the problem. DZD's Head of Data Management and Knowledge Management, Dr. Alexander Jarasch, notes that "Graph enables a new dimension of data analysis by helping us to connect highly heterogeneous data from various disciplines."

The COVID GRAPH project is a voluntary initiative of graph enthusiasts and companies with the goal of building a knowledge graph with relevant information about the COVID-19 virus. It is a knowledge graph on COVID-19 that integrates various public datasets, including relevant publications, case statistics, genes and functions, molecular data and much more.

Still, the global scientific knowledge base is little more than a collection of documents. It is written by humans for humans, and we have done so for a long time. This makes perfect sense; after all, it is people that make up the audience, researchers in particular. Yet, with the monumental progress in information technologies over recent decades, one may wonder why the scientific knowledge communicated in scholarly literature remains largely inaccessible to machines. Surely it would be useful if some of that knowledge were more available to automated processing.

The Open Research Knowledge Graph (ORKG) project is working on answers and solutions. The project, recently initiated and coordinated by TIB (Leibniz Information Centre for Science and Technology and University Library), is open to the community. ORKG actively engages research infrastructures and research communities in the development of technologies and use cases for open graphs about research knowledge. Dr. Sören Auer, TIB Director and ORKG Lead, states that "Knowledge Graphs ... allow us to interlink, interconnect and integrate heterogeneous data from various sources in various formats, modalities, levels of structuredness, governance schemes etc. As a result the effort required for preparing and integrating data for answering specific research questions is dramatically reduced, and AI techniques can be more directly applied."

Join us as George Anadiotis hosts Alexander Jarasch and Sören Auer in a discussion that will go over:
- The chronic issues that plague scientific research, and how they apply to life sciences, and SARS-CoV-2 research in specific
- The way data, analytics, and AI can help deal with the issues and facilitate research
- The COVID GRAPH project: What is the goal? Who set it up? What does it include? Whom is it for? How does it work?
- Differences and similarities between property graphs and knowledge graphs, and how that applies in the COVID GRAPH project
- Can ORKG and COVID GRAPH work together?
- What are next steps / outlook? How can people get involved?
For as long as people have been thinking about thinking, we have imagined that somewhere in the inner reaches of our minds there are ghostly, intangible things called ideas which can be linked together to create representations of the world around us — a world that has a certain structure, conforms to certain rules, and to a certain extent, can be predicted and manipulated on the basis of our ideas. Rationalist philosophers have struggled for centuries to make a solid case for this intuitive, almost inborn view of human experience, but it is only with the advent of modern computing that we have the opportunity to build machines which truly think the way we think we think. For the first time, we can give concrete form to our mental representations as graphs or hypergraphs, explicitly specify our mental schemas as ontologies, and formally define the rules by which we reason and act on new information. If we so choose, we can even use these human-like building blocks to construct systems that carry far more information than any single human brain, and that connect and serve millions of people in real time. As enterprise knowledge graphs become increasingly mainstream, we appear to be headed in that direction, although there is no guarantee that the momentum will continue unless actively sustained. Where knowledge graphs are likely to be the most essential, in the long run, is at the interface between human and machine; mental representation versus formal knowledge representation. In this talk, we will take a step back from the many practical and social challenges of building large-scale knowledge graphs, which at this point are well-known. Instead, we will take up the quest for an ideal data model for knowledge representation and data integration, seeking common ground among the most popular data models used in industry and open source software, surveying what we suspect to be true of our own inner models, and previewing structure and process in Apache TinkerPop, version 4. We will also take a tentative step forward into the world of augmented perception via graph stream processing. Keynote by Joshua Shinavier, Uber Research Scientist, Apache TinkerPop co-founder, at Connected Data London 2019
Some of us may have been saying that for years, but now the Gartners of the world are picking up on it too. So, the oracles have spoken: "The application of graph processing and graph DBMSs will grow at 100 percent annually through 2022 to continuously accelerate data preparation and enable more complex and adaptive data science". That all sounds great, in theory. In practice, however, things are messy. If you're out to shop for a graph database, you will soon realize that there are no universally supported standards, performance evaluation is a dark art, and the vendor space seems to be expanding by the minute. Recently, the W3C initiated an effort to bring the various strands of graph databases closer together, but it's still a long way from fruition. So, what's all the fuss about? What are some of the things graph databases are being used for, what are they good at, and what are they not so good at? Property graphs and RDF are the two prevalent ways to model the world in graph; what is each of these good at, specifically? What problems does each of these have, and how are they being addressed? RDF* is a proposal that could help bridge graph models across property graphs and RDF. What is it, how does it work, and when will it be available to use in production? What about query languages? In the RDF world, SPARQL rules, but what about property graphs? Can Gremlin be the one graph virtual machine to unite them all? What about the future of graph databases? Could graph turn out to be a way to model data universally?

Moderated by George Anadiotis. Panelists:
- Geoffrey Horrell, Director of Applied Innovation, London Lab, Refinitiv
- Steve Sarsfield, VP of Product, Cambridge Semantics
- Joshua Shinavier, Research Scientist, Uber
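To make the two-model split concrete, here is a small, hypothetical sketch: the same one-hop question asked of an RDF graph in SPARQL (via Python's rdflib), with the roughly equivalent Gremlin traversal for a property graph shown as a comment. The ex: vocabulary and names are invented for illustration:

```python
from rdflib import Graph

g = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
""", format="turtle")

# SPARQL, the standard query language in the RDF world
results = g.query("""
PREFIX ex: <http://example.org/>
SELECT ?friend WHERE { ex:alice ex:knows ?friend . }
""")
for row in results:
    print(row.friend)  # http://example.org/bob

# A roughly equivalent Gremlin traversal over a property graph
# (e.g. with gremlinpython against a TinkerPop-enabled database):
#   g.V().has('person', 'name', 'alice').out('knows').values('name')
```

The contrast hints at the panel's theme: RDF leans on global identifiers and a declarative standard, while property graphs lean on labelled vertices, edge properties and traversal-style querying.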