RDF query language
Of Edges and Nodes: An Introduction to Graph Databases

How the records in your database relate to one another can play a role in deciding which kind of database is the best fit. If you have independent records with no relations to each other, or mostly one-to-many relations, relational databases work well. If, however, you have a large number of many-to-many relations, one type of database plays to its strengths: graph databases.

Social networks such as LinkedIn or Facebook are a good example: events, people, companies, and posts with comments are all continuously related to one another, also known as the social graph. Of course, all of this can be stored in a relational database too, but questions like "Give me all people I am connected to in the third degree who come from Germany and have worked at Aldi" are hard to answer there. For graph databases, they are trivial. Reason enough to give this topic a stage, and that is what this episode is about.

In an interview with the expert Michael Hunger we clarify what a graph database is, which use cases it models better than, say, relational databases, where graph databases originated, what the difference is between the property graph model and the triple store model, how languages such as Cypher, SPARQL, and Datalog extract data from a graph, and for which use cases this may not be the right data structure. We also give an introduction to knowledge graphs, LLMs, and GraphRAG.

Bonus: what the movie The Matrix has to do with graph databases.

You can find our current advertising partners at https://engineeringkiosk.dev/partners

Quick feedback on the episode:
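The "third-degree connections" query from the episode description is exactly the kind of traversal graph databases optimize for. As a rough illustration (a hypothetical five-person graph; all names and attributes below are made up), the same question can be answered with a plain breadth-first search in Python:

```python
from collections import deque

# Hypothetical mini social graph: people with attributes, plus "knows" edges.
people = {
    "me":    {"country": "FR", "employers": []},
    "anna":  {"country": "DE", "employers": ["Aldi"]},
    "ben":   {"country": "DE", "employers": ["Lidl"]},
    "clara": {"country": "DE", "employers": ["Aldi"]},
    "dave":  {"country": "US", "employers": ["Aldi"]},
}
knows = {
    "me": ["anna"],
    "anna": ["ben"],
    "ben": ["clara", "dave"],
    "clara": [],
    "dave": [],
}

def within_degree(start, max_degree):
    """Breadth-first search: return {person: degree} for everyone
    reachable from `start` in at most `max_degree` hops."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        if seen[current] == max_degree:
            continue  # do not expand beyond the degree limit
        for neighbor in knows.get(current, []):
            if neighbor not in seen:
                seen[neighbor] = seen[current] + 1
                queue.append(neighbor)
    return seen

# "All people I am connected to within 3 degrees, from Germany,
#  who have worked at Aldi."
matches = sorted(
    p for p, d in within_degree("me", 3).items()
    if p != "me"
    and people[p]["country"] == "DE"
    and "Aldi" in people[p]["employers"]
)
print(matches)  # → ['anna', 'clara']
```

A graph database such as Neo4j expresses this in one short Cypher pattern and, roughly speaking, follows stored adjacency directly instead of the cascade of self-joins a relational database would need for each extra degree.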
Years ago, I got to be an advisor for this company called data.world, and at the time, they were just getting started on helping figure out how to converge all the data sets in the world and help people work with them, combine them, and share them. They built this thing that was kind of like GitHub for data. I was interested in it because I could see at the time where the world was going, and we're going to need much more advanced tools for managing data. I tried to contribute in my small way, but my favorite thing about it is that I got to know Bryon Jacob, the CTO of data.world. Bryon is a delightful guy. This is one of the guys who's been thinking about the nature of data, the structure of data, and how we work with it in computers for his entire career. And he got onto a track that you could consider a little bit fringe: using graph databases, decades ago, and the semantic models we use to understand data from the thinking around RDF and the early semantic web. And now what he's built is a system that ingests any kind of data, parses it out, stores it in a graph database, and makes it accessible through a query language called SPARQL, which you'll hear us refer to. This is a kind of "advanced mode" episode, and I know we're going to lose some people. We refer to a lot of technical stuff that probably only data nerds are really going to be interested in. I won't be offended if you check out. But if you have any interest in data, or the future of analyzing data and using data in AIs, you need to listen to and understand this conversation. Bryon is an expert. He's built one of the most important kingpin tools for using all the data in large-scale organizations or projects within the new generative AI context.
If you are trying to use something like ChatGPT or another LLM as an interface to structured data, you're doing it wrong, and I think you'll be convinced of that as you start to understand what we're discussing today. So hang in there. I promise this is a really, REALLY valuable conversation for anybody who is trying to work at the forefront of using AIs for data analytics. I'm thrilled that we get to share this conversation with Bryon with you today.

Important links: SPARQL, data.world, HomeAway

About Bryon Jacob: Bryon Jacob is the CTO and co-founder of data.world, on a mission to build the world's most meaningful, collaborative, and abundant data resource. Bryon is a recognized leader in building large-scale consumer internet systems and an expert in data integration solutions. Bryon's twenty years of academic and professional experience span AI research at Case Western Reserve University, enterprise configuration software at Trilogy, consumer web experience at Amazon, and most recently ten years building HomeAway.com. At HomeAway, Bryon oversaw platform development and the integration of thirty acquisitions while building the world's largest online marketplace for vacation rentals.
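The system described above ingests arbitrary structured data, parses it into a graph, and exposes it via SPARQL. As a loose sketch of the first step only (this is not data.world's actual pipeline; the rows, keys, and identifiers below are invented), converting tabular records into subject-predicate-object triples can look like this:

```python
# Toy illustration: flatten tabular rows into RDF-style triples,
# the way a graph-backed catalog might ingest arbitrary structured data.
# All identifiers here are made up for the example.
rows = [
    {"id": "city/1", "name": "Austin", "state": "Texas"},
    {"id": "city/2", "name": "Berlin", "state": None},
]

def rows_to_triples(rows, id_key="id"):
    """Turn each row into (subject, predicate, object) triples,
    using the row's id as the subject and skipping empty cells."""
    triples = []
    for row in rows:
        subject = row[id_key]
        for key, value in row.items():
            if key == id_key or value is None:
                continue  # skip the identifier itself and missing values
            triples.append((subject, key, value))
    return triples

for triple in rows_to_triples(rows):
    print(triple)
```

Once everything is triples, heterogeneous sources share one uniform shape, which is what makes a single query language like SPARQL able to span all of them.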
In this episode, Jacqueline Klusik-Eckert talks with Harald Sack and Holger Simon about the knowledge graph and its significance for art history. A great deal of information about cultural assets and history can already be found online. Databases and repositories increasingly provide access to information, but they are often scattered across isolated silos. Institutions use different systems to make their data available, which leads to a fragmented landscape. Meta-databases such as Europeana and the Deutsche Digitale Bibliothek try to overcome this fragmentation, but their success is limited. It seems as if we needed a meta-meta-database to break through these silos. The basic idea of the internet and the concept of Linked Open Data (LOD) promise a remedy here. The challenge lies in representing this networked knowledge digitally. This is where the knowledge graph comes into play. Such a knowledge graph is being built as part of the NFDI4Culture project. While knowledge graph technology has long been in use elsewhere (Google has operated such a graph since 2012), art history may still be at the beginning of this development. GLAM institutions (Galleries, Libraries, Archives, Museums) play an important role in providing data, but they must also create and maintain incentives for sharing. For research, the knowledge graph opens up new horizons. It enables not only new research questions and visualisations of large amounts of data, but also a more complex enrichment of museum information. In the end, however, it is people who benefit from the insights drawn from these data. From modelling in the graph during digitisation to support from the NFDI, there are various ways to get involved.
Yet ultimately the challenge lies in how we, as a community of researchers and custodians of cultural heritage, can shape and use this complex data landscape together.

Prof. Dr. Harald Sack is Head of Information Service Engineering at FIZ Karlsruhe - Leibniz Institute for Information Infrastructure and professor at the Karlsruhe Institute of Technology (KIT), Institute of Applied Informatics and Formal Description Methods (AIFB), where he leads the "Information Service Engineering" research group. He is also NFDI4Culture co-spokesperson for FIZ Karlsruhe.

Prof. Dr. Holger Simon is spokesperson of the working group Digitale Kunstgeschichte, managing director of Pausanio GmbH & Co. KG, adjunct professor at the University of Cologne, and a member of the NFDI4Culture Steering Board.

Accompanying material for the episodes can be found on the homepage at https://www.arthistoricum.net/themen/podcasts/arthistocast

All episodes of the podcast are stored in heidICON with metadata and a persistent identifier. The episodes are licensed under Creative Commons CC BY 4.0 and can be downloaded. You can find them at https://doi.org/10.11588/heidicon/1738702

For questions, suggestions, criticism, and of course praise, you can email us at podcast@digitale-kunstgeschichte.de
If you enjoy our show, you can support us with a donation via Afdian: https://afdian.net/@pythonhunter

Guest: Wey Gu
Hosts: laike9m, laixintao, Ada Wen

Timeline:
00:00:13 Opening
00:00:27 Guest self-introduction
00:01:10 Topic: introduction to graphs and graph databases
00:06:30 Question: knowledge graphs and food-delivery route planning
00:07:51 Follow-up: can a knowledge graph be used as a traditional graph database?
00:08:53 Topic: the example of WeChat using NebulaGraph for storage
00:10:32 Question: the difference between traditional SQL and graph databases for friend-relationship queries
00:11:56 Follow-up: as the number of edges between two nodes in NebulaGraph grows, at what rate does query cost grow?
00:13:59 Question: is there a SQL-like standard across the query languages of different graph databases?
00:19:21 Question: do you write queries directly or through an SDK, and what does the result format look like?
00:20:21 Question: do different graph databases return the same data structure for the same query?
00:23:13 Question: how well does the industry support GQL?
00:25:43 Topic: graph databases and large language models
00:33:57 Question: how do graph databases assist large models?
00:39:00 Question: what background do users need to combine graph databases with large models?
00:41:30 Extension: both building and using knowledge graphs can be combined with large models
00:43:50 Extension: an example of graph-database-driven infrastructure operations
00:53:18 Extension: laixintao's journey with Prometheus
00:53:45 Topic: the experience of working at an open-source company
01:18:18 Topic: Wey Gu's podcast 开源面对面
01:26:00 Regular segment: favorite things
01:35:33 Closing

Related links:
00:01:32 Seven Bridges of Königsberg: Wikipedia | Baidu Baike
00:02:39 neo4j
00:03:28 RDF (Resource Description Framework): Wikipedia | Baidu Baike
00:03:56 Knowledge graph: Wikipedia | Baidu Baike
00:06:49 GIS (Geographic Information System): Wikipedia | Baidu Baike
00:07:01 NebulaGraph
00:08:36 TAO: The power of the graph
00:13:09 Redis geospatial
00:16:52 GQL: Wikipedia
00:17:02 Cypher Query Language
00:17:39 Gremlin: Home Page | Wikipedia
00:17:56 nGQL | a concise nGQL tutorial
00:18:22 SPARQL
00:29:01 Related earlier episode: Ep 39. Chatting with Alex about vector databases and career planning
00:35:02 References mentioned by Wey Gu: one | two
00:35:27 langchain | llama_index
00:43:50 An example of graph-database-driven infrastructure operations
00:53:18 Prometheus
00:56:33 Teahour | Teahour#90: A conversation with PingCAP's CTO about databases and programming languages
01:01:13 Recording and materials of laixintao's Awesome Commandline talk
01:18:18 开源面对面
01:26:18 Cursor: The AI-first Code Editor
01:30:37 Copilot | Copilot for CLI
01:34:08 Three udemy courses shared by laixintao, part one: Building Your English Brain
01:34:08 Part two: Getting Started with Wireshark
01:34:08 Part three: Discovering Backend Bottlenecks
01:35:00 Ada Wen's share
Provided as a free resource by DataStax AstraDB (https://www.datastax.com/products/datastax-astra?utm_source=DataMeshRadio). Data Mesh Radio Patreon (https://www.patreon.com/datameshradio) - get access to interviews well before they are released. Transcript for this episode (https://docs.google.com/document/d/1ls5QawrOffb3VGIfZCmPYHG0v729Ye7Gu7oKRaOYi8c/edit) provided by Starburst. See their Data Mesh Summit recordings (https://www.starburst.io/learn/events-webinars/datanova-on-demand/?datameshradio) and their great data mesh resource center (https://www.starburst.io/info/distributed-data-mesh-resource-center/?datameshradio). Knowledge Graph Conference website: https://www.knowledgegraph.tech/ Free ticket raffle for the Knowledge Graph Conference (submissions must be in by April 18 at 11:59pm PST): https://forms.gle/Gy8KSMNDxbBfib2Z6

In this episode of the Knowledge Graph Conference takeover week, special guest host Ellie Young (https://www.linkedin.com/in/sellieyoung/) interviewed Philippe Höij, founder at DFRNT. At the wrap-up, Philippe mentioned that data architects should be able to communicate in ways other than PowerPoint. We need new and better ways to express ourselves and the way things are connected. We will always need metadata around our data, and we need text to express our ambiguity; but we don't have great ways to express things that are slightly ambiguous - not fully formed but also mostly known. A good tool allows you to more easily query your model of the world so you can iterate and increment on it. That is where knowledge graphs can be the most helpful. Ellie responded, "It's not difficult, it's just complicated." Philippe shared his journey towards knowledge graphs, especially thinking about the AIDITTO project he and a team built out of the "Hack the Crisis Sweden" event in 2020 around COVID-19.
He needed a way to prototype, visualize, and collaborate on data and the connections between data at scale. A regular data model does not convey enough information about what the data is and how it relates. Ellie then shared some insight into the difficulties around collaborating on data across organizations and people in her climate change work at Common Action. Collaborating across organizations, all with different ways of working, you need a common "language" or way of communicating relative to data, but you can't easily develop a shared schema. Knowledge graphs provide incremental capabilities for collaboration. Philippe talked about how, when collaborating across organizations, you still have the needs that master data management (MDM) tries - and often fails - to address, but there is zero capability to manage how the other organizations' data flows into the shared data "pool". Philippe was having issues with open-ended semantic web technologies like SPARQL or OWL - they needed composable data structures in order to stay flexible in cases where they couldn't fully decompose a concept, especially as the concept or their understanding of it evolves. For Philippe, TerminusDB was a big win because it allowed for composable data structures and much easier querying across the graph. Ellie discussed the origins of TerminusDB being about collaboration across many entities/organizations, so it has a much different approach to accepting data that doesn't necessarily conform to a schema or data model. The "git for data" concept in TerminusDB was also a big win for Philippe, as it made experimentation much easier. Ellie shared some of the challenges in her work at Common Action around working with many different entities, many of which are small and not that data literate - or "data native" - and how they need to enable collaboration without rigidity as things are so dynamic.
Philippe discussed the need for enabling people to collaborate in a "messy" environment - the world is changing and trying to spend all your time and effort categorizing it into a single schema isn't...
On The Cloud Pod this week, Jonathan's got his detective hat on. Plus, Akamai steps up to Cloudflare with its Linode acquisition, AWS CloudFormation Hooks lift us up, and EPYC instances are now available. A big thanks to this week's sponsor: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights
As the interest in, and hype around, Knowledge Graphs is growing, there is also a growing need for sharing experience and best practices around them. Let's talk about definitions, best practices, hype, and reality. What is a Knowledge Graph? How can I use a Knowledge Graph, and how do I start building one? This panel is an opportunity to hear from industry experts using these technologies and approaches to discuss best practices, common pitfalls, and where this space is headed next. --- Katariina Kari, Research Engineer, Zalando Tech-Hub. Katariina Kari (née Nyberg) is a research engineer at the Zalando Tech-Hub in Helsinki. Katariina holds a Master of Science and a Master of Music and is specialised in the semantic web and in guiding the art business into the digital age. At Zalando she is modelling the Fashion Knowledge Graph, a common vocabulary for fashion with which Zalando improves its customer experience. Katariina also consults art institutions on embracing the digital age in their business and seeing its opportunities. Panos Alexopoulos, Head of Ontology, Textkernel BV. Panos Alexopoulos has been working at the intersection of data, semantics, language, and software for years, and is leading a team at Textkernel developing a large cross-lingual Knowledge Graph for HR and Recruitment. Alexopoulos holds a PhD in Knowledge Engineering and Management from the National Technical University of Athens, and has published 60 papers in international conferences, journals, and books. Sebastian Hellmann, dbpedia.org. Sebastian is a senior member of the "Agile Knowledge Engineering and Semantic Web" (AKSW) research center, focusing on semantic technology research, often in combination with other areas such as machine learning, databases, and natural language processing. Sebastian is head of the "Knowledge Integration and Language Technologies (KILT)" Competence Center at InfAI. He is also the executive director and a board member of the non-profit DBpedia Association.
Sebastian is also a contributor to various open-source projects and communities such as DBpedia, NLP2RDF, DL-Learner, and OWLG, and has been involved in numerous EU research projects. Natasa Varitimou, Information Architect, Thomson Reuters. Natasa has been working as a Linked Data architect in banking, life sciences, consumer goods, oil & gas, and EU projects. She believes data will eventually become the strongest asset in any organization, and works with Semantic Web technologies, which she finds great for describing the meaning of data, integrating data, and making it interoperable and of high quality. Natasa combines, links, expands, and builds upon vocabularies from various sources to create flexible and lightweight information models easily adaptable to different use cases. She queries these models together with their data directly in SPARQL, guarantees data quality based on business rules, creates new information, and defines services that bring together diverse data from different applications easily, with the semantics of the data directly accessible.
Wikidata is a community project with the goal of storing all of the world's knowledge in a machine-readable format. This data can be viewed, edited and - most excitingly - queried by everyone. In this talk I will show how the Wikidata Query Interface allows us to ask questions about the world around us and can help us explore them with automatic interactive visualisations. To do so, I will introduce the Wikidata project and explain how data is modelled in it. With this knowledge we will go on to learn about the SPARQL query language, which allows us to write queries against Wikidata. We can then use the Wikidata Query Builder, an online interface that turns queries into timelines, tables, and other interactively explorable visualisations. What kinds of queries can we expect? The world's biggest cities with a female mayor. A map of German election districts coloured by winning party (or gender, or age, ...). An interactive timeline of politicians who became federal ministers. Handball player nicknames... (Almost) everything is possible with Wikidata! Presenter: Knut Hühne
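Under the hood, a SPARQL query like "cities with a female mayor" is a basic graph pattern: a set of triple patterns whose variable bindings are joined. A minimal pure-Python illustration of that matching process (the triples below are hand-made stand-ins; real Wikidata uses Q- and P-identifiers and the public query endpoint):

```python
# A handful of hand-made triples standing in for Wikidata statements.
triples = [
    ("Paris", "instance_of", "city"),
    ("Paris", "head_of_government", "Anne Hidalgo"),
    ("Anne Hidalgo", "gender", "female"),
    ("Berlin", "instance_of", "city"),
    ("Berlin", "head_of_government", "Kai Wegner"),
    ("Kai Wegner", "gender", "male"),
]

def match(pattern, bindings):
    """Yield extended variable bindings for one triple pattern.
    Variables are strings starting with '?'."""
    for triple in triples:
        new = dict(bindings)
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                if term in new and new[term] != value:
                    ok = False  # variable already bound to a different value
                    break
                new[term] = value
            elif term != value:
                ok = False  # constant term does not match
                break
        if ok:
            yield new

def query(patterns):
    """Join several triple patterns, SPARQL-basic-graph-pattern style."""
    results = [{}]
    for pattern in patterns:
        results = [b for bindings in results for b in match(pattern, bindings)]
    return results

# SELECT ?city WHERE { ?city instance_of city .
#                      ?city head_of_government ?mayor .
#                      ?mayor gender female . }
answers = query([
    ("?city", "instance_of", "city"),
    ("?city", "head_of_government", "?mayor"),
    ("?mayor", "gender", "female"),
])
print([b["?city"] for b in answers])  # → ['Paris']
```

A real SPARQL engine adds indexing, filters, aggregation, and much more, but the join-of-triple-patterns core is the same idea.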
Some of us may have been saying that for years, but now the Gartners of the world are picking up on it too. So, the oracles have spoken: "The application of graph processing and graph DBMSs will grow at 100 percent annually through 2022 to continuously accelerate data preparation and enable more complex and adaptive data science". That all sounds great, in theory. In practice, however, things are messy. If you're out to shop for a graph database, you will soon realize that there are no universally supported standards, performance evaluation is a dark art, and the vendor space seems to be expanding by the minute. Recently, the W3C initiated an effort to bring the various strands of graph databases closer together, but it's still a long way from fruition. So, what's all the fuss about? What are some of the things graph databases are being used for, what are they good at, and what are they not so good at? Property graphs and RDF are the two prevalent ways to model the world as a graph; what is each of these good at, specifically? What problems does each of them have, and how are they being addressed? RDF* is a proposal that could help bridge graph models across property graphs and RDF. What is it, how does it work, and when will it be available to use in production? What about query languages? In the RDF world, SPARQL rules, but what about property graphs? Can Gremlin be the one graph virtual machine to unite them all? What about the future of graph databases? Could graph turn out to be a way to model data universally? Moderated by George Anadiotis. Panelists: Geoffrey Horrell, Director of Applied Innovation, London Lab at Refinitiv; Steve Sarsfield, VP of Product, Cambridge Semantics; Joshua Shinavier, Research Scientist, Uber
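The contrast between the two models the panel discusses can be made concrete in a few lines. In a property graph, relationships carry their own attributes; in plain RDF everything is a triple, so a fact about a relationship needs extra machinery, which RDF* addresses by letting a whole triple be the subject of another triple. A toy sketch (the data structures and names below are invented for illustration, with the quoted RDF* triple shown as a nested tuple):

```python
# Property-graph style: nodes and edges both carry key/value properties.
nodes = {
    "alice": {"label": "Person", "name": "Alice"},
    "acme":  {"label": "Company", "name": "ACME"},
}
edges = [
    # The edge property 'since' lives directly on the relationship.
    {"from": "alice", "to": "acme", "type": "WORKS_AT", "since": 2019},
]

# RDF style: everything is a subject-predicate-object triple, so an
# attribute of the relationship itself needs an extra statement.
# RDF*-style: the quoted triple becomes the subject of that statement.
rdf_triples = [
    ("alice", "worksAt", "acme"),
    (("alice", "worksAt", "acme"), "since", 2019),
]

# The same question in both models: since when does Alice work at ACME?
pg_answer = next(e["since"] for e in edges
                 if e["from"] == "alice" and e["type"] == "WORKS_AT")
rdf_answer = next(o for s, p, o in rdf_triples
                  if s == ("alice", "worksAt", "acme") and p == "since")
print(pg_answer, rdf_answer)  # → 2019 2019
```

Without RDF*, plain RDF would express the same edge attribute through reification (a blank node with rdf:subject, rdf:predicate, rdf:object statements), which is considerably noisier; that is the gap the proposal aims to close.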
Simon and Nicki share a broad range of interesting updates! 00:48 Storage 01:43 Compute 05:27 Networking 16:07 Databases 12:03 Developer Tools 13:18 Analytics 19:06 IoT 20:42 Customer Engagement 21:03 End User Computing 22:31 Machine Learning 25:27 Application Integration 27:35 Management and Governance 29:17 Media 30:53 Security 32:56 Blockchain 33:14 Quick Starts 33:51 Training 36:11 Public Datasets 37:12 Robotics Shownotes: Topic || Storage AWS Snowball Edge now supports offline software updates for Snowball Edge devices in air-gapped environments | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-snowball-edge-now-supports-offline-software-updates-for-snowball-edge-devices-in-air-gapped-environments/ Topic || Compute Now Available: Amazon EC2 High Memory Instances with up to 24 TB of memory, Purpose-built to Run Large In-memory Databases, like SAP HANA | https://aws.amazon.com/about-aws/whats-new/2019/10/now-available-amazon-ec2-high-memory-instances-purpose-built-run-large-in-memory-databases/ Introducing Availability of Amazon EC2 A1 Bare Metal Instances | https://aws.amazon.com/about-aws/whats-new/2019/10/introducing-availability-of-amazon-ec2-a1-bare-metal-instances/ Windows Nodes Supported by Amazon EKS | https://aws.amazon.com/about-aws/whats-new/2019/10/windows-nodes-supported-by-amazon-eks/ Amazon ECS now Supports ECS Image SHA Tracking | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-ecs-now-supports-ecs-image-sha-tracking/ AWS Serverless Application Model feature support updates for Amazon API Gateway and more | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-serverless-application-model-feature-support-updates-for-amazon-api-gateway-and-more/ Queuing Purchases of EC2 RIs | https://aws.amazon.com/about-aws/whats-new/2019/10/queuing-purchases-of-ec2-ris/ Topic || Network AWS Direct Connect Announces the Support for Granular Cost Allocation and Removal of Payer ID Restriction for Direct Connect Gateway Association. 
| https://aws.amazon.com/about-aws/whats-new/2019/10/aws-direct-connect-aws-direct-connect-announces-the-support-for-granular-cost-allocation-and-removal-of-payer-id-restriction-for-direct-connect-gateway-association/ AWS Direct Connect Announces Resiliency Toolkit to Help Customers Order Resilient Connectivity to AWS | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-direct-connect-announces-resiliency-toolkit-to-help-customers-order-resilient-connectivity-to-aws/ Amazon VPC Traffic Mirroring Now Supports AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-vpc-traffic-mirroring-now-supports-aws-cloudformation/ Application Load Balancer and Network Load Balancer Add New Security Policies for Forward Secrecy with More Stringent Protocols and Ciphers | https://aws.amazon.com/about-aws/whats-new/2019/10/application-load-balancer-and-network-load-balancer-add-new-security-policies-for-forward-secrecy-with-more-strigent-protocols-and-ciphers/ Topic || Databases Amazon RDS on VMware is now generally available | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-rds-on-vmware-is-now-generally-available/ Amazon RDS Enables Detailed Backup Storage Billing | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-rds-enables-detailed-backup-storage-billing/ Amazon RDS for PostgreSQL Supports Minor Version 11.5, 10.10, 9.6.15, 9.5.19, 9.4.24, adds Transportable Database Feature | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-rds-for-postgresql-supports-minor-version-115-1010-9615-9515-9424-adds-transportable-database-feature/ Amazon ElastiCache launches self-service updates for Memcached and Redis Cache Clusters | https://aws.amazon.com/about-aws/whats-new/2019/10/elasticache-memcached-self-service-updates/ Amazon DocumentDB (with MongoDB compatibility) adds additional Aggregation Pipeline Capabilities including $lookup | 
https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-documentdb-add-additional-aggregation-pipeline-capabilities/ Amazon Neptune now supports Streams to capture graph data changes | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-neptune-now-supports-streams-to-capture-graph-data-changes/ Amazon Neptune now supports SPARQL 1.1 federated query | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-neptune-now-supports-SPARQL-11-federated-query/ Topic || Developer Tools AWS CodePipeline Enables Setting Environment Variables on AWS CodeBuild Build Jobs | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-codepipeline-enables-setting-environment-variables-on-aws-codebuild-build-jobs/ AWS CodePipeline Adds Execution Visualization to Pipeline Execution History | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-codepipeline-adds-execution-visualization-to-pipeline-execution-history/ Topic || Analytics Amazon Redshift introduces AZ64, a new compression encoding for optimized storage and high query performance | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-redshift-introduces-az64-a-new-compression-encoding-for-optimized-storage-and-high-query-performance/ Amazon Redshift Improves Performance of Inter-Region Snapshot Transfers | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-redshift-improves-performance-of-inter-region-snapshot-transfers/ Amazon Elasticsearch Service provides option to mandate HTTPS | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-elasticsearch-service-provides-option-to-mandate-https/ Amazon Athena now provides an interface VPC endpoint | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-athena-now-provides-an-interface-VPC-endpoint/ Amazon Kinesis Data Firehose adds cross-account delivery to Amazon Elasticsearch Service | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-kinesis-data-firehose-adds-cross-account-delivery-to-amazon-elasticsearch-service/ Amazon Kinesis Data 
Firehose adds support for data stream delivery to Amazon Elasticsearch Service 7.x clusters | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-kinesis-data-firehose-adds-support-data-stream-delivery-amazon-elasticsearch-service/ Amazon QuickSight announces Data Source Sharing, Table Transpose, New Filtering and Analytical Capabilities | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-quicksight-announces-data-source-sharing-table-transpose-new-filtering-analytics-capabilities/ AWS Glue now provides ability to use custom certificates for JDBC Connections | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-glue-now-provides-ability-to-use-custom-certificates-for-jdbc-connections/ You can now expand your Amazon MSK clusters and deploy new clusters across 2-AZs | https://aws.amazon.com/about-aws/whats-new/2019/10/now-expand-your-amazon-msk-clusters-and-deploy-new-clusters-across-2-azs/ Amazon EMR Adds Support for Spark 2.4.4, Flink 1.8.1, and the Ability to Reconfigure Multiple Master Nodes | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-emr-adds-support-for-spark-2-4-4-flink-1-8-1-and-ability-to-reconfigure-multiple-master-nodes/ Topic || IoT Two New Solution Accelerators for AWS IoT Greengrass Machine Learning Inference and Extract, Transform, Load Functions | https://aws.amazon.com/about-aws/whats-new/2019/10/two-new-solution-accelerators-for-aws-iot-greengrass-machine-lea/ AWS IoT Core Adds the Ability to Retrieve Data from DynamoDB using Rule SQL | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-iot-core-adds-ability-to-retrieve-data-from-dynamodb-using-rule-sql/ PSoC 62 Prototyping Kit is now qualified for Amazon FreeRTOS | https://aws.amazon.com/about-aws/whats-new/2019/10/psoc-62-prototyping-kit-qualified-for-amazon-freertos/ Topic || Customer Engagement Amazon Pinpoint Adds Support for Message Templates | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-pinpoint-adds-support-for-message-templates/ Topic || 
Topic || End User Computing
Amazon AppStream 2.0 adds support for 4K Ultra HD resolution on 2 monitors and 2K resolution on 4 monitors | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-appstream-2-adds-support-for-4k-ultra-hd-resolution-on-2-monitors-and-2k-resolution-on-4-monitors/
Amazon AppStream 2.0 Now Supports FIPS 140-2 Compliant Endpoints | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-appstream-2-now-supports-fips-140-2-compliant-endpoints/
Amazon Chime now supports screen sharing from Mozilla Firefox and Google Chrome without a plug-in or extension | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-chime-now-supports-screen-sharing-from-mozilla-firefox-and-google-chrome-without-a-plug-in-or-extension/
Topic || Machine Learning
Amazon Translate now adds support for seven new languages - Greek, Romanian, Hungarian, Ukrainian, Vietnamese, Thai, and Urdu | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-translate-adds-support-seven-new-languages/
Introducing Amazon SageMaker ml.p3dn.24xlarge instances, optimized for distributed machine learning with up to 4x the network bandwidth of ml.p3.16xlarge instances | https://aws.amazon.com/about-aws/whats-new/2019/10/introducing-amazon-sagemaker-mlp3dn24xlarge-instances/
SageMaker Notebooks now support diffing | https://aws.amazon.com/about-aws/whats-new/2019/10/sagemaker-notebooks-now-support-diffing/
Amazon Lex Adds Support for Checkpoints in Session APIs | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-lex-adds-support-for-checkpoints-in-session-apis/
Amazon SageMaker Ground Truth Adds Built-in Workflows for the Verification and Adjustment of Data Labels | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-sagemaker-ground-truth-adds-built-in-workflows-for-verification-and-adjustment-of-data-labels/
AWS Chatbot Now Supports Notifications from AWS Config | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-chatbot-now-supports-notifications-from-aws-config/
AWS Deep Learning Containers now support PyTorch | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-deep-learning-containers-now-support-pytorch/
Topic || Application Integration
AWS Step Functions expands Amazon SageMaker service integration | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-step-functions-expands-amazon-sagemaker-service-integration/
Amazon EventBridge now supports AWS CloudFormation | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-eventbridge-supports-aws-cloudformation/
Amazon API Gateway now supports access logging to Amazon Kinesis Data Firehose | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-api-gateway-now-supports-access-logging-to-amazon-kinesis-data-firehose/
Topic || Management and Governance
AWS Backup Enhances SNS Notifications to filter on job status | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-backup-enhances-sns-notifications-to-filter-on-job-status/
AWS Managed Services Console now supports search and usage-based filtering to improve change type discovery | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-managed-services-console-now-supports-search-and-usage-based-filtering-to-improve-change-type-discovery/
AWS Console Mobile Application Launches Federated Login for iOS | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-console-mobile-application-launches-federated-login-for-ios/
Topic || Media
Announcing New AWS Elemental MediaConvert Features for Accelerated Transcoding, DASH, and AVC Video Quality | https://aws.amazon.com/about-aws/whats-new/2019/10/announcing-new-aws-elemental-mediaconvert-features-for-accelerated-transcoding-dash-and-avc-video-quality/
Topic || Security
Amazon Cognito Increases CloudFormation Support | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-cognito-increases-cloudformation-support/
Amazon Inspector adds CIS Benchmark support for Windows 2016 | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-inspector-adds-cis-benchmark-support-for-windows-2016/
AWS Firewall Manager now supports management of Amazon VPC security groups | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-firewall-manager-now-supports-management-of-amazon-vpc-security-groups/
Amazon GuardDuty Adds Three New Threat Detections | https://aws.amazon.com/about-aws/whats-new/2019/10/amazon-guardduty-adds-three-new-threat-detections/
Topic || Block Chain
New Quick Start deploys Amazon Managed Blockchain | https://aws.amazon.com/about-aws/whats-new/2019/10/new-quick-start-deploys-amazon-managed-blockchain/
Topic || AWS Quick Starts
New Quick Start deploys TIBCO JasperReports Server on AWS | https://aws.amazon.com/about-aws/whats-new/2019/10/new-quick-start-deploys-tibco-jasperreports-server-on-aws/
Topic || Training
New Training Courses Teach New APN Partners to Better Help Their Customers | https://aws.amazon.com/about-aws/whats-new/2019/10/new-training-courses-teach-new-apn-partners-to-better-help-their-customers/
New Courses Available to Help You Grow and Accelerate Your AWS Cloud Skills | https://aws.amazon.com/about-aws/whats-new/2019/10/new-courses-available-to-help-you-grow-and-accelerate-your-aws-cloud-skills/
New Digital Course on Coursera - AWS Fundamentals: Migrating to the Cloud | https://aws.amazon.com/about-aws/whats-new/2019/10/new-digital-course-on-coursera-aws-fundamentals-migrating-to-the-cloud/
Topic || Public Data Sets
New AWS Public Datasets Available from Audi, MIT, Allen Institute for Cell Science, Finnish Meteorological Institute, and others | https://aws.amazon.com/about-aws/whats-new/2019/10/new-aws-public-datasets-available/
Topic || Robotics
AWS RoboMaker introduces support for Robot Operating System 2 (ROS2) in beta release | https://aws.amazon.com/about-aws/whats-new/2019/10/aws-robomaker-introduces-support-robot-operating-system-2-beta-release/
References:
Wikidata
The Wikimedia Foundation's projects
Wikidata's seventh anniversary
The Wikidata query interface
Plenty of example queries
A good tutorial for getting started with the SPARQL query language
The Creative Commons Zero license
A Wikidata query: which items have the value 42 for one of their properties?
April (Q2858967) on Wikidata
Monumental: free software that looks up information about listed monuments around a given point, based on Wikidata, Wikipedia and Wikimedia Commons
Better protecting endangered species with Wikidata: an initiative of the Ministry for Europe and Foreign Affairs together with Open Knowledge France
Performance venues and Wikidata, in a few statements: the result of research published in 2019 by the Canadian initiative "Un avenir numérique lié"
A new video of the gesticulated lecture « Un Faible Degré d'Originalité » by Antoine Defoort, still under the free Creative Commons BY-SA license
You can comment on the episodes, send us feedback to help us improve, or make suggestions. You can even leave a rating out of 5 stars if you like. Your feedback matters to us because, unlike at a conference, we have no audience in front of us that can react. To do so, head to the dedicated page.
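The "value 42" query mentioned in the references can be written against the Wikidata SPARQL endpoint (query.wikidata.org) roughly as follows. This is a minimal sketch, not the episode's exact query; the `LIMIT` is only there to keep the result set small:

```sparql
# Which items have the value 42 for one of their properties?
SELECT ?item ?property WHERE {
  ?item ?property 42 .
}
LIMIT 100
```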
The last episode in our Semantics series goes in-depth into RDF, SPARQL, SKOS and Triples, explaining how these, combined with microdata as found on Schema.org and other open linked data sources, provide the semantic context to help computers "understand" what is being described on a web page.
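The core idea behind RDF and SPARQL that the episode describes, data as triples queried by patterns with variables, can be illustrated with a deliberately tiny sketch. This is not a real SPARQL engine; the triples and names below are invented for illustration, and strings beginning with `?` play the role of SPARQL variables:

```python
# A toy triple store: (subject, predicate, object) tuples.
triples = {
    ("<alice>", "<knows>", "<bob>"),
    ("<bob>", "<knows>", "<carol>"),
    ("<bob>", "<name>", '"Bob"'),
}

def match(pattern, triple):
    """Return a variable binding if the triple fits the pattern, else None."""
    binding = {}
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if binding.get(p, t) != t:   # a repeated variable must bind consistently
                return None
            binding[p] = t
        elif p != t:                     # constants must match exactly
            return None
    return binding

# "Whom does <bob> know?" -- analogous to: SELECT ?x WHERE { <bob> <knows> ?x }
results = [b for t in triples if (b := match(("<bob>", "<knows>", "?x"), t)) is not None]
```

A full engine additionally joins the bindings of several such patterns, but the variable-binding step shown here is the heart of basic graph pattern matching.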
Informatische Werkzeuge in den Geistes- und Sozialwissenschaften II, 2019 (QHD 1920 - video & slides)
Dennis is a computer-science engineer who works with large volumes of data. He manages and develops applications in the academic sphere, using MediaWiki for collaborative projects. He is a member of the board of Wikimedia Chile.
Ideas: Wikidata is the open-data project that lets machines identify knowledge and helps us extract value from Wikipedia. Organizations that act as data repositories should upload their content to Wikidata. Wikidata is doing a lot of work with data about people, countries and cities; that data is shown on Wikipedia via Wikidata. Anyone can upload data to Wikidata (there are dedicated tools), and Wikidata has tools for linking the elements of newly uploaded data with those that already exist. Autores.uy uploaded data on all of Uruguay's authors from Wikipedia. Wikidata prefers connecting to other databases over receiving static data uploads; that way Wikipedia would update dynamically as the underlying resources change. Objects must have unique identifiers for this process; having a URI matters more than the format. The National Library of Chile released data describing ontologies, in RDF documents; it may have been the first public body in Latin America to do so. Wikidata creates an environment in which federated SPARQL queries across different sources are possible. Wikipedia is not Wikidata's only client: some people use Wikidata personally from OpenRefine. WolframAlpha uses other, more formal resources but follows a logic similar to Wikidata's. The best way to create impact with data is to think about what improves people's quality of life. Worth a look: Eduado Graels
Fakultät für Mathematik, Informatik und Statistik - Digitale Hochschulschriften der LMU - Teil 01/02
For the last decade a multitude of new data formats for the World Wide Web have been developed, and a huge amount of heterogeneous semi-structured data is flourishing online. With the ever-increasing number of documents on the Web, rules have been identified as the means of choice for reasoning about this data, transforming it, and integrating it. Query languages such as SPARQL and rule languages such as Xcerpt use compound queries that are matched or unified with semi-structured data. This notion of unification differs from the one known from logic programming engines in that (i) it provides constructs that allow queries to be incomplete in several ways, (ii) variables may have different types, (iii) matching results in sets of substitutions for the variables in the query instead of a single substitution, and (iv) subsumption between queries is much harder to decide than in logic programming. This thesis abstracts from Xcerpt query term simulation, SPARQL graph pattern matching and XPath XML document matching, and shows that all of them can be considered a form of rich unification. Given a set of mappings between substitution sets of different languages, this abstraction opens up the possibility of format-versatile querying, i.e. the combination of queries in different formats, or the transformation of one format into another within a single rule. To show the superiority of this approach, this thesis introduces an extension of Xcerpt called Xcrdf, and describes use cases for the combined querying and integration of RDF and XML data. With XML being the predominant Web format, and RDF the predominant Semantic Web format, Xcrdf extends Xcerpt by a set of RDF query terms and construct terms, including query primitives for RDF containers, collections, and reifications.
Moreover, Xcrdf includes an RDF path query language called RPL that is more expressive than previously proposed polynomial-time RDF path query languages, but can still be evaluated in polynomial time combined complexity. Besides the introduction of this framework for data integration based on rich unification, this thesis extends the theoretical knowledge about Xcerpt in several ways: We show that Xcerpt simulation unification is decidable, and give complexity bounds for subsumption in several fragments of Xcerpt query terms. The proof is based on a set of subsumption monotone query term transformations, and is only feasible because of the injectivity requirement on subterms of Xcerpt queries. The proof gives rise to an algorithm for deciding Xcerpt query term simulation. Moreover, we give a semantics to locally and weakly stratified Xcerpt programs, but this semantics is applicable not only to Xcerpt, but to any rule language with rich unification, including multi-rule SPARQL programs. Finally, we show how Xcerpt grouping stratification can be reduced to Xcerpt negation stratification, thereby also introducing the notion of local grouping stratification and weak grouping stratification.
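The abstract's notion of rich unification, queries that may be incomplete, matched anywhere in the data, and answered by a *set* of substitutions, can be sketched in a few lines. This is a toy illustration under invented names, not Xcerpt's actual simulation unification: terms are tuples `("label", child1, ...)`, and strings beginning with `?` are variables:

```python
def unify(query, data):
    """Yield every substitution under which `query` matches `data` or a subterm."""
    def match_here(q, d):
        if isinstance(q, str) and q.startswith("?"):
            yield {q: d}                                  # variable binds the subterm
        elif isinstance(q, tuple) and isinstance(d, tuple) \
                and len(q) == len(d) and q[0] == d[0]:
            subs = [{}]
            for qc, dc in zip(q[1:], d[1:]):              # match children pairwise
                subs = [{**s, **m} for s in subs for m in match_here(qc, dc)
                        if all(s.get(k, v) == v for k, v in m.items())]
            yield from subs
        elif q == d:                                      # matching constants
            yield {}
    yield from match_here(query, data)
    if isinstance(data, tuple):                           # match at any depth
        for child in data[1:]:
            yield from unify(query, child)

data = ("people", ("person", "alice"), ("person", "bob"))
answers = list(unify(("person", "?n"), data))             # two substitutions, not one
```

Note how the query `("person", "?n")` is "incomplete in depth" (it matches below the root) and how the answer is a substitution set, the points (i) and (iii) of the abstract, rather than the single most general unifier a logic programming engine would compute.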
Software Engineering Radio - The Podcast for Professional Software Developers
In this episode we're talking to James A. Hendler about the semantic web. We start with a definition of the semantic web and a discussion of its main ingredients. We then look at (more or less) related topics such as Prolog, artificial intelligence, wisdom of the crowds, and tagging. In the next section we discuss the core semantic web technologies: RDF, OWL, inference engines, SPARQL, and GRDDL. We conclude our discussion by looking at the status of the semantic web today and a couple of example applications.
Hear how Drupal, Semantic MediaWiki and other bleeding edge tech were enlisted along with pixie dust, FOAF, RDF, OWL, SPARQL, Linked Data (basically all the Semantic Web stuff) to build a distributed social network. The focus will be not on evangelism (I don’t really care about that) but how disparate open source platforms can talk and work together. This stuff actually works and makes development more fluid. These technologies make local development easier, but when it is time to broaden your scope, classic search is still king. How can you leverage this? Newcomers such as Yahoo Searchmonkey can play an important role in the creation of a truly distributed information system. David Peterson has been a web developer since 1995. He works way up north in the tropics of Townsville, about as far from any tech as possible. Currently he is Head of Research at BoaB interactive and is working hard to kickstart the Semantic Web down under. Not only that, but he is an Advisory Committee representative to the W3C. Wow. His wonderful family, making lovely photographs and searching for the perfect espresso keeps him happy. Licensed as Creative Commons Attribution-Share Alike 3.0 (http://creativecommons.org/licenses/by-sa/3.0/).
Fakultät für Mathematik, Informatik und Statistik - Digitale Hochschulschriften der LMU - Teil 01/02
Visions of the next generation Web such as the "Semantic Web" or the "Web 2.0" have triggered the emergence of a multitude of data formats. These formats have different characteristics as far as the shape of data is concerned (for example tree- vs. graph-shaped). They are accompanied by a puzzlingly large number of query languages, each limited to one data format. Thus, a key feature of the Web, namely to make it possible to access anything published by anyone, is compromised. This thesis is devoted to versatile query languages capable of accessing data in a variety of Web formats. The issue is addressed from three angles: language design; common, yet uniform semantics; and common, yet uniform evaluation. The thesis is accordingly divided into three parts: First, we consider the query language Xcerpt as an example of the advocated class of versatile Web query languages. Using this concrete exemplar allows us to clarify and discuss the vision of versatility in detail. Second, a number of query languages, XPath, XQuery, SPARQL, and Xcerpt, are translated into a common intermediary language, CIQLog. This language has a purely logical semantics, which makes it easily amenable to optimizations. As a side effect, this provides the first (to the best of our knowledge) logical semantics for XQuery and SPARQL. It is a very useful tool for understanding the commonalities and differences of the considered languages. Third, the intermediate logical language is translated into a query algebra, CIQCAG. The core feature of CIQCAG is that it scales from tree- to graph-shaped data and queries without efficiency losses when tree data and queries are considered: it is shown that, in these cases, optimal complexities are achieved. CIQCAG is also shown to evaluate each of the aforementioned query languages with a complexity at least as good as the best known evaluation methods so far.
For example, navigational XPath is evaluated with space complexity O(q·d) and time complexity O(q·n), where q is the query size, n the data size, and d the depth of the (tree-shaped) data. CIQCAG is further shown to provide linear-time and linear-space evaluation of tree-shaped queries for a larger class of graph-shaped data than any previously proposed method. This larger class of graph-shaped data, called continuous-image graphs (CIGs for short), is introduced for the first time in this thesis. A (directed) graph is a CIG if its nodes can be totally ordered in such a manner that, for this order, the children of any node form a continuous interval. CIQCAG achieves these properties by employing a novel data structure, called sequence map, that allows an efficient evaluation of tree-shaped queries, or of tree-shaped cores of graph-shaped queries, on any graph-shaped data. While being ideally suited to trees and CIGs, the data structure gracefully degrades to unrestricted graphs. It yields a remarkably efficient evaluation even on graph-shaped data that only a few edges prevent from being trees or CIGs.
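The continuous-interval condition in the CIG definition is easy to state as code. The sketch below (invented helper, not from the thesis) only checks whether one *given* total order witnesses the CIG property, whereas the definition asks whether *some* such order exists:

```python
def is_cig_order(order, children):
    """order: list of all nodes; children: dict mapping node -> set of its children.

    Returns True iff, under this order, the children of every node
    occupy one contiguous interval of positions.
    """
    pos = {node: i for i, node in enumerate(order)}
    for kids in children.values():
        idx = sorted(pos[c] for c in kids)
        if idx and idx[-1] - idx[0] + 1 != len(idx):      # a gap breaks contiguity
            return False
    return True

# The diamond a -> {b, c}, d -> {b, c} is a CIG under the order [a, b, c, d]:
ok = is_cig_order(["a", "b", "c", "d"], {"a": {"b", "c"}, "d": {"b", "c"}})
# ...but children {b, d} of a single parent are not contiguous in that order:
bad = is_cig_order(["a", "b", "c", "d"], {"a": {"b", "d"}})
```

The diamond example also shows why CIGs are strictly larger than trees: two parents may share the same interval of children.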