LDE is a suite of Node.js libraries to power your Linked Data applications and pipelines. Use it to efficiently query, analyze, transform, enrich and validate RDF datasets.
LDE is built on standards including SPARQL, SHACL and DCAT-AP 3.0.
- Discover and retrieve datasets from DCAT-AP 3.0 registries.
- Query and transform datasets with pure SPARQL queries (instead of code or a DSL): no vendor lock-in.
- Use SPARQL endpoints directly when possible; import data dumps to a local endpoint when necessary.
- Compose pipelines with YAML (for non-technical users) or TypeScript code (for developers).
- Get started quickly with ready-to-use Docker images.
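To give a feel for composing a pipeline in TypeScript, here is a minimal, self-contained sketch of the step-chaining idea. The `Pipeline`, `Step`, and `Quad` names below are hypothetical and defined inline for illustration; they are not the actual @lde/pipeline API.

```typescript
// Hypothetical sketch of pipeline composition (NOT the real @lde/pipeline API).
// A pipeline is a sequence of steps, each transforming a set of RDF quads.

type Quad = { subject: string; predicate: string; object: string };
type Step = (quads: Quad[]) => Quad[] | Promise<Quad[]>;

class Pipeline {
  private steps: Step[] = [];

  // Steps run in the order they were added.
  addStep(step: Step): this {
    this.steps.push(step);
    return this;
  }

  async run(input: Quad[]): Promise<Quad[]> {
    let quads = input;
    for (const step of this.steps) {
      quads = await step(quads);
    }
    return quads;
  }
}

// Example: keep only schema.org name statements, then normalize their values.
const pipeline = new Pipeline()
  .addStep((quads) => quads.filter((q) => q.predicate === 'http://schema.org/name'))
  .addStep((quads) => quads.map((q) => ({ ...q, object: q.object.trim() })));
```

In the real packages, steps would typically be pure SPARQL queries rather than inline JavaScript transforms, which is what keeps pipelines portable across endpoints.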
LDE is an Nx monorepo that includes the following packages:
- @lde/dataset: core dataset and distribution objects used throughout the other packages
- @lde/dataset-registry-client: retrieve dataset descriptions from DCAT-AP 3.0 registries
- @lde/distribution-download: download distributions for processing locally
- @lde/docgen: generate documentation from RDF such as SHACL shapes
- @lde/fastify-rdf: Fastify plugin for serving RDF data with content negotiation
- @lde/local-sparql-endpoint: quickly start a local SPARQL endpoint for testing and development
- @lde/pipeline: build pipelines that query, transform and enrich Linked Data
- @lde/pipeline-console-reporter: console progress reporter for pipelines
- @lde/pipeline-void: VoID statistical analysis for RDF datasets
- @lde/sparql-importer: import data dumps to a local SPARQL endpoint for querying
- @lde/sparql-monitor: monitor SPARQL endpoints with periodic checks
- @lde/sparql-qlever: QLever SPARQL adapter for importing and serving data
- @lde/task-runner: task runner core classes and interfaces
- @lde/task-runner-docker: run tasks in Docker containers
- @lde/task-runner-native: run tasks natively on the host system
- @lde/validator: validate datasets and pipeline outputs against SHACL shapes
- @lde/wait-for-sparql: wait for a SPARQL endpoint to become available
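As an illustration of the kind of utility @lde/wait-for-sparql provides, the generic retry loop below sketches the polling pattern. The `waitFor` function and its options are hypothetical, not the package's actual API; the probe is injected so the loop can be exercised without a live endpoint.

```typescript
// Hypothetical sketch of an availability-polling loop (NOT the real
// @lde/wait-for-sparql API). In practice the probe would issue a trivial
// SPARQL query (e.g. `ASK {}`) against the endpoint and report success.

async function waitFor(
  probe: () => Promise<boolean>,
  { retries = 5, intervalMs = 100 }: { retries?: number; intervalMs?: number } = {}
): Promise<boolean> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    if (await probe()) {
      return true; // Endpoint responded; stop polling.
    }
    // Wait before the next attempt.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false; // Gave up after the configured number of retries.
}
```

This pattern is what makes pipelines robust when a local SPARQL endpoint (for example one started by @lde/local-sparql-endpoint) needs a moment to come up before queries are sent to it.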