This document provides a detailed description of the features referenced in our SLA. All use of software available on the LinkedData.Center website is subject to LinkedData.Center's Terms of Use. In the event of a conflict between this document and the Terms of Use, the Terms of Use prevail.

SPARQL query endpoint
You will be provided with an endpoint that fully supports the W3C SPARQL 1.1 Query Language and Protocol recommendations. Depending on the subscribed plan, you can have a dedicated or a shared endpoint. Our SPARQL query endpoints use a unique caching algorithm that allows performance bursts.

SPARQL update endpoint
You will be provided with an endpoint that fully supports the W3C recommendation for the SPARQL 1.1 Update language. Depending on the subscribed plan, you can have a dedicated or a shared endpoint.

RDF storage
Linked Data are contained in files or provided by web services. To be queried, data must be indexed and the resulting index must be stored somewhere. LinkedData.Center allocates to each customer a private area in which to store the data index. The index capacity depends on the subscribed plan. See our continuity policy for more information about capacity burst handling.

eKB profiling APIs
Through the eKB profiling APIs, customers customize the graph engine behavior and manage API access credentials. See the EKB APIs reference documentation for more information.

eKB ingestion APIs
All ingestion features are driven by a set of open RESTful APIs. See the EKB APIs reference documentation for more information.

Interactive control panel
At cpanel.linkeddata.center you can experiment with our interactive SPARQL control panel. Compatible with all plans.

Knowledge base management
LinkedData.Center implements a semantic system where information is described as a set of statements according to the W3C standard Resource Description Framework (RDF).

Knowledge Exchange Engine Schema (KEES) support
An automated data ingestion process asynchronously (re)indexes your data sources starting from a formal knowledge base description. The LinkedData.Center ingestion API recognizes a knowledge base described with the open KEES language profile. Support for the language profile is modular and depends on your plan. Learn more about Knowledge Base Configuration.

TBox ingestion
The data ingestion API loads vocabularies into dedicated TBox graphs. It also manages forward reasoning by materializing the rules described in the knowledge base configuration. Learn more about Knowledge Base Configuration.

Bulk ingestion
Allows loading of RDF resources serialized in one of the following formats: Turtle, N3, RDF/XML. The API supports ABox graphs with an accrual policy of type kees:BulkIngestion. Learn more about Knowledge Base Configuration.

LOD Laundromat ingestion
A special type of bulk ingestion that allows loading data and metadata from a LOD Laundromat server. The API supports ABox graphs with an accrual policy of type kees:LODLaundromatIngestion. Learn more about Knowledge Base Configuration.

Ingestion from external SPARQL endpoints
Allows importing RDF triples from an existing SPARQL endpoint. The API supports ABox graphs with an accrual policy of type kees:SparqlIngestion. Learn more about Knowledge Base Configuration.

Data provenance detection
Each triple in the data store maintains its provenance information. The data ingestion APIs create a named graph for each indexed resource. Named graph metadata are expressed with the PROV ontology. This allows you to manage data inconsistencies by using your own trust maps and ranking algorithms.
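As an illustration of how the query endpoint and the per-resource named graphs can be combined, the sketch below sends a standard SPARQL 1.1 Protocol request that lists named graphs together with their provenance metadata. The endpoint URL, the credentials, and the exact PROV terms shown (prov:wasDerivedFrom, prov:generatedAtTime) are assumptions made for the example; check your subscription details and the EKB APIs reference documentation for the actual values.

```python
# Minimal sketch: query provenance metadata over the SPARQL 1.1 Protocol.
# The endpoint URL, credentials and PROV properties below are assumptions
# made for illustration only; replace them with your subscription values.
import requests

ENDPOINT = "https://example.linkeddata.center/sparql"   # hypothetical endpoint
AUTH = ("demo", "secret")                               # hypothetical credentials

QUERY = """
PREFIX prov: <http://www.w3.org/ns/prov#>
SELECT DISTINCT ?graph ?source ?generated
WHERE {
  GRAPH ?graph { ?s ?p ?o }                             # one named graph per indexed resource
  OPTIONAL { ?graph prov:wasDerivedFrom ?source }       # assumed PROV terms
  OPTIONAL { ?graph prov:generatedAtTime ?generated }
}
LIMIT 20
"""

response = requests.post(
    ENDPOINT,
    data=QUERY,
    auth=AUTH,
    headers={
        "Content-Type": "application/sparql-query",      # SPARQL 1.1 Protocol
        "Accept": "application/sparql-results+json",
    },
    timeout=30,
)
response.raise_for_status()

# Print each graph IRI with its (optional) source, if the platform records it.
for row in response.json()["results"]["bindings"]:
    print(row["graph"]["value"], row.get("source", {}).get("value", "-"))
```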
Forward reasoning capabilities
More data can be inferred by applying axioms and rules during automated data ingestion. A reasoning window takes place after each learning window, applying rules and materializing new RDF triples.

Elastic scale
LinkedData.Center elastically scales computation and storage resources in order to keep the service operational and responsive. This process is completely transparent to customers.

Schema-less data model support (ontologies)
You can use unlimited data schemas and ontologies; we fully support the RDF model in storing, querying and managing data.

Software updates
All required software and platform configuration is automatically updated by our deploy platform. Enterprise customers can choose to upgrade their platform manually by using our deploy platform.

Ticket support
Your questions will be answered through our ticket system. Premium tickets are served in a priority queue with respect to standard support tickets. Tickets are normally answered within 24 hours. We answer questions related to our services and APIs; we cannot guarantee answers to questions related to standards and languages (e.g. SPARQL queries).

Encrypted SSL connection
The connection between your client and our server is protected by an SSL-encrypted channel.

Continuous deploy
Allows you to access our deploy platform and configure all deploy parameters, giving you full control over deployment automation. You can configure the APIs, the ingestion engine and the graph database on different hardware with specialized scaling policies and HA configurations. The deploy platform will safely align all systems.

Physical or virtual
Our platform runs on any host (virtual or physical) that matches the minimum requirements (Ubuntu Linux 14.04 64-bit, 8 GB RAM).

Cloud hosting
For all white label plans we partner with BT Cloud and Amazon AWS. Our Enterprise platform supports any cloud provider.

Location fluidity
Our platform runs everywhere: on clouds, on VPNs and also on private LANs. Moving the platform from one location to another takes just a few minutes. Our platform tries to relocate automatically to the best location according to data gravity; automatic relocation works only in Europe. Enterprise users are free to decide where to place the platform near their data sources (i.e. to accommodate data gravity), minimizing network load.

Custom domain
You can customize the SPARQL and API endpoint URLs with your own domains.

Customizable graph database engine
The Appliance and Corporate plans allow you to use your preferred RDF storage engine with your own license. Full support for the SPARQL 1.1 Protocol is required.
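For reference, the sketch below shows the kind of standard request that "full SPARQL 1.1 Protocol support" implies for the update endpoint described above: an update sent as an application/sparql-update POST. The endpoint URL, credentials and graph IRI are placeholders invented for the example, not values documented by LinkedData.Center.

```python
# Minimal sketch of a SPARQL 1.1 Update request against the update endpoint.
# URL, credentials and graph IRI are hypothetical placeholders.
import requests

UPDATE_ENDPOINT = "https://example.linkeddata.center/sparql"  # hypothetical endpoint
AUTH = ("demo", "secret")                                     # hypothetical credentials

UPDATE = """
PREFIX dct: <http://purl.org/dc/terms/>
INSERT DATA {
  GRAPH <urn:example:scratch> {                               # hypothetical named graph
    <urn:example:dataset1> dct:title "A manually asserted title" .
  }
}
"""

response = requests.post(
    UPDATE_ENDPOINT,
    data=UPDATE,
    auth=AUTH,
    headers={"Content-Type": "application/sparql-update"},    # SPARQL 1.1 Protocol
    timeout=30,
)
response.raise_for_status()
print("Update accepted:", response.status_code)
```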
Graph DB Legacy APIs
Allows you to access all legacy features of the backing storage engine, such as backward reasoning, transactions, semantic text search, and geographic queries.

Web scouting feasibility report
A report provided by LinkedData.Staff that analyzes the customer's expectations about a web scouting activity.

Web scouting report
A report provided by LinkedData.Staff that reports the results of a web scouting activity.

Knowledge base feasibility report
A report provided by LinkedData.Staff that analyzes the feasibility of a specific knowledge base configuration.

Knowledge base configuration
The result of a professional activity executed by LinkedData.Staff that updates the knowledge base TBoxes and the KEES knowledge base configuration description.

License Check report
A report that summarizes the data licenses used in the knowledge base. The licenses are checked against the user requirements.

Validated requirement ID
A code released by LinkedData.Center's staff stating that the user requirements are feasible. In some services, this code entitles the subscriber to a full money-back guarantee. A validated requirement ID is valid for one month.
Features catalog
System | Feature | Spec. Version | Process
---|---|---|---
agency | Bulk ingestion | 1 | operation |
agency | Data provenance detection | 1 | operation |
agency | Forward reasoning capabilities | 1 | operation |
agency | Ingestion from external SPARQL endpoints | 1 | operation |
agency | Knowledge Exchange Engine Schema (KEES) support | 1 | operation |
agency | LOD Laundromat ingestion | 1 | transition |
agency | TBox ingestion | 1 | operation |
api | Custom domain | 2 | operation |
api | eKB profiling APIs | 2 | operation |
api | Encrypted SSL connection | 2 | operation |
api | SPARQL query endpoint | 3 | operation |
api | SPARQL query engine acceleration | 3 | operation |
api | SPARQL update endpoint | 2 | operation |
cpanel | Interactive control panel | 1 | operation |
deploy platform | Cloud hosting | 1 | operation |
deploy platform | Continuous deploy | 1 | operation |
deploy platform | Elastic scale | 2 | operation |
deploy platform | Location fluidity | 2 | operation |
deploy platform | Physical or virtual | 2 | operation |
deploy platform | Software updates | 2 | transition |
graph engine | Customizable graph database engine | 2 | transition |
graph engine | Graph DB Legacy APIs | 1 | operation |
graph engine | RDF storage | 1 | operation |
graph engine | Schema-less data model support (ontologies) | 1 | operation
help desk | Knowledge base configuration | 2 | transition |
help desk | Knowledge base feasibility report | 1 | transition |
help desk | License Check report | 1 | transition |
help desk | Ticket support | 1 | operation |
help desk | Validated requirement ID | 1 | operation |
help desk | Web scouting feasibility report | 1 | transition |
help desk | Web scouting report | 1 | transition |