NXP Wins the EU Linked Data Award

We are pleased to report that the Product Marketing group at NXP Semiconductors has been awarded first prize in both the Dutch Best Linked Data Application of 2015 contest and the 2015 1st European Linked Data Award contest. Both awards were given in the category Linked Enterprise Data, for demonstrating how, by applying the linked data paradigm to its marketing and product data, NXP increased that data's value, facilitating and accelerating a variety of sales, publication, and reporting processes.

NXP Enterprise Data Hub

The jury of the international European Linked Data Contest (ELDC), with members from over 15 European countries, selected the NXP Enterprise Data Hub project as the winner from 53 submissions from 22 countries:

The NXP Enterprise Data Hub integrates data and metadata from several enterprise systems, to provide a single, up-to-date ‘canonical’ source of information that is easy to use and that data consumers—be they human or machine—can trust. NXP Semiconductors have taken the Linked Data principles to heart and successfully rolled out a Linked Data solution across the whole company. The jury finds that this is worthy of being honored by the ELDC 2015.

Continue reading...

Looking for an RDF Patch Format

As we move towards “Linked Data Platform”[0] support, we need to provide a mechanism to selectively modify repository content which represents containers and collections. Yes, in principle, the SPARQL and Graph Store protocols provide such facilities, but there is a widespread reluctance to accept their suitability due to one common issue: the restrictions placed on blank node designators. Any model that involves blank nodes requires special processing in order to specify the statements to modify. We note that there is no effective standard method for this, and have considered whether the “Linked Data Patch Format” could serve the purpose.

Based on the information to hand, despite both the extensive analysis which has been devoted to the abstract issues, and the purposeful deliberation leading up to the LDPatch proposal itself, we conclude that the suggested mechanism is inappropriate for inclusion in an RDF data management service. First, it does not appear possible for it to fulfill its relative performance guarantees; second, it requires additional state and process-control management from the client; and, finally, it encumbers the server implementation and access protocol with elements which, given the other factors, serve no useful purpose.
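To see why blank node designators are the sticking point, consider a minimal sketch in Python, with triples modeled as plain tuples (the labels and vocabulary here are illustrative only): two serializations of the same graph are isomorphic, yet their literal triple sets differ, because blank node labels are scoped to a document and carry no identity across it. A patch that designates a blank node by its label therefore cannot reliably identify the statement to modify.

```python
# Two serializations of the same graph: a container blank node with one
# member. The labels (_:b0 vs. _:x7) are arbitrary per-document names.
graph_a = {
    ("_:b0", "rdf:type", "ldp:Container"),
    ("_:b0", "ldp:contains", "<http://example.org/r1>"),
}
graph_b = {
    ("_:x7", "rdf:type", "ldp:Container"),
    ("_:x7", "ldp:contains", "<http://example.org/r1>"),
}

# A naive patch that deletes a statement by blank node label succeeds
# against one serialization and silently misses against the other.
to_delete = ("_:b0", "ldp:contains", "<http://example.org/r1>")
print(to_delete in graph_a)  # True
print(to_delete in graph_b)  # False

def relabel(graph, mapping):
    """Rewrite blank node labels in a triple set according to `mapping`."""
    return {tuple(mapping.get(term, term) for term in triple)
            for triple in graph}

# Yet the two graphs are isomorphic under a blank node relabeling,
# so they denote the same content.
print(relabel(graph_a, {"_:b0": "_:x7"}) == graph_b)  # True
```

This is the gap the various proposals try to bridge: patch instructions operate on concrete labels, while graph identity is only defined up to relabeling.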

In order to decide how to proceed, we consider the deliberations which led to the proposal and whether alternatives exist. To begin, there are several perspectives:

  • the original essay from Tim Berners-Lee and Dan Connoly [1],
  • the notes from Jeremy Carroll [2], concerning graph matching and isomorphism,
  • a much longer exploration of the complexities introduced by blank nodes from Axel Polleres [3],
  • a talk by Patrick Hayes which includes alternative notions of blank node semantics and, in particular, handles the salient issue neglected by Polleres: scope,
  • a note about a “Linked Data Patch Format” from the Linked Data Platform Working Group to cover a proposal which failed to achieve recommendation status [4], and
  • a shorter note from one of that note’s authors, Alexandre Bertails, which seeks to justify the “Patch Format” approach [5].

Despite the repeated analyses, none yields a standard approach to the problem. All rely on a misapprehension of the nature of “blank nodes” in a “physical symbol system”, and so fabricate a problem for which they then fail to find a solution, when neither need exist…

Continue reading...

Repository Revision History

We have now enabled repository revisioning for all Dydra evaluation users on the cloud service. Customers with dedicated and on-site servers will receive this feature in their next scheduled system upgrade.

Repository revisioning means that your Dydra repositories maintain a revision history that permits read-only access to snapshots of the repository's previous contents. Every transaction committed on a repository (via a SPARQL request or a file import job) creates a new revision snapshot, enabling you to download and query both the current and any previous state of the repository. You can also download the difference between successive revisions to obtain the sets of statements removed and added in the respective transaction…
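As a sketch of what a revision diff contains, suppose each snapshot is modeled as a set of statements (the triples and revision names below are illustrative, not the Dydra API): the diff between two successive revisions is simply the pair of set differences.

```python
# Hypothetical snapshots of a repository at two successive revisions,
# each modeled as a set of (subject, predicate, object) statements.
rev_1 = {
    ("ex:part1", "ex:status", '"preview"'),
    ("ex:part1", "ex:price", '"1.20"'),
}
rev_2 = {
    ("ex:part1", "ex:status", '"released"'),
    ("ex:part1", "ex:price", '"1.20"'),
}

# Statements removed and added by the transaction that produced rev_2:
removed = rev_1 - rev_2   # present before the transaction, absent after
added   = rev_2 - rev_1   # absent before the transaction, present after

print(sorted(removed))  # [('ex:part1', 'ex:status', '"preview"')]
print(sorted(added))    # [('ex:part1', 'ex:status', '"released"')]
```

Applying `added` and `removed` to the earlier snapshot reproduces the later one, which is what makes the downloadable diffs sufficient to replay or audit a transaction.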

Continue reading...