Author Archives: Alejandra Garcia Rojas

GeoKnow Research Contributions

Geospatial information extraction and management

One of the main contributions of GeoKnow was to take geospatial data out of GIS systems and make it accessible on the Web. This allows non-experts to access the data and provides self-explanatory spatial information structures that are reachable via standard Web protocols, with support for ad-hoc definable, flexible information structures. GeoKnow developed and improved Sparqlify and TripleGeo, tools that transform geospatial data from several conventional formats into RDF triples compliant with several standards (GeoSPARQL, the Virtuoso vocabulary, etc.). Sparqlify has been tested for mapping the OpenStreetMap (OSM) database to RDF, and TripleGeo supports several geospatial databases (PostgreSQL, Oracle, MySQL, IBM DB2) as well as file formats (ESRI Shapefile, GML, KML).
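To give a concrete idea of the target representation, the following sketch converts a single point feature into GeoSPARQL-style triples with rdflib. This is a minimal illustration only, not TripleGeo's actual code; the example.org namespace and the feature values are invented:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS, URIRef

GEO = Namespace("http://www.opengis.net/ont/geosparql#")
EX = Namespace("http://example.org/geo/")  # hypothetical namespace

def feature_to_rdf(g, feature_id, label, wkt):
    """Emit GeoSPARQL-style triples for one geospatial feature."""
    feature = URIRef(EX[feature_id])
    geometry = URIRef(EX[feature_id + "/geometry"])
    g.add((feature, RDF.type, GEO.Feature))
    g.add((feature, RDFS.label, Literal(label)))
    g.add((feature, GEO.hasGeometry, geometry))
    g.add((geometry, GEO.asWKT, Literal(wkt, datatype=GEO.wktLiteral)))

g = Graph()
feature_to_rdf(g, "leipzig-hbf", "Leipzig Hauptbahnhof", "POINT(12.3820 51.3455)")
print(g.serialize(format="turtle"))
```

Whatever the source format (a database table, a shapefile record, a KML placemark), the output shape is the same: a feature resource linked to a geometry resource carrying a WKT literal.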

As the need for location data within Linked Data applications has increased, RDF triple stores have accordingly been required to support multiple geometry types at Web scale. At the beginning of the GeoKnow project, the Virtuoso RDF QUAD store supported only the point geometry type; during the course of the project it was enhanced to support some 14 additional geometry types (Pointlist, Ring, Box, Linestring, Multilinestring, Polygon, Multipolygon, Collection, Curve, Closedcurve, Curvepolygon and Multicurve) and their associated functions, with near-full compliance with the GeoSPARQL/OGC standards now in place. Support for the GEOS (Geometry Engine – Open Source) library has also been implemented in Virtuoso, further enhancing its geospatial capabilities.
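For instance, a radius-style geospatial filter can be posed against a Virtuoso endpoint using its built-in st_* functions. The sketch below follows the query pattern published for LinkedGeoData, but the endpoint availability and the exact property layout may need adjusting:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://linkedgeodata.org/sparql")
sparql.setQuery("""
PREFIX lgdo: <http://linkedgeodata.org/ontology/>
PREFIX geom: <http://geovocab.org/geometry#>
PREFIX ogc:  <http://www.opengis.net/ont/geosparql#>

SELECT ?s ?g WHERE {
  ?s a lgdo:Amenity ;
     geom:geometry [ ogc:asWKT ?g ] .
  # Virtuoso built-in: geometries within ~0.1 degrees of central Leipzig.
  FILTER (bif:st_intersects(?g, bif:st_point(12.37, 51.34), 0.1))
}
LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["s"]["value"], row["g"]["value"])
```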

The Virtuoso query optimiser has been enhanced to improve geospatial query performance, including parallelisation of query execution. In addition, RDF storage optimisation has been improved by reorganising physical data placement according to geospatial properties, implementing structure-aware RDF storage based on Characteristic Sets. Over the course of the project, annual benchmarking of the Virtuoso QUAD store was performed with the GeoBench tool to demonstrate the improvements over the state of the art in geospatial querying. In the final report, Virtuoso was benchmarked using the newer OSM dataset. The Description of Work envisioned scalability up to 25 billion triples with query times below 0.5 s; the presented results used more than 50 billion triples with an average query execution time of 0.46 s for a power run, so the results exceeded expectations.
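For reference, the power-run figure quoted above is simply the mean wall-clock latency of a query mix executed sequentially; schematically (a sketch with placeholder queries, not the GeoBench tool itself):

```python
import time
from SPARQLWrapper import SPARQLWrapper

def power_run(endpoint_url, queries):
    """Execute a query mix sequentially; return the mean latency in seconds."""
    sparql = SPARQLWrapper(endpoint_url)
    timings = []
    for query in queries:
        sparql.setQuery(query)
        start = time.perf_counter()
        sparql.query()  # results discarded; only latency is measured
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Usage sketch: mean = power_run("http://localhost:8890/sparql", benchmark_queries)
```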

Spatial knowledge aggregation and fusing

The GeoKnow project also aimed to enrich the Web of Data with the geospatial dimension, so it contributed interlinking and fusing methods adapted to spatial information. Two of the first tools achieving this are the Data Extraction and Enrichment Framework (DEER), formerly known as GeoLift, and LIMES. DEER adds the spatial dimension to a dataset by describing locations found in its links or in unstructured data (using an NLP component). LIMES was extended to enable linking of complex resources in geospatial data sets (e.g., polygons, line-strings, etc.). Furthermore, these geo-linking tools were extended to scale by using MapReduce algorithms in a cloud-based architecture. Corresponding benchmarks were created to evaluate these developments (see the benchmarking work mentioned above). One experimental benchmark consisted of linking cities from DBpedia and LinkedGeoData. Initial results suggested that, by using the geospatial dimension and a mean distance measure when linking datasets, perfect linking accuracy could be achieved. This research received the Best Research Paper award at ESWC 2013.
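The principle behind distance-based geo-linking can be illustrated as follows. This is a conceptual sketch only, not the LIMES implementation; the use of the Hausdorff metric and the threshold value are assumptions:

```python
from itertools import product
from shapely import wkt

def link_by_distance(source, target, threshold=0.01):
    """Link resources whose geometries are within `threshold` (in degrees)
    of each other under the Hausdorff distance. Inputs map URI -> WKT."""
    links = []
    for (s, s_wkt), (t, t_wkt) in product(source.items(), target.items()):
        if wkt.loads(s_wkt).hausdorff_distance(wkt.loads(t_wkt)) <= threshold:
            links.append((s, t))
    return links

# Toy example: a DBpedia city against a nearby LinkedGeoData node.
src = {"http://dbpedia.org/resource/Leipzig": "POINT(12.37 51.34)"}
tgt = {"http://linkedgeodata.org/triplify/node123": "POINT(12.373 51.339)"}
print(link_by_distance(src, tgt))  # one link: the points are ~0.003 degrees apart
```

Unlike this quadratic all-pairs loop, LIMES prunes the comparison space and, as extended in GeoKnow, distributes the work via MapReduce, which is what makes linking at Web scale feasible.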
For working directly with geometries, the FAGI framework was created to fuse different RDF representations of geometries into a consistent map. The tool receives as input two datasets and a set of links that interlink entities between them, and produces a new dataset where each pair of linked entities is fused into a single entity. The fusion is performed for each pair of matched properties between two linked entities, according to a selected fusion action, and considers both spatial and non-spatial properties (metadata). Fusing geospatial data can be very time consuming, so improvements were proposed and implemented to optimise several processes (focusing on minimising data transfer and exploiting graph-joining functionality), and a benchmark was designed to evaluate those improvements. FAGI was also extended with additional functionality that supports exploration, manual authoring, several options for batch fusion actions, link discovery, and learning mechanisms for recommending fusion actions and annotation classes for fused entities.
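Per pair of linked entities, a fusion action reduces two representations to one. The sketch below illustrates the idea; the action names and policies here are invented for illustration and do not correspond to FAGI's actual action catalogue:

```python
from shapely import wkt

def fuse_geometry(wkt_a, wkt_b, action="keep-more-detailed"):
    """Fuse two geometry representations of the same real-world entity."""
    a, b = wkt.loads(wkt_a), wkt.loads(wkt_b)
    if action == "keep-more-detailed":
        # Crude proxy for detail: prefer the representation with the longer
        # WKT string (e.g. a polygon outline over a single point).
        return wkt_a if len(wkt_a) >= len(wkt_b) else wkt_b
    if action == "centroid":
        return a.union(b).centroid.wkt
    raise ValueError(f"unknown fusion action: {action}")

def fuse_metadata(props_a, props_b):
    """Union the non-spatial properties, letting dataset A win on conflicts."""
    return {**props_b, **props_a}

print(fuse_geometry(
    "POINT(12.37 51.34)",
    "POLYGON((12.36 51.33, 12.38 51.33, 12.38 51.35, 12.36 51.33))"))
```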

Quality Assessment

GeoKnow also worked on providing tools to improve the quality of existing datasets. The OSM community constantly contributes enrichments and enhancements to OSM maps. GeoKnow contributed to improving the quality of the annotations provided by users by generating classification and clustering models that recommend categories for new entities inserted into OSM. OSMRec, the tool developed for this purpose, recommends OSM categories for newly created geospatial entities based on already existing annotated entities in OSM. Further quality assessment of geospatial data was investigated, first by identifying metrics that can be used to assess the data with respect to aspects such as coverage, surface area and structuredness. These metrics were used to evaluate community-generated datasets, and their outcome informed two software tools for assessing dataset quality. CROCUS produces statistics about the data as three types of Data Cubes: the first addresses accuracy, while the second and third address the completeness and consistency of spatial data. The GeoKnow Quality Evaluator (GQE) reuses CROCUS and implements a set of geospatial data quality metrics (e.g., dataset coverage, coherence, average polygons per class) to compare datasets across these metrics. These tools were used to evaluate three different datasets: LinkedGeoData, NUTS and GeoLinkedData. The results from this evaluation helped to understand the overall structure of the datasets and the variety of the data. Another data assessment tool created in GeoKnow was the RDF Data Validation Tool, which is based on the integrity constraints defined by the RDF Data Cube vocabulary and is focused on statistical data.
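As an example of such a metric, the average number of polygons per class can be computed with a single aggregate query. This is a minimal sketch, not GQE's implementation; the endpoint and the assumption that WKT literals are attached directly to instances are placeholders:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://linkedgeodata.org/sparql")  # placeholder endpoint
sparql.setQuery("""
PREFIX ogc: <http://www.opengis.net/ont/geosparql#>
SELECT ?class (COUNT(?s) AS ?n) WHERE {
  ?s a ?class ;
     ogc:asWKT ?wkt .                       # assumes WKT attached directly
  FILTER (STRSTARTS(STR(?wkt), "POLYGON"))  # count polygon geometries only
}
GROUP BY ?class
""")
sparql.setReturnFormat(JSON)
rows = sparql.query().convert()["results"]["bindings"]
counts = [int(r["n"]["value"]) for r in rows]
print("average polygons per class:", sum(counts) / len(counts) if counts else 0)
```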

Visualisation and Data Curation

The exploration and visualisation of data is a crucial task for end users. GeoKnow aimed at creating maps that are dynamically enriched and adapted to the needs of specific user communities, so modern software frameworks were explored to support the creation of such interfaces. GeoKnow developed reusable JavaScript libraries for interfacing with SPARQL endpoints. These libraries were used, for instance, in Mappify, a tool for easily generating and sharing maps as widgets, and in Facete, a faceted browser for RDF spatial and non-spatial data enhanced with editing support. The editing capabilities essentially define the interaction between an endpoint and the UI (Facete). The RDF Edit eXtension (REX) tool interface was implemented to support two kinds of data editing: one dealing with geospatial data on a map, and the other for editing triples. Furthermore, Lodtenant was developed to support curating RDF data by means of workflows realised as batch processes. After the data curation process, one may need to save changes for later use or to propagate them to other datasets. One of the Unister requirements was the capability to manage and synchronise changes between different versions of private and public interlinked datasets (see http://svn.aksw.org/projects/GeoKnow/Public/D4.3.1_Concept_for_public-private_Co-Evolution.pdf). This requirement led to the deployment of the Co-Evolution Service component, a web application with a REST interface that allows managing dataset changes.
Another component was developed for visualising spatio-temporal data: Exploratory Spatio-Temporal Analysis of Linked Data (ESTA-LD) is a tool for spatio-temporal analysis of linked statistical data that appear at different levels of granularity. Finally, mobile-based visualisation was also covered in GeoKnow. The GEM application performs faceted browsing that fully exploits the Linked Open Data paradigm: it allows browsing any number of SPARQL endpoints and filtering resources based on their type and on constraints on properties, and it leverages GPS positioning to deliver semantic routing.

GeoKnow Generator Workbench

The GeoKnow Generator Workbench provides a unified interface and data access to most of the tools described earlier in this section, and is available online to test. It enables simple access to, and interaction with, the different components needed in the Linked Data lifecycle. The Workbench was designed according to the requirements specification of the GeoKnow use cases. In general, these requirements include:

  • Scalability for working with large data sets
  • Authentication, Authorisation and Role Management as a primary requirement in companies
  • Data Provenance tracking for traceability of changes
  • Job Monitoring and Robustness for applicability in production
  • Modularity and Composability in order to provide flexibility w.r.t. integrating linked data tools

EDF2015 and Linked Data Europe: Big Geospatial Data Workshop

In 2015, the European Data Forum took place in Luxembourg on the 16th and 17th of November. The GeoKnow team had the pleasure of being present at the event with a booth showing GeoKnow results. The conference welcomed over 700 participants from industry, research, policy making, and community initiatives from all over Europe.

Our representatives at EDF2015

The day after the conference we participated in the Linked Data Europe Workshop, which was organized by the IQmulus, GeoKnow, LEO and MELODIES teams. Jens Lehmann of the University of Leipzig and Jonas Schulz from Ontos AG demonstrated our GeoKnow Workbench, talked about the tools in our Linked Data Stack, and gained insights into other projects within the scope of linked geospatial data and Big Data. Overall, 10 projects were presented, and the workshop ended with an informative discussion about Linked Geo Data tools replacing or extending existing GIS solutions.

Thanks to everyone who organized the EDF and the workshop.

GeoKnow Public Datasets

In this blog post we want to present three public datasets that were improved or created in the GeoKnow project.

LinkedGeoData
Size: 177GB zipped turtle file
URL: http://linkedgeodata.org/

LinkedGeoData is the RDF version of OpenStreetMap (OSM), covering geospatial information for the entire planet. As of September 2014, the zipped XML file from OSM contained 36 GB of data, while the size of the zipped LGD files in Turtle format is 177 GB. A detailed description of the dataset can be found in the D1.3.2 Continuous Report on Performance Evaluation. Technically, LinkedGeoData is a set of SQL files, database-to-RDF (RDB2RDF) mappings, and bash scripts. The actual RDF conversion is carried out by the SPARQL-to-SQL rewriter Sparqlify. You can view the Sparqlify mappings for LinkedGeoData here. The maintenance and improvement of the mappings required to transform OSM data to RDF has been ongoing throughout the project. This dataset has been used in several use cases, but especially in all benchmarking tasks within GeoKnow.

Wikimapia
URL: http://wikimapia.org/api/

Wikimapia is a crowdsourced, open-content, collaborative mapping initiative where users can contribute mapping information. This dataset already existed before the project started; however, it was only accessible through Wikimapia's API and provided in XML or JSON formats. Within GeoKnow, we downloaded several sets of geospatial entities from Wikimapia, including both spatial and non-spatial attributes for each entity, and transformed them into RDF data. The process we followed is described next.

We considered a set of cities throughout the world (Athens, London, Leipzig, Berlin, New York) and downloaded the whole content provided by Wikimapia for the geospatial entities included in those geographical areas. Most of these cities were preferred since they are the base cities of several partners in the project, while the remaining two were randomly selected, with the aim of reaching our target of more than 100,000 spatial entities from Wikimapia. Apart from geometries, Wikimapia provides a very rich set of metadata (non-spatial properties) for each entity (e.g. tags and categories describing the geospatial entities, topological relations with nearby entities, comments of the users, etc.).

The aforementioned dumps were transformed into RDF triples in a straightforward way: (a) defining intermediate resources (functioning as blank nodes) where information was organized on more than one level, (b) flattening the information of deeper levels where possible in order to simplify the structure of the dataset, and (c) transforming tags into OWL classes. Specifically, we developed a parsing tool to communicate with the Wikimapia API and construct appropriate N-Triples from the dataset. The tool takes as input a bounding box in the form of WGS84 coordinates (min long, min lat, max long, max lat). We chose five initial bounding boxes, one for each of the cities mentioned above, each defined so that it covered the whole area of the selected city. Each bounding box was then further divided by the tool into a grid of smaller bounding boxes in order to overcome the upper limit on entities returned per area by the Wikimapia API (a sketch of this subdivision follows below).

For each place returned, we transformed all properties into RDF triples. Every tag was assigned an OWL class and an appropriate label corresponding to the textual description in the initial Wikimapia XML file, and each place became an instance of the classes provided by its tags. For the rest of the returned Wikimapia attributes, we created a custom property in a uniform way for each attribute of the returned XML file; these properties point to their literal values. For example, we construct properties about each place's language id, Wikipedia link, URL link, title, description, edit info, location info, global administrative areas, available languages and geometry information. Where these attributes follow a deeper tree structure, we assign the properties to intermediate custom nodes by concatenating the property with the place ID; these nodes function as blank nodes and connect the initial entity with a set of properties and the respective values. This process resulted in an initial geospatial RDF dataset containing, for each entity, the polygon geometry that represents it, along with a wealth of non-spatial properties. The dataset contains 102,019 geospatial entities and 4,629,223 triples.
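Returning to the bounding-box step above, the grid subdivision can be sketched as follows. This is illustrative only; the grid resolution is a hypothetical parameter, and the real tool's handling of the Wikimapia API may differ:

```python
def split_bbox(min_lon, min_lat, max_lon, max_lat, nx, ny):
    """Divide a WGS84 bounding box into an nx-by-ny grid of smaller boxes,
    so that each API request stays under the per-area entity limit."""
    dlon = (max_lon - min_lon) / nx
    dlat = (max_lat - min_lat) / ny
    return [(min_lon + i * dlon, min_lat + j * dlat,
             min_lon + (i + 1) * dlon, min_lat + (j + 1) * dlat)
            for i in range(nx) for j in range(ny)]

# Rough bounding box around Leipzig, split into a 4x4 grid of request cells.
for cell in split_bbox(12.25, 51.28, 12.50, 51.42, 4, 4):
    print("fetch Wikimapia entities for box", cell)
```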
On top of that, in order to create a synthetically interlinked pair of datasets, we split the Wikimapia RDF dataset, duplicating the geometries and dividing them between the two datasets in the following way. For each polygon geometry, we created a point geometry located at the centroid of the polygon and then shifted the point by a random (but bounded) factor. The polygon was left in the first dataset, while the point was transferred to the second dataset. The rest of the properties were distributed between the two datasets as follows: the first dataset consists of metadata containing the main information about the Wikimapia places and edit information about users, timestamps, deletion state and editors; the second dataset consists of metadata concerning basic info, location and language information. This way, the two sub-datasets essentially refer to the same Wikimapia entities, differing only in geometric and metadata information. Each of the two sub-datasets contains 102,019 geospatial entities; the first one contains 1,225,049 triples, while the second one contains 4,633,603 triples.
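The centroid-and-shift step is easy to express with shapely. A sketch follows; the bound on the random shift is an assumption, since the exact factor is not stated here:

```python
import random
from shapely import wkt

def shifted_centroid(polygon_wkt, max_shift=0.001):
    """Return the polygon's centroid as WKT, randomly shifted by at most
    max_shift degrees along each axis (a bounded perturbation)."""
    c = wkt.loads(polygon_wkt).centroid
    return "POINT({} {})".format(c.x + random.uniform(-max_shift, max_shift),
                                 c.y + random.uniform(-max_shift, max_shift))

poly = "POLYGON((23.72 37.97, 23.74 37.97, 23.74 37.99, 23.72 37.97))"
print(shifted_centroid(poly))  # e.g. POINT(23.7337 37.9774)
```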

Seven Greek INSPIRE-compliant data themes of Annex I
URL: http://geodata.gov.gr/sparql/

For the INSPIRE-to-RDF use case, we selected seven data themes from Annex I, which are described below. Although all metadata in geodata.gov.gr is fully compatible with the INSPIRE regulations, the data itself is not, because it has been integrated from several diverse sources that have rarely followed the proper standards. Thus, due to data variety, provenance, and excessive volume, its transformation into INSPIRE-compliant datasets is a time-consuming and demanding task. The first step was the alignment of the data to INSPIRE Annex I. To this end, we utilised the HUMBOLDT Alignment Editor, a powerful open-source tool with a graphical interface and a high-level language for expressing custom alignments. Such a transformation can be used to turn a non-harmonised data source into an INSPIRE-compliant dataset. It only requires a source schema (an .xsd for the local GML file) and a target one (an .xsd implementing an INSPIRE data schema). As soon as the schema mapping was defined, the source GML data was loaded, and the INSPIRE-aligned GML file was produced. The second step was the transformation into RDF. This process was quite straightforward, given a set of suitable XSL stylesheets. We developed all these transformations in XSLT 2.0, implementing one parametrised stylesheet per selected data theme. By default, all geometries were encoded in WKT serialisations according to GeoSPARQL. The produced RDF triples were finally loaded and made available in both the Virtuoso and Parliament RDF stores, at http://geodata.gov.gr/sparql, as a proof of concept.
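Applying one of the per-theme stylesheets can be scripted; the sketch below uses lxml with placeholder file names. Note that lxml implements XSLT 1.0 only, whereas the project's stylesheets were written in XSLT 2.0 and would therefore need an XSLT 2.0 processor such as Saxon:

```python
from lxml import etree

# Placeholder file names: one parametrised stylesheet per selected data theme.
transform = etree.XSLT(etree.parse("geographical_names_to_rdf.xsl"))
aligned = etree.parse("gn_inspire_aligned.gml")  # INSPIRE-aligned GML output
result = transform(aligned)

with open("gn.rdf", "wb") as out:
    out.write(etree.tostring(result, pretty_print=True))
```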

For each INSPIRE data theme, the Greek dataset used, the number of features, and the number of triples produced are:

  • [GN] Geographical names: settlements, towns, and localities in Greece (13,259 features; 304,957 triples)
  • [AU] Administrative units: all Greek municipalities after the most recent restructuring ("Kallikratis") (326 features; 9,454 triples)
  • [AD] Addresses: street addresses in the Kalamaria municipality (10,776 features; 277,838 triples)
  • [CP] Cadastral parcels: the building blocks in Kalamaria, since data from the official Greek Cadastre are not available through geodata.gov.gr (965 features; 13,510 triples)
  • [TN] Transport networks: urban road network in Kalamaria (2,584 features; 59,432 triples)
  • [HY] Hydrography: all rivers and water streams in Greece (4,299 features; 120,372 triples)
  • [PS] Protected sites: all areas of natural preservation in Greece according to the EU Natura 2000 network (419 features; 10,894 triples)

GeoKnow at Semantics 2015, Vienna

Several partners of GeoKnow were present this year at the Semantics 2015 conference.
The day before the conference we organised a workshop about the work done during the last three years in GeoKnow.
At the conference, three papers with a GeoKnow acknowledgement were presented:

  • Integrating custom index extensions into Virtuoso RDF store for E-Commerce applications, presented by Matthias Wauer,
  • An Optimization Approach for Load Balancing in Parallel Link Discovery, presented by Mohamed Ahmed Sherif, and
  • Data Licensing on the Cloud – Empirical Insights and Implications for Linked Data, presented by Ivan Ermilov

And two posters in the posters sessions:

  • The GeoKnow Generator Workbench – An Integrated Tool Supporting the Linked Data Lifecycle for Enterprise Usage, and
  • RDF Editing on the Web

Moreover, the GeoKnow team demonstrated tools and the Workbench at the booth reserved for us. It was a nice experience and a good opportunity to share our work and to see other people's projects.


The 2nd Geospatial Linked Data Workshop

This week the 2nd GeoLD workshop took place ahead of the Semantics 2015 conference in Vienna. Our invited speaker was Franz Knibbe from Geodan in the Netherlands. Franz is currently contributing to the Spatial Data on the Web Working Group, where people from the OGC and the W3C are trying to define the best ways to integrate geospatial data into the Web of Data. His talk was very inspiring: for instance, he described some of the spatial aspects that matter for both organisations, covering data that ranges from observations of the galaxy down to microscopic skin structures. You can discover a little bit more of his talk on SlideShare.
The workshop continued with the presentation of three software tools for exploring geospatial data on the Web. Facete is a faceted browser for geospatial data in RDF format that also allows editing the data. The second tool was ESTA-LD, which can be used for exploring statistical data represented using the Data Cube vocabulary. And DEER, a data extraction and enrichment framework, allows creating pipelines for analysing unstructured data, finding interlinks with other datasets, and extracting knowledge from the linked datasets in order to enrich the data.
We also presented the GeoKnow Generator demo, which integrates the tools presented above and offers enterprise-ready features in order to support the use of such tools in companies. The usefulness of GeoKnow tools was demonstrated with two more presentations. The supply chain management use case showed how integrating spatial data improves information and decision making in the supply chain. Finally, the tourism e-commerce use case showed how the integration of geospatial data is used to improve recommendations for a motive-based user request.


2nd edition of GeoLD Workshop at Semantics Conference

We are preparing the second edition of the Geospatial Linked Data Workshop, which will be held before the Semantics conference on the 15th of September 2015 in Vienna.

For the GeoLD workshop we have invited an active member of the Spatial Data on the Web Working Group, who will present the story so far of this WG. The WG was created a year ago and brought together two major standards bodies, the Open Geospatial Consortium (OGC) and the W3C, with the objective of improving the interoperability and integration of spatial data on the Web.

The rest of the presentations at the workshop are about useful tools for exploring geospatial data on the Web and enriching data with geospatial features. These tools, together with complete use case scenarios, will demonstrate the importance of integrating geospatial data to solve business questions.

You can find a detailed agenda on the workshop website, and you can register for free on the conference website.

The GeoKnow Generator Workbench v1.1.0 Release Announcement

To demonstrate the GeoKnow software tools, we are developing a Workbench that integrates different components to support users in the task of generating Linked Data out of spatial data. Several tools can be used for transforming, authoring, interlinking and visualising spatial data as Linked Data. In this post we want to introduce the public release of the GeoKnow Generator Workbench, which implements most of the user requirements collected at the beginning of the project and integrates Virtuoso, LIMES, TripleGeo, GeoLift, FAGI-gis, Mappify, Facete and Sparqlify.

The Workbench also provides single sign-on functionality, user and role management, and data access control for the different users. The Workbench comprises a front-end and a back-end implementation. The front-end provides GUIs for software components where a REST API is available (LimesService, GeoLiftService and TripleGeoService). Components that provide their own GUI are integrated using containers (FAGI-gis, OntoWiki, Mappify, Sparqlify and the Virtuoso SPARQL query interface). The front-end also provides GUIs for administrative features such as user and role management, data source management and graph management, as well as the Dashboard GUI. The Dashboard gives the user visual feedback on the registered jobs and the status of their executions. The Workbench back-end provides REST interfaces for the management of users, roles, graphs, data sources and batch jobs, for retrieving the system configuration, and for importing RDF data. All system information is stored in the Virtuoso RDF store.

Generator Workbench Architecture.

A more in-depth description of this Workbench can be found in the GeoKnow deliverable D1.4.2 Intermediate Release of the GeoKnow Generator. The GeoKnow software, including this Workbench, is open source and available on GitHub. Easy-to-install, preconfigured versions of all GeoKnow software are available as Debian packages in the Linked Data Stack Debian repository.

GeoKnow Generator 2nd Year Releases

The second year of GeoKnow has passed and we have several new releases to announce. Among the new software tools are:

FAGI-gis 1.1+rev0
FAGI aims to provide data fusion on the geometries of linked entities. This latest version provides several optimisations that increase scalability and efficiency. It also provides a map-based interface that facilitates fusion actions through visualisation and filtering of linked entities.
RDF Data Cube Validation Tool 0.0.1
The validation tool aims to ensure the quality of statistical datasets. It is based primarily on the integrity constraints defined by the RDF Data Cube vocabulary, and it can be used to detect violations of the integrity constraints, identify violating resources, and fix detected issues. Furthermore, to ensure the proper use of vocabularies other than the RDF Data Cube vocabulary, it relies on RDFUnit. It can be configured to work with any SPARQL endpoint, which needs to be writeable in order to perform fix operations; if this is not the case, the user is provided with the SPARQL Update query that implements the fix, so that it can be executed manually. The main purpose of the tool within the GeoKnow project is to ensure the quality of input data that is to be processed and visualised with ESTA-LD.
spring-batch-admin-geoknow 0.0.1
spring-batch-admin-geoknow is the first version of the batch processing component that functions as the back-end of the Workbench.

Besides the brand-new components, there are also new releases of existing components, available as Debian packages:

virtuoso-opensource 7.1.0
Virtuoso 7.1.0 includes improvements in the Engine (SQL Relational Tables and RDF Property/Predicate Graphs); Geo-Spatial support; SPARQL compiler; Jena and Sesame provider performance; JDBC Driver; Conductor CA root certificate management; WebDAV; and the Faceted Browser.
linkedgeodata 0.4.2
The LinkedGeoData package contains scripts and mapping files for converting spatial data from non-RDF (currently relational) sources to RDF. OpenStreetMap is so far the best covered data source. Recently, initial support for GADM and Natural Earth was added.

  • Added an alternative LGD load script which improves throughput by inserting data into a different schema first, followed by a conversion step.
  • Optimised the export scripts by using pbzip2, a parallel version of bzip2.
  • Added rdfs:isDefinedBy triples providing licence information for each resource.
facete2-tomcat7 0.0.1
Facete2 is a web application for exploring (spatial) data in SPARQL endpoints. It features faceted browsing, auto-detection of relations to spatial data, export, and customisation of which data to show.

  • Context menus are now available in the result view, enabling one to conveniently visit resources in other browser tabs, create facet constraints from selected items and copy values to the clipboard.
  • Improved Facete's auto-detection for cases when counting facets is infeasible because of the size of the data.
  • Suggestions of resources related to the facet selection that can be shown on the map are now sortable by the length of the corresponding property path.
facete2-tomcat-common 0.0.1
This is a helper package, mainly responsible for the Facete database setup. There were no significant changes.
sparqlify-tomcat7 0.6.13
This package provides a web admin interface for Sparqlify. The system supports running different mappings simultaneously under different context paths. Minor user interface improvements.
sparqlify-tomcat-common 0.6.13
This is a helper package, mainly responsible for the Sparqlify database setup. There were no significant changes.
sparqlify-cli 0.6.13
This package provides a command line interface for Sparqlify. Sparqlify is an advanced, scalable SPARQL-to-SQL rewriter and the main engine for the LinkedGeoData project.

  • Fixed some bugs that caused the generation of invalid SQL.
  • Added improvements for aggregate functions that make Sparqlify work with Facete.
  • Added initial support for the Oracle 11g database.
limes-service 0.5
The limes-service was updated to the latest LIMES library. The main enhancement this year was refactoring the service to provide a RESTful interface.
geoknow-generator-ui 1.1.0
The first public release of the GeoKnow Generator Workbench extends the initial prototype by including user and role management, graph access control management, and process monitoring within a dashboard.
deer 0.0.1
GeoLift has been renamed to DEER. The functionality provided in GeoLift has been generalised to support not only geospatial data but structured data in general.

If you need help installing or using the components available as Debian packages in the Linked Data Stack, do not hesitate to join and ask in the linkeddatastack Google group.

The Linked Data Stack


The Linked Data Stack aims to simplify the deployment and distribution of tools that support the Linked Data lifecycle. Moreover, it eases the information flow between components to enhance the end-user experience while harmonising the look and feel. It comprises a number of tools for managing the lifecycle of Linked Data. At the moment it consists of two software repositories for distributing Linked Data software components to developer communities: 1) a Debian repository that provides installers for the components, so that users can install them directly on Linux servers using the standard packaging tools; and 2) a Maven repository for managing the binary software components used for developing, deploying and provisioning.

The Linked Data Stack is a result of the efforts of the LOD2 EU project, and the GeoKnow team has now officially become the manager of the Linked Data Stack. This announcement took place at the 10th International Conference on Semantic Systems, held on the 4th and 5th of September 2014 in Leipzig.

If you are a Linked Data user, visit the Linked Data Stack site, where you can find instructions on how to install and use the demonstrations, as well as documentation for installing specific components. If you want to contribute your software to the stack, you can also find guidelines on how to contribute.

GeoKnow at the Open Data Workshop, London

A couple of weeks ago the Open Data on the Web workshop took place in London at the Google Campus. The purpose of the workshop was to discuss how the potential of open data can be realized.

In attendance were many open and linked data enthusiasts keen to discuss the issues and challenges concerning publishing, linking and, most importantly, using open data.

The discussions concerning open data and how to make it more accessible to all, whether by standardizing formats or by keeping and using the current ones, were intense and fascinating. The many presentations concerning the efforts of various governments to open up their data, as well as projects seeking to create practical ways of publishing or using data, were equally engaging.

GeoKnow was represented by Ontos, who presented the challenges, motivations, and goals of the GeoKnow project. Feedback on the motivation for the project was positive, and interest was expressed in the GeoKnow Generator as well as in a possible collaboration on one of the use cases.

A further interesting development was the interest expressed by INSPIRE stakeholders in exposing INSPIRE data as Linked Open Data. This is a timely development for GeoKnow, since we will be providing tools for exposing INSPIRE data as LOD, with the aim of re-purposing and leveraging the EU's growing INSPIRE geospatial data in the LOD cloud.

A big thank you to the World Wide Web Consortium (W3C), the Open Data Institute, and the Open Knowledge Foundation for making the event happen. Another big thanks goes out to Google for hosting the event and providing the much appreciated sustenance and coffee over the two days!