Categories: GIS, INSPIRE, neogeography, OGC, Thoughts

Neogeography.. it was just a dream..

Imagine waking up in the beautiful Portuguese city of Porto and finding out that the past two years of your life were a dream… All that talk of GeoRSS, map mash-ups, KML, user-generated My Maps, the GeoWeb and Paris Hilton was all part of a dream.

Well, it felt a bit like that on the first day of the annual European Commission GI and GIS Workshop. Over 200 GIS users from public sector organisations, and a few from the private sector, are meeting to discuss the impact of the INSPIRE directive now that it has been passed by the European Parliament.

ECGIS workshop

During this first day the Web 2.0 buzzwords of neogeography were notable by their absence.

Now, I am actually less disappointed than I might have been; let me explain why…

INSPIRE is, contrary to all of the fuss drummed up by some in the UK last year, quite tightly focused on the supply of harmonised environmental geospatial data to the institutions of the European Commission by public sector organisations in the member states. There is no “public” interface here, and citizens are not seen as major customers for INSPIRE services.

As such you can think of this as a complex back-office system for European government, as much an enterprise GIS for Brussels as a Spatial Data Infrastructure. So key to success will be a clear definition of requirements and a well-specified system design.

Now here is the rub: despite the fact that much of the INSPIRE directive is not expected to be implemented until at least 2010, it is being designed now and must use well-specified and recognised standards, things like the ISO 19100 series and the standards developed by the Open Geospatial Consortium.

It’s not difficult to appreciate the problem: REST-based interfaces, KML, GeoJSON, GeoRSS and the like might actually be the best technologies to use today, and would be the tools of choice for many; however, like many other government IT projects, INSPIRE needs to follow the low-risk route of SOAP, WSDL, WMS, WFS and so on.

So we may find that organisations will use OGC-style interfaces to communicate with other public sector organisations and the Commission, while using lighter-weight technologies to publish information to their citizens. This is no bad thing!
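
To make that contrast a little more concrete, here is a rough sketch of what the two styles look like to a developer. The endpoints, layer name and file name are made up for illustration; they are not real INSPIRE services.

```python
# A minimal sketch contrasting the two styles of interface discussed above.
# Both endpoint URLs and the layer name are hypothetical placeholders.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# OGC-style: a WFS 1.1.0 GetFeature request built from the standard key-value
# parameters (the SOAP/WSDL route wraps a similar payload in an XML envelope).
wfs_query = urlencode({
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "ps:ProtectedSite",
    "maxFeatures": "10",
})
gml = urlopen("https://sdi.example.org/wfs?" + wfs_query).read()
print(len(gml), "bytes of GML returned")

# Lighter-weight style: a plain URL returning GeoJSON, parsed directly.
collection = json.load(urlopen("https://maps.example.org/sites.geojson"))
print(len(collection["features"]), "features in the GeoJSON document")
```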

I am, however, disappointed by the continued focus on metadata-driven catalogue services as the primary mechanism for finding geospatial data. I don’t believe this will work: nobody likes creating metadata, and catalogue services are unproven.

INSPIRE needs GeoSearch!

Written and Submitted from the Le Meridien Hotel, Porto using the in-room broadband network.

9 replies on “Neogeography.. it was just a dream..”

Are SOAP and WSDL really low risk? Maybe it’s just the circles I move in, but I can’t remember the last time I heard about a genuinely successful implementation of those technologies.

Simon,

Take your point. I think SOAP at least works; WSDL, I’m not sure it is really implemented that much. Low risk in this context means well documented and “mainstream”, whatever that really means.

Any organization completely dependent on big vendors doesn’t have much choice between SOAP and REST, but we’re going to see REST tools that make these folks feel comfortable very soon. Still, it’s a twisted world in which SOAP is less risky than the good old HTTP connecting every European website.

Thanks, Ed, for this useful blog.
One comment though: I am a bit worried about all the words containing “neo”, as they remind me of the worst of what human beings can do (neo-nazism, neo-republicanism, …). So as a community, could we find another name 😉
Otherwise, could you expand a little on your ideas concerning metadata-driven catalogue services and the services registry?
Like you, I really don’t believe in them and would like to get your opinion on that.

Guillaume,

Take your point about “neo”: the term “new” stolen by conservatives; strange when you think about it!

My concerns around metadata are partly to do with the immature nature of the technology, in particular the OGC CSW standard and the ongoing debate about encoding standards (ebRIM argument, anybody?).
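
For readers who have not met one, here is a rough sketch of a CSW 2.0.2 GetRecords query expressed as key-value parameters; the catalogue URL and the search term are invented, and real servers differ in the details they accept.

```python
# A rough sketch of a CSW 2.0.2 GetRecords request over plain HTTP.
# The catalogue URL and the CQL constraint are illustrative only.
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "service": "CSW",
    "version": "2.0.2",
    "request": "GetRecords",
    "typeNames": "csw:Record",
    "resultType": "results",
    "elementSetName": "summary",
    "constraintLanguage": "CQL_TEXT",
    "constraint_language_version": "1.1.0",
    "constraint": "AnyText LIKE '%flood%'",  # a free-text-ish filter over the metadata
})
response = urlopen("https://catalogue.example.org/csw?" + params).read()
print(response[:200])  # the reply is an XML document of metadata records
```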

A bigger concern, however, is to do with psychology: the process of creating metadata by hand seems prone to failure… when did you last enter any metadata in Microsoft Word, for example? Why should we think people creating geospatial data will act any differently, when potentially you may need to enter 100+ fields of information? To be fair, automation may help here…
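
As a rough sketch of what that automation might look like, assuming the GDAL/OGR Python bindings and an imaginary shapefile, a handful of the most useful fields can be read straight from the data itself:

```python
# A sketch of automatically generated metadata using the GDAL/OGR bindings.
# The shapefile path is hypothetical; any OGR-readable source would do.
from osgeo import ogr

dataset = ogr.Open("protected_sites.shp")
layer = dataset.GetLayer(0)
srs = layer.GetSpatialRef()           # may be None if no .prj file is present
defn = layer.GetLayerDefn()

record = {
    "layer_name": layer.GetName(),
    "feature_count": layer.GetFeatureCount(),
    "bounding_box": layer.GetExtent(),   # (minx, maxx, miny, maxy)
    "crs": srs.ExportToWkt() if srs else None,
    "attributes": [defn.GetFieldDefn(i).GetName() for i in range(defn.GetFieldCount())],
}
print(record)   # a title, abstract and contact still need a human, but the rest comes for free
```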

In the early days of the internet people attempted to keep catalogues of interesting web pages, but these all soon failed.

We find information now using sophisticated search algorithms which understand, to a greater or lesser extent, the context and semantic meaning of content by analysing the content itself.

This I believe would be a more pragmatic solution to the problem.

Ahh, metadata. Funnily enough, I violently agree with you and disagree (in a more gentle way) at the same time.

Geosearch will only work if there is an input: a sample set to run stochastic methods over. More to the point is the question of whether you can find anything worth using…

And in this sense, we need much higher quality metadata that allows us to use the things we find. And as you say, it isn’t going to be entered using current procedures or technologies. It has to be automatically generated, in the same way that, for example, eBay is a metadata catalogue driving operational functionality.

Try selling metadata generation to a vendor of used bicycles… no, just let them use it as part of effective systems.

So, my heresy for the month is that metadata is an excuse not to achieve any effective outcomes, and geosearch, if popularised _before_ content creation drivers emerge, will only exacerbate this procrastination.

Great post, Rob; it should at least generate a bit more debate. I still think that the solution will come from basic automated metadata production, and search.

This is not really a technology issue, in the same way that SDI development is not really about technology but about organisational dynamics.

The “complexity” of contemporary metadata is a side-effect of being able to assign better semantic context to the information being described. There may be many possible fields, but they are used only as necessary to provide that context for the end-user to understand the resource. We see full-text searches on harvested files functioning in search engines, but they still lack the precision or field-level semantics to help us identify location, temporal, and feature content information. Sure, search engines will find us thousands of possible hits (mostly erroneous) that we must page through as humans to decide if they are relevant to our needs. Even Flickr encourages and facilitates users in their geo-tagging and time-tagging of content: [gasp] non-automated collection of metadata.

The pervasive use of XML instructs us that there is a growing interest in structured, meaningful information that cannot be effectively imparted by unstructured HTML or documents and inference engines. Even the Google search appliance can be tuned to recognize and apply structured patterns in support of more precise query (spatial, temporal, and other fields).

Significant metadata on geospatial data sets can already be largely derived through automated tools. Locational extent and feature type names and properties can be extracted without user intervention from geospatial databases. By themselves (along with a link for access) these metadata are very helpful in finding geospatial data or services, but they are still limited until the publisher adds some human storytelling, definition, context, and contact information to round them out. A balance between a handful of important structured pieces of information and full-text search of content is what is needed to populate and satisfy search/discovery requirements, whether using a search engine or a formalized catalog service.
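
As a toy illustration of that balance, with invented records and field names rather than any formal ISO 19115 profile, a few structured fields plus free text already support a useful search:

```python
# A toy discovery index: a handful of structured, largely auto-derived fields
# (bbox, feature type) plus human-written free text, searched together.
records = [
    {
        "title": "Protected sites, northern Portugal",           # human-written
        "abstract": "Natura 2000 site boundaries near Porto.",   # human-written
        "bbox": (-9.0, 40.8, -7.8, 42.2),                        # auto-derived (W, S, E, N)
        "feature_type": "ProtectedSite",                         # auto-derived
    },
    {
        "title": "River flood extents 2001",
        "abstract": "Modelled 1-in-100-year flood outlines.",
        "bbox": (-2.0, 50.5, 1.5, 53.0),
        "feature_type": "FloodExtent",
    },
]

def search(text, bbox):
    """Return titles whose free text mentions `text` and whose extent overlaps `bbox`."""
    w, s, e, n = bbox
    hits = []
    for r in records:
        rw, rs, re_, rn = r["bbox"]
        overlaps = rw <= e and re_ >= w and rs <= n and rn >= s
        mentions = text.lower() in (r["title"] + " " + r["abstract"]).lower()
        if overlaps and mentions:
            hits.append(r["title"])
    return hits

print(search("flood", (-3.0, 50.0, 2.0, 54.0)))  # -> ['River flood extents 2001']
```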
