E-Agriculture

Question 2: What are the prospects for interoperability in the future?

"Interoperabilty"1 is a feature both of data sets and of information services that gives access to data sets. When a data set or a service is interoperable it means that data coming from it can be easily "operated" also by other systems. The easier it is for other systems to retrieve, process, re-use and re-package data from a source, and the less coordination and tweaking of tools is required to achieve this, the more interoperable that source is.

Interoperability ensures that distributed data can be exchanged and re-used by and between partners without the need to centralize data or standardise software.
Some examples of scenarios where data sets need to be interoperable:

   transferring data from one repository to another;
   harmonizing different data and metadata sets;
   aggregating different data and metadata sets;
   building virtual research environments;
   creating documents from distributed data sets;
   reasoning over distributed data sets;
   creating new information services using distributed data sets.


There are current examples of how an interesting degree of internal interoperability is achieved through centralized systems. Facebook and Google are the largest examples of centralized systems that allow easy sharing of data and a very good level of inter-operation within their own services. This is due to the use of uniform environments (software and database schemas) that can easily make physically distributed information repositories interoperable, but only within the limits of that environment. What is interesting is that centralized services like Google, Facebook and other social networks are adopting interoperable technologies in order to expose part of their data to other applications, because the huge range of social platforms is distributed and has to meet users' needs for easier access to information across different platforms.

Since there are social, political and practical reasons why centralization of repositories or homogenization of software and working tools will not happen, a higher degree of standardization and generalization ("abstraction") is needed to make data sets interoperable across systems.

The alternative to centralizing data or homogenizing working environments is the development of a set of standards, protocols and tools that make distributed data sets interoperable and allow sharing among heterogeneous and un-coordinated systems ("loose coupling").

This has been addressed by the W3C with the concept of the "semantic web". The semantic web pursues the goal of global interoperability of data on the WWW. The concept was proposed more than 10 years ago. Since then the W3C has developed a range of standards to achieve this goal, specifically semantic description languages (RDF, OWL), which should get data out of isolated database silos and add structure to text that was born unstructured. Interoperability is achieved when machines understand the meaning of distributed data and are therefore able to process it in the correct way.
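To make the RDF model concrete, the sketch below serializes two statements about a hypothetical agricultural data set as subject-predicate-object triples in N-Triples syntax. The example.org URIs are invented placeholders; the two predicates are real Dublin Core terms (dcterms:title and dcterms:subject).

```python
# Minimal sketch of RDF's subject-predicate-object ("triple") model,
# serialized in N-Triples syntax. All example.org URIs are hypothetical.

DATASET = "http://example.org/dataset/maize-yields"

triples = [
    (DATASET, "http://purl.org/dc/terms/title", "Maize yield survey 2011"),
    (DATASET, "http://purl.org/dc/terms/subject",
     "http://example.org/vocab/maize"),
]

def to_ntriples(subject, predicate, obj):
    """Render one triple: URIs go in angle brackets, literals in quotes."""
    rendered = f"<{obj}>" if obj.startswith("http://") else f'"{obj}"'
    return f"<{subject}> <{predicate}> {rendered} ."

lines = [to_ntriples(*t) for t in triples]
print("\n".join(lines))
```

Because every statement is reduced to the same three-part shape with globally unique identifiers, any consumer that understands the vocabulary can merge these triples with triples from other sources without coordination.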

 


1 Interoperability http://en.wikipedia.org/wiki/Interoperability 

Thomas Baker
Dublin Core Metadata Initiative, United States of America

I recognize, with Diane, that part of the problem has indeed been the use of technologies pushed by IT departments because they lie within their comfort zones, which typically means XML and SQL.  (It should however be added that not all data needs to be exposed as linked data, and that managing data in XML or SQL may in many circumstances be the most practical solution.)

That being the case, the question becomes: how can this or that SQL or XML database be tweaked to expose linked data -- perhaps only an extract of the full data, or perhaps on the fly? Data can be managed in XML or SQL and exposed as RDF. If a given XML or SQL database was originally designed with linked data in mind, or if it happens to map cleanly to linked-data structures, such transformations will be that much easier to implement.
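One way to picture "managed in SQL, exposed as RDF" is a thin mapping layer over a relational table. The sketch below is illustrative only: the table, URI patterns and predicate names are all invented. It reads rows from an in-memory SQLite table and emits one triple per mapped column.

```python
import sqlite3

# Hypothetical relational data that we want to expose as linked data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE crop (id INTEGER, name TEXT, yield_kg_ha REAL)")
db.execute("INSERT INTO crop VALUES (1, 'maize', 2500.0)")

BASE = "http://example.org/crop/"          # invented URI pattern
PREDICATES = {                             # column -> predicate mapping
    "name": "http://example.org/vocab/name",
    "yield_kg_ha": "http://example.org/vocab/yieldKgPerHa",
}

def row_to_triples(row_id, name, yield_kg_ha):
    """Turn one relational row into N-Triples statements."""
    subject = f"<{BASE}{row_id}>"
    return [
        f'{subject} <{PREDICATES["name"]}> "{name}" .',
        f'{subject} <{PREDICATES["yield_kg_ha"]}> "{yield_kg_ha}" .',
    ]

triples = []
for row in db.execute("SELECT id, name, yield_kg_ha FROM crop"):
    triples.extend(row_to_triples(*row))
print("\n".join(triples))
```

The point of the sketch is that the SQL database stays exactly as it is; only the exposed view of the data changes, which is why such an extract can also be generated on demand.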

The VIVO project has a lot to say about this, as much of their data is extracted and converted from the wide range of databases and formats used on their campuses. In today's world, the (growing) diversity of data formats is a given. It is precisely because the linked data approach does not require data to be managed in a particular format that it stands a chance of succeeding.

Burley Zhong Wang
School of Information Science and Technology, Sun Yat-sen University, China

Hi Diane

Since last year, I have started to become aware of the acceptance and application of Open Access as well as the Web of Linked Data (WLD) in China. When I looked into how many institutes, organizations or projects have adopted these two approaches in their applications, I found the number is very limited.

Yes, perhaps people are more used to the tools they already know well, and there are many usable tools for developing a CMS application or a simple online query system, which are enough for users to find the information they want.

But to me WLD or LOD is not difficult; perhaps it requires more patience than technical skill, and an LOD demonstration seldom looks attractive (see the samples from the W3C).

 

When data is used in a model other than its original one, it often cannot be used as is; some kind of transformation is needed. Such transformations can be divided into three kinds: formal transformation, semantic transformation, and a combination of both. Formal transformation concerns data type, precision, format, etc., and all of these concerns can be handled by LOD standards. Semantic transformation concerns the conceptual and logical aspects of data and cannot be handled by LOD. Semantic transformation cannot even be fully done by ontological methods, because logical inference can only handle formal semantic problems; most real problems cannot be formalized completely, and commonsense computation must be introduced to solve them. Today's commonsense computation is represented by Watson-like machines and is still far from mature. Commonsense computation will remain the bottleneck of interoperability in the coming years.
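A "formal" transformation in the sense above (data type, precision, format) is mechanical once both representations are known. A minimal sketch, assuming one source records US-style dates and yields in pounds per acre while the target expects ISO 8601 dates and kilograms per hectare (the field names are invented for illustration):

```python
from datetime import datetime

# 1 lb/acre = 0.453592 kg / 0.404686 ha ≈ 1.12085 kg/ha
LB_PER_ACRE_TO_KG_PER_HA = 1.12085

def formal_transform(record):
    """Convert date format and units; the meaning of the fields is untouched."""
    date_iso = datetime.strptime(record["date"], "%m/%d/%Y").strftime("%Y-%m-%d")
    yield_kg_ha = round(record["yield_lb_acre"] * LB_PER_ACRE_TO_KG_PER_HA, 1)
    return {"date": date_iso, "yield_kg_ha": yield_kg_ha}

out = formal_transform({"date": "12/31/2011", "yield_lb_acre": 2000.0})
print(out)
```

A semantic transformation, by contrast, would have to decide whether "yield" in one source even denotes the same concept as "yield" in another, which is exactly the part no format conversion can settle.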

Sylvester Dickson Baguma
National Agricultural Research Organisation (NARO), Uganda

Prospects for interoperability in the future are promising. There are already many organisations sharing data, especially business entities: they make some of their data accessible for use by other organizations, and vice versa. For this to truly work, it requires changing the negative mindset regarding making existing information and knowledge available and sharing it. Interoperability will reduce duplication, cut the time needed to generate new information and, where possible, knowledge, and improve the efficiency of agricultural research systems.


Prospects for interoperability are very high. However, they depend on agreement on international standards for data formats, designs and tools. Agreements to share information located in institutional repositories will also be key. We need to approach this issue at the global, regional, national and institutional levels. We also need to be aware of where the technology is moving in relation to web design.
 

My name is Andriamparany, and I work at the Ministry of Agriculture in Madagascar.
I see that the previous contributions have raised many questions, many of which go beyond information exchange but must be reflected in the future trends that will help improve the documentation and sharing of information. Some of these issues are: "another continent... another dream"; lack of knowledge about computers and Web 2.0; "my first interest"; cultural heritage; the lack of packaged knowledge products that serve farmers' interests; the lack of incentives among researchers, in particular the absence in many developing countries of a clear culture of sharing; how to document and produce tangible results; and many other issues that could remain obstacles to information sharing.
Now the question is how we can come up with suggestions that would facilitate better recognition of research efforts and help break the vicious circle in integrating scientific and indigenous knowledge, as well as mechanisms that facilitate more participatory, farmer-centred approaches leading to suitable formats for publishing and sharing information.