E-Agriculture

Question 2: What are the prospects for interoperability in the future?

"Interoperabilty"1 is a feature both of data sets and of information services that gives access to data sets. When a data set or a service is interoperable it means that data coming from it can be easily "operated" also by other systems. The easier it is for other systems to retrieve, process, re-use and re-package data from a source, and the less coordination and tweaking of tools is required to achieve this, the more interoperable that source is.

Interoperability ensures that distributed data can be exchanged and re-used by and between partners without the need to centralize data or standardise software.
Some examples of scenarios where data sets need to be interoperable:

   transferring data from one repository to another;
   harmonizing different data and metadata sets (a small sketch of this case follows the list);
   aggregating different data and metadata sets;
   building virtual research environments;
   creating documents from distributed data sets;
   reasoning on distributed data sets;
   creating new information services using distributed data sets.
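
Harmonizing and aggregating metadata sets, for example, usually comes down to mapping each source's local field names onto a shared vocabulary. Here is a minimal sketch in Python of that mapping; the repositories, records and field names are invented for illustration:

    # Two records describing similar resources, but with different local schemas.
    record_from_repo_a = {"title": "Rice irrigation study", "creator": "FAO", "year": "2010"}
    record_from_repo_b = {"doc_title": "Maize pest survey", "author": "CGIAR", "published": "2009"}

    # Mapping from each repository's local field names to a shared set of fields.
    FIELD_MAPS = {
        "repo_a": {"title": "title", "creator": "creator", "year": "date"},
        "repo_b": {"doc_title": "title", "author": "creator", "published": "date"},
    }

    def harmonize(record, source):
        """Rename a record's local fields to the shared vocabulary."""
        mapping = FIELD_MAPS[source]
        return {mapping[key]: value for key, value in record.items() if key in mapping}

    # Aggregation is then just collecting the harmonized records in one place.
    aggregated = [harmonize(record_from_repo_a, "repo_a"),
                  harmonize(record_from_repo_b, "repo_b")]

    for rec in aggregated:
        print(rec["title"], "/", rec["creator"], "/", rec["date"])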


There are current examples of how an interesting degree of internal interoperability is achieved through centralized systems. Facebook and Google are the largest examples of centralized systems that allow easy sharing of data and a very good level of inter-operation within their own services. This is due to the use of uniform environments (software and database schemas) that can easily make physically distributed information repositories interoperable, but only within the limits of that environment. What is interesting is that centralized services like Google, Facebook and other social networks are adopting interoperable technologies in order to expose part of their data to other applications, because the landscape of social platforms is itself distributed and users need easier access to information across different platforms.

Since there are social, political and practical reasons why centralization of repositories or homogenization of software and working tools will not happen, a higher degree of standardization and generalization ("abstraction") is needed to make data sets interoperable across systems.

The alternative to centralization of data or homogenization of working environments is the development of a set of standards, protocols and tools that make distributed data sets interoperable and make sharing possible among heterogeneous and uncoordinated systems ("loose coupling").
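
One widely used example of such loose coupling is the OAI-PMH protocol: any compliant repository, whatever software it runs internally, answers the same harvesting request in the same way. A minimal sketch in Python, against a hypothetical endpoint URL:

    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "http://repository.example.org/oai"        # hypothetical endpoint
    QUERY = "?verb=ListRecords&metadataPrefix=oai_dc"      # standard OAI-PMH request

    # Fetch the XML response and parse it.
    with urllib.request.urlopen(BASE_URL + QUERY) as response:
        tree = ET.parse(response)

    # Print the Dublin Core titles, independently of how the remote
    # system stores them internally.
    DC_NS = "{http://purl.org/dc/elements/1.1/}"
    for title in tree.getroot().iter(DC_NS + "title"):
        print(title.text)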

This has been addressed by the W3C with the concept of the "semantic web". The semantic web heralds the goal of global interoperability of data on the WWW. The concept was proposed more than 10 years ago, and since then the W3C has developed a range of standards to achieve this goal, specifically semantic description languages (RDF, OWL), which should get data out of isolated database silos and give structure to text that was born unstructured. Interoperability is achieved when machines understand the meaning of distributed data and are therefore able to process it in the correct way.
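
To make the RDF idea concrete, here is a minimal sketch using the Python rdflib library; the document URI, the namespace and the "ResearchReport" class are invented for illustration, not an agreed standard:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DC, RDF

    AGRI = Namespace("http://example.org/agri/")           # hypothetical vocabulary

    g = Graph()
    doc = URIRef("http://example.org/documents/123")       # hypothetical identifier
    g.add((doc, RDF.type, AGRI.ResearchReport))
    g.add((doc, DC.title, Literal("Drought-tolerant maize varieties")))
    g.add((doc, DC.creator, Literal("FAO")))

    # Any RDF-aware application can load and query this description;
    # no shared database or shared software is needed.
    print(g.serialize(format="turtle"))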

 


1 Interoperability http://en.wikipedia.org/wiki/Interoperability 

Burley Zhong Wang
School of Information Science and Technology, Sun Yat-sen University, China

 Hi John,

There is a note written by the creator of eScienceNews.com (http://drupal.org/node/261340). Our team has also made a trial based on that description. So far there is little semantic calculation involved: it uses a Naïve Bayesian algorithm to calculate the similarities among online texts aggregated from the RSS sources. This requires a training process, as Agrotagger does, I guess.
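
For readers who have not seen the training step mentioned above, here is a minimal sketch of the idea using scikit-learn (not the actual eScienceNews code); the example texts and topic labels are invented:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # A few hand-labelled items stand in for the training set.
    training_texts = [
        "new irrigation scheduling method for rice paddies",
        "maize leaf blight detected in several districts",
        "soil moisture sensors improve water management",
        "fungal infection spreading through wheat fields",
    ]
    training_labels = ["water", "pests", "water", "pests"]

    # Turn texts into word-count vectors and train the Naive Bayes model.
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(training_texts)
    model = MultinomialNB().fit(X, training_labels)

    # Score a new item aggregated from an RSS feed.
    new_item = ["farmers report insect damage on cotton crops"]
    print(model.predict(vectorizer.transform(new_item)))   # e.g. ['pests']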

I agree with the future you described that tools like OpenCalais and Textwise could bring. Johannes once introduced a website to me, which you may already know: Phase2 Technology (http://www.phase2technology.com/) has been working in this area.

Information exchange and networking is most effective if it is shaped by reciprocity: farmers' contributions might also be of importance for researchers. That implies that, independently of the technical solution finally chosen, farmers and other stakeholders should be motivated to reply, meaning that commenting on a report must not be a disincentive. Therefore it should be considered right from the beginning whether it will be possible to establish "regional data transformation centres". Certainly, such a solution might raise the costs of running the information network, but it might be worth thinking about.

Johannes Keizer
FAO of the United Nations, Italy

I don't much like the term "extension service". The language implies that something is only extended passively to another group. But this is not the case: the results and outputs of science have to be transformed into practice, into technology and business.

But without any doubt there has to be a layer between scientists and practitioners (farmers, enterprises) that mediates between groups of people who often hardly speak the same language. And this is, as IAMO pointed out, an investment, but a necessary one.

Please have another look at what Krishan wrote about the different stakeholders. He explicitly included information management specialists among the stakeholders. I agree very much with this, because they have a role in creating systems that link science information to advisory services and practice.

I think quite a good example of this is TECA (http://teca.fao.org), which is facilitated by colleagues in FAO but has stakeholders ranging from farmer organizations to scientists.

 

Chaitra Bahuman
IIT Bombay, India

We developed an information exchange connecting agri-experts in India with farmers. The interactions start as a question from a farmer, with a response from an expert and additional dialogue in some cases.

There are now about 35,000 posts on aaqua.org, about 1/3 of them coming from the farming community, 1/3 from agri-experts and 1/3 from agri-consultants/students/teachers/researchers etc.

We have tried to cluster the Q&A into topics by (a) identifying agri-topics and (b) counting the number of Q&A on each topic. This analysis is shared with agri-extension organizations, helping them prioritize farmers' needs and prepare content in those areas so that information can be provided proactively.
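
The counting in (a) and (b) can be as simple as matching each post against a list of topic keywords; a minimal sketch in Python, with invented posts and keywords:

    from collections import Counter

    TOPIC_KEYWORDS = {
        "irrigation": ["irrigation", "water", "drip"],
        "pests":      ["pest", "insect", "blight"],
        "seeds":      ["seed", "variety", "germination"],
    }

    posts = [
        "Which seed variety is best for late sowing?",
        "My tomato plants have insect damage, which pesticide should I use?",
        "How often should I run drip irrigation for onions?",
    ]

    # Count how many posts mention each topic.
    counts = Counter()
    for post in posts:
        text = post.lower()
        for topic, keywords in TOPIC_KEYWORDS.items():
            if any(keyword in text for keyword in keywords):
                counts[topic] += 1

    for topic, number_of_posts in counts.most_common():
        print(topic, number_of_posts)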

The forums are used by wealthy, educated farmers but are not limited to them. A large number of questions come from an author who is asking on behalf of a small farmer who is a friend, relative, beneficiary, etc.

Challenges include capacity building of agri-extension organizations so that they have 10 hr of Internet access, power, and at least two people who can (a) provide answers to questions coming online and (b) archive Q&A discussions happening on the phone.

aaqua.org

Krishan Bheenick
Forum for Agricultural Research in Africa, Ghana

Is interoperability the basis of 'collaboration'? Collaboration is our goal (& hopefully a common one, even though our derived benefits may differ) and interoperability is one of the means of achieving it.

I feel that the concept of 'interoperability' needs to be considered, ranging all the way  from people collaborating to systems collaborating, with concepts and information interoperability being somewhere in between.

People successfully interoperating means that there has been a mutual recognition of value of the knowledge/information, an understanding of each others' context, an agreed set of communication protocols, and an agreed vision of the process of 'interoperating'.

The same would apply when dealing with concepts, whether among concepts in people's minds (tacit) or concepts that have been described in ontologies and vocabularies (explicit).

I am trying not to use big words or acronyms that I have often seen so far, because some of these are quite new to many of us and that should not affect our 'interoperability' within this discussion.

Thus, I also feel that interoperability among systems may not happen until we have interoperability among people and concepts, followed by data, before we can put them together again as contents (or constructs) of systems.

So, as we go through this discussion, it may be useful to try to classify the technologies we are talking about according to which of the four levels (people, concepts, data, systems) they are suited to address:

When Sanjay talks of seeking support from management to invest in his institution's system's interoperability, he is looking for tools that address interoperability among people - can these be policy statements from international forums, like GCARD, or intergovernmental statements about the need for regional integration?

When Hugo, in response to Qu. 1, states that he did not see the collaboration among people to define or discover new uses of information, was he looking for tools that facilitate interoperability around concepts? When we talked about the variety of information needs that have to be satisfied under Qu. 1, were we referring to the need to improve the understanding of the concepts of the user by the information provider? Do the LOD, RDF & URIs address this need sufficiently? (I would not know.)

What about when it comes to data? Do the 'standards' or 'most popular formats' of storage and information exchange address the interoperability among databases? It seems that we have historically spent more time on this aspect (until the technology forced us to move on to new concepts?). Based on the responses, it also seems that to take us forward, the focus will have to be elsewhere, or on more than just interoperability around data exchange.

So, finally, when we talk of interoperability among systems, is it a combination of the above? Laurent seems to be saying that we need to go beyond the three levels as distinct domains and start defining new constructs that mix the terms above - which then enables the systems to share meaningful information. Does the example of an artificial intelligence behind the editing of e-sciencenews give us an example of how we may need to build these constructs in the future?

The above is just me thinking aloud... but then, what does all this discussion mean to the person sitting in an institution in the developing world, with poor connectivity and blackouts (in a financially stalled government, as Dick put it)? What picture do we paint about interoperability for the person who has to decide what the next step in their institution is? We need to be able to paint a scenario for them to illustrate that the effort they put into a system now is not going to be wasted, or that it can already, in collaboration with another (external, regional?) partner, be contributing to the global pool.

We also need to be able to paint a scenario for ourselves on this forum of how these different techniques of facilitating interoperability fit together!

Hugo Besemer
Self-employed / Wageningen UR (retired), Netherlands

Krishan wrote:

 

> When Hugo, in response to Qu. 1, states that he did not see the collaboration among people to define or discover new uses of information, was he looking for tools that facilitate interoperability around concepts?


Krishan, I am not sure what you are referring to. If you mean my example of bringing together different types of data relating to climate and agriculture: I meant to say exactly the opposite. What I hinted at is that bringing these things together is not just a technical or logical issue. There is a human side to it as well. But it needs to be done, especially for more insight into urgent problems like agriculture and climate change. Not necessarily easy, but, like - just a random example :=) - in a marriage, it is worth the effort to learn to talk to each other.

John Fereira
Cornell University, United States of America

When Krishan brought up the topic of interoperability among people, I thought it might be a good opportunity to introduce (for those that are not familiar with it) a project developed out of Cornell called VIVO (http://www.vivoweb.org). I'm hoping that my boss (the original developer and current development manager) will chime in to provide greater detail, but VIVO is an open source semantic web application originally developed and implemented at Cornell. When installed and populated with researcher interests, activities, and accomplishments, it enables the discovery of research and scholarship across disciplines at that institution and beyond.

There is currently a large NIH-funded VIVO project underway involving seven institutions to create a national network of scientists that will facilitate the discovery of researchers and collaborators across the country. Essentially, it is being implemented to facilitate interoperability among people.
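
Because VIVO exposes its data as RDF, one way such cross-institution discovery can work is by querying a VIVO SPARQL endpoint. A minimal sketch with the Python SPARQLWrapper library; the endpoint URL is a placeholder, and the exact classes and properties can differ between VIVO versions:

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://vivo.example.edu/sparql")   # hypothetical endpoint
    sparql.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?person ?name WHERE {
            ?person a foaf:Person ;
                    rdfs:label ?name .
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)

    # Print the names of the first ten people found in the VIVO instance.
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["name"]["value"])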

Although VIVO has not yet found its way into the agricultural information systems domain in any sort of production environment, there has been a great amount of interest.

For example, during a recent visit to several institutions in Costa Rica we talked about developing a community-of-experts system that might involve institutions associated with the SIDALC project in Latin America. That includes 158 institutions in 22 different countries.

There is also a project at the United States Department of Agriculture (USDA) that has committed to using VIVO to create a one-stop shop for federal agriculture expertise and research results.  Here's the official announcement:  http://www.ars.usda.gov/is/pr/2010/101005.htm

Personally, I've done a bit of work integrating VIVO with Drupal-based systems and on the creation of a "Semantic Services" project that is being used in a few Cornell departments to provide faculty information to students.

When talking about interoperability among people, I think taking a serious look at VIVO is warranted.

 

 

 

Sanjay Chandrabose Sembhoo
Agricultural Research and Extension Unit, Mauritius

It is good to see that some of us are trying to bring the human factor into interoperability. This is often overlooked because we have a tendency to concentrate on the technology aspect, and perhaps this is why at times it is difficult to get buy-in. In many cases, haven't we seen systems being set up, only to lose momentum and eventually become souvenirs?

But if I summarise everything from this thread, doesn't it all come down to people, processes and technology?

We cannot separate any of the three and still expect a system to be interoperable. Or can we?

 

Indeed, the triangle is pivotal no matter what the final system for information sharing looks like. Going through the contributions, it becomes obvious that many expect more than passive sharing of information and are thinking of active networking.

In fact there is a gradient shaping the interaction and the processes: starting from nearly solely technical exchanges with almost no personal component, and ending with multilateral face-to-face communication.

The former might be more efficient, but the latter is generally more effective, particularly if you are thinking of "new knowledge" (see Hugo's contribution), because personal communication in a group is said to lead to the greatest creativity. Furthermore, personal communication makes a network more sustainable, because closer contacts and trust are established.

On the gradient between these two poles there are other communication forms, like e-mail, Skype conversations, bilateral meetings...

In other words, establishing a network for sharing information seems to be most promising if it includes the different communication forms.

I totally agree with IAMO's statement that establishing a network for sharing information seems to be most promising if it includes the different communication forms. This could be considered an incentive that encourages participation in information sharing and could foster more win-win information sharing.