
February 27 2015

AKSW Colloquium: Tommaso Soru and Martin Brümmer on Monday, March 2 at 3.00 p.m.

On Monday, 2nd of March 2015, Tommaso Soru will present ROCKER, a refinement operator approach for key discovery. Martin Brümmer will then present NIF annotation and provenance – A comparison of approaches.

Tommaso Soru – ROCKER – Abstract

As in the typical entity-relationship model, unique and composite keys are of central importance when their concept is applied to the Linked Data paradigm. They can help in manifold areas such as entity search, question answering, data integration and link discovery. However, the current state of the art lacks approaches that scale while relying on a correct definition of a key. We thus present a refinement-operator-based approach dubbed ROCKER, which has been shown to scale to large datasets with respect to both runtime and memory consumption. ROCKER will be officially introduced at the 24th International Conference on World Wide Web.

Tommaso Soru, Edgard Marx, and Axel-Cyrille Ngonga Ngomo, “ROCKER – A Refinement Operator for Key Discovery”. [PDF]
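
To make the notion of a key concrete, the following Python sketch (illustrative toy code, not the authors' implementation) checks the defining property: a set of properties P is a key for a class if no two instances share the same values on all properties in P. ROCKER explores the space of candidate property sets with a refinement operator; the brute-force enumeration below only illustrates the underlying check, on invented data.

    from itertools import combinations

    # instance -> {property: value} for one class of a toy dataset
    DATA = {
        "person1": {"name": "Anna",  "birthDate": "1980-01-01", "city": "Leipzig"},
        "person2": {"name": "Anna",  "birthDate": "1990-05-12", "city": "Leipzig"},
        "person3": {"name": "Heinz", "birthDate": "1980-01-01", "city": "Leipzig"},
    }

    def is_key(props):
        """True iff no two instances agree on every property in props."""
        seen = set()
        for values in DATA.values():
            fingerprint = tuple(values.get(p) for p in props)
            if fingerprint in seen:
                return False
            seen.add(fingerprint)
        return True

    props = sorted({p for values in DATA.values() for p in values})
    for size in range(1, len(props) + 1):          # smallest keys first
        keys = [c for c in combinations(props, size) if is_key(c)]
        if keys:
            print("minimal keys:", keys)           # -> [('birthDate', 'name')]
            break

On real Linked Data the check would run against a SPARQL endpoint, and the naive enumeration over all property subsets is exactly what a refinement operator is designed to avoid.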

Martin Brümmer – Abstract – NIF annotation and provenance – A comparison of approaches

The increasing use of the NLP Interchange Format (NIF) reveals its shortcomings on a number of levels. One of these is tracking the metadata of annotations represented in NIF – which NLP tool added which annotation, with what confidence, at which point in time, etc.

A number of solutions to this task of annotating annotations expressed as RDF statements have been proposed over the years. The talk will weigh these solutions – namely annotation resources, reification, Open Annotation, quads and singleton properties – with regard to their granularity, ease of implementation and query complexity.

The goal of the talk is to present and compare viable alternatives for solving the problem at hand and to collect feedback on how to proceed.
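
To make the compared options tangible, here is a small Python/rdflib sketch contrasting two of the approaches named above – standard RDF reification and singleton properties – for attaching provenance to a single annotation. All resource names, the confidence property and the singleton-property predicate are invented for illustration.

    from rdflib import Graph, Literal, Namespace, RDF, XSD

    EX = Namespace("http://example.org/")
    ITSRDF = Namespace("http://www.w3.org/2005/11/its/rdf#")

    g = Graph()
    s, p, o = EX.doc_char_0_7, ITSRDF.taIdentRef, EX.Leipzig  # the annotation

    # Approach 1: RDF reification -- four extra triples describe the
    # statement; provenance then hangs off the statement resource.
    stmt = EX.stmt1
    g.add((stmt, RDF.type, RDF.Statement))
    g.add((stmt, RDF.subject, s))
    g.add((stmt, RDF.predicate, p))
    g.add((stmt, RDF.object, o))
    g.add((stmt, EX.annotatedBy, EX.SomeNerTool))
    g.add((stmt, EX.confidence, Literal(0.87, datatype=XSD.double)))

    # Approach 2: singleton property -- a unique predicate instance carries
    # the provenance while the annotation stays queryable as one triple.
    p1 = EX.taIdentRef_1
    g.add((s, p1, o))
    g.add((p1, EX.singletonPropertyOf, p))
    g.add((p1, EX.annotatedBy, EX.SomeNerTool))
    g.add((p1, EX.confidence, Literal(0.87, datatype=XSD.double)))

    print(g.serialize(format="turtle"))

The trade-off the talk addresses is already visible here: reification is verbose but uses only the standard RDF vocabulary, while singleton properties stay compact at the cost of non-standard query patterns.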

February 19 2015

AKSW Colloquium: Edgard Marx and Tommaso Soru on Monday, February 23, 3.00 p.m.

On Monday, 23rd of February 2015, Edgard Marx will introduce Smart, a search engine designed over the Semantic Search paradigm; subsequently, Tommaso Soru will present ROCKER, a refinement operator approach for key discovery.

EDIT: Tommaso Soru’s presentation was moved to March 2nd.

Abstract – Smart

Since the conception of the Web, search engines have played a key role in making content available. However, retrieving the desired information is still a significant challenge. Semantic Search systems are a natural evolution of traditional search engines: they promise a more accurate interpretation by understanding the contextual meaning of the user query. In this talk, we will introduce our audience to Smart, a search engine designed around the Semantic Search paradigm. Smart incorporates two approaches we are currently designing for dealing with the problem of Information Retrieval, as well as a novel interface paradigm. Moreover, we will present some former as well as more recent state-of-the-art approaches used in industry – for instance by Yahoo!, Google and Facebook.

Abstract – ROCKER

As in the typical entity-relationship model, unique and composite keys are of central importance when their concept is applied to the Linked Data paradigm. They can help in manifold areas such as entity search, question answering, data integration and link discovery. However, the current state of the art lacks approaches that scale while relying on a correct definition of a key. We thus present a refinement-operator-based approach dubbed ROCKER, which has been shown to scale to large datasets with respect to both runtime and memory consumption. ROCKER will be officially introduced at the 24th International Conference on World Wide Web.

Tommaso Soru, Edgard Marx, and Axel-Cyrille Ngonga Ngomo, “ROCKER – A Refinement Operator for Key Discovery”. [PDF]

February 17 2015

Call for Feedback on LIDER Roadmap

The LIDER project is gathering feedback on a roadmap for the use of Linguistic Linked Data for content analytics.  We invite you to give feedback in the following ways:

Excerpt from the roadmap

Full document: available here
Summary slides: available here

Content is growing at an impressive, exponential rate. Exabytes of new data are created every single day. In fact, data has recently been referred to as the “oil” of the new economy, where the new economy is understood as “a new way of organizing and managing economic activity based on the new opportunities that the Internet provided for businesses”.

Content analytics, i.e. the ability to process and generate insights from existing content, plays and will continue to play a crucial role for enterprises and organizations that seek to generate value from data, e.g. in order to inform decision and policy making.

As corroborated by many analysts, substantial investments in technology, partnerships and research are required to reach an ecosystem of many players and technological solutions. Such an ecosystem must provide the infrastructure, expertise and human resources needed to ensure that organizations can effectively deploy content analytics solutions at large scale, generate relevant insights that support policy and decision making, and even define completely new business models in a data-driven economy.

Assuming that such investments need to be and will be made, this roadmap explores the role that linked data and semantic technologies can and will play in the field of content analytics. It generates a set of recommendations for organizations, funders and researchers on which technologies to invest in, as a basis for prioritizing their R&D investments and optimizing their mid- and long-term strategies and roadmaps.

Conference Call on 19th of February, 3 p.m. CET

Connection details: https://www.w3.org/community/ld4lt/wiki/Main_Page#LD4LT_calls
Summary slides: available here


  1. Introduction to the LIDER Roadmap (Philipp Cimiano, 10 minutes)
  2. Discussion of Global Customer Engagement Use Cases (All, 10 minutes)
  3. Discussion of Public Sector and Civil Society Use Cases (All, 10 minutes)
  4. Discussion of Linked Data Life Cycle and Linguistic Linked Data Value Chain (All, 10 minutes)
  5. General Discussion on further use cases, items in the roadmap etc. (20 minutes)

In addition, the call will briefly discuss progress on the META-SHARE linked data metadata model.

The call is open to the public; no LD4LT group participation is required. Dial-in information is available. Please spread this information widely. No knowledge of linguistic linked data is required. We are especially interested in feedback from potential users of linguistic linked data.

About the LIDER Project

Website: http://lider-project.eu

The project’s mission is to provide the basis for the creation of a Linguistic Linked Data cloud that can support content analytics tasks on unstructured multilingual cross-media content. By achieving this goal, LIDER will impact the ease and efficiency with which Linguistic Linked Data can be exploited in content analytics processes.

We aim to provide the basis for a new Linked Open Data (LOD) based ecosystem of free, interlinked, and semantically interoperable language resources (corpora, dictionaries, lexical and syntactic metadata, etc.) and media resources (image and video metadata, etc.) that will allow the free and open exploitation of such resources in multilingual, cross-media content analytics across the EU and beyond, with specific use cases in industries related to social media, financial services, localization, and other multimedia content providers and consumers.

Take part in a personal interview to include your voice in the roadmap

Contact: http://lider-project.eu/?q=content/contact-us

The EU project LIDER has been tasked by the European Commission with putting together a roadmap for future R&D funding in multilingual industries such as content and knowledge localization, multilingual terminology and taxonomy management, cross-border business intelligence, etc. As a leading supplier of solutions in one or more of these industries, you can provide valuable input for this roadmap. We would like to conduct a short interview with you to establish your views on current and developing R&D efforts in multilingual and semantic technologies that will likely play an increasing role in these industries, such as Linked Data and related standards for web-based, multilingual data processing. The interview will cover five questions and will not take more than 30 minutes. Please let us know a suitable time and date.

February 16 2015

AKSW Colloquium: Konrad Höffner and Michael Röder on Monday, February 16, 3.00 p.m.

CubeQA—Question Answering on Statistical Linked Data by Konrad Höffner


Question answering systems provide intuitive access to data by translating natural language queries into SPARQL, which is the native query language of RDF knowledge bases. Statistical data, however, is structurally very different from other data and cannot be queried using existing approaches. Building upon a question corpus established in previous work, we created a benchmark for evaluating questions on statistical Linked Data in order to evaluate statistical question answering algorithms and to stimulate further research. Furthermore, we designed a question answering algorithm for statistical data, which covers a wide range of question types. To our knowledge, this is the first question answering approach for statistical RDF data and could open up a new research area.
See also the paper (preprint, under review) and the slides.
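
To give a feel for the target of such a translation, the sketch below shows the kind of SPARQL aggregation over RDF Data Cube observations that a question like “How much was spent in 2013?” must be mapped to. The endpoint URL and the dimension/measure IRIs are placeholders, not CubeQA's actual output.

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://example.org/sparql")  # hypothetical endpoint
    sparql.setQuery("""
        PREFIX qb: <http://purl.org/linked-data/cube#>
        PREFIX ex: <http://example.org/>
        SELECT (SUM(?amount) AS ?total) WHERE {
          ?obs a qb:Observation ;
               ex:refYear 2013 ;      # dimension value recognised in the question
               ex:amount ?amount .    # measure to aggregate
        }
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["total"]["value"])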

News from the WSDM 2015 by Michael Röder


The WSDM conference is one of the major conferences for Web Search and Data Mining. Michael Röder attended this year's WSDM conference in Shanghai and will present a short overview of the conference topics. After that, he will take a closer look at FEL – an entity linking approach for search queries presented at the conference.

About the AKSW Colloquium

This event is part of a series of events about Semantic Web technology. Please see http://wiki.aksw.org/Colloquium for further information about previous and future events. As always, Bachelor and Master students are able to get points for attendance and there is complimentary coffee and cake after the session.

Kick-off of the FREME project

Hi all!

A new InfAI project, FREME, kicked off in Berlin. FREME – Open Framework of E-Services for Multilingual and Semantic Enrichment of Digital Content – is an H2020-funded project with the objective of building an open, innovative, commercial-grade framework of e-services for multilingual and semantic enrichment of digital content.

InfAI will play an important role in FREME by driving two of the six central FREME services, e-Link and e-Entity. NIF will be used as a mediator between language services and data sources, serving as the foundation for e-Link, while DBpedia Spotlight will be a prototype for e-Entity services, linking named entities in natural language texts to Linked Open Data sets like DBpedia.

InfAI will also help to identify and publish new Linked Data sets that can contribute to data value chains. Our partners in this open content enrichment effort will be DFKI, Tilde, Iminds, Agro-Know, Wripl, VistaTEC and ISBM.

Stay tuned for more info! In the meantime, join the conversation on Twitter using #FREMEH2020.

- Amrapali Zaveri on behalf of the NLP2RDF group

February 13 2015

DL-Learner 1.0 (Supervised Structured Machine Learning Framework) Released

Dear all,

We are happy to announce DL-Learner 1.0.

DL-Learner is a framework containing algorithms for supervised machine learning in RDF and OWL. DL-Learner can use various RDF and OWL serialization formats as well as SPARQL endpoints as input, can connect to most popular OWL reasoners and is easily and flexibly configurable. It extends concepts of Inductive Logic Programming and Relational Learning to the Semantic Web in order to allow powerful data analysis.

Website: http://dl-learner.org
GitHub page: https://github.com/AKSW/DL-Learner
Download: https://github.com/AKSW/DL-Learner/releases
ChangeLog: http://dl-learner.org/development/changelog/

DL-Learner is used for data analysis in other tools such as ORE and RDFUnit. Technically, it uses refinement-operator-based, pattern-based and evolutionary techniques for learning on structured data. For a practical example, see http://dl-learner.org/community/carcinogenesis/. It also offers a plugin for Protégé that can suggest axioms to add. DL-Learner is part of the Linked Data Stack – a repository for Linked Data management tools.
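
To illustrate the refinement-operator idea for readers new to it, here is a deliberately tiny Python sketch (toy data and a greedy search; not DL-Learner's actual API or algorithm): starting from the most general class expression, candidates are specialised step by step and scored by how well they separate positive from negative examples.

    # Subclass hierarchy used for specialisation, plus precomputed instance
    # sets of each candidate expression over a toy ABox.
    SUBCLASSES = {"Thing": ["Person", "Place"], "Person": ["Parent"]}
    EXTENSION = {
        "Thing":  {"anna", "heinz", "leipzig"},
        "Person": {"anna", "heinz"},
        "Place":  {"leipzig"},
        "Parent": {"anna"},
    }
    POS, NEG = {"anna"}, {"heinz", "leipzig"}

    def refine(expr):
        """One refinement step: replace a class by its direct subclasses."""
        return SUBCLASSES.get(expr, [])

    def accuracy(expr):
        ext = EXTENSION[expr]
        return (len(POS & ext) + len(NEG - ext)) / (len(POS) + len(NEG))

    best, frontier = "Thing", ["Thing"]
    while frontier:                      # greedy search over the refinement tree
        expr = frontier.pop()
        if accuracy(expr) > accuracy(best):
            best = expr
        frontier.extend(refine(expr))

    print(best, accuracy(best))          # -> Parent 1.0

A real learner such as DL-Learner's CELOE additionally handles property restrictions, uses an OWL reasoner for instance checks and guides the search with more sophisticated heuristics.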

We want to thank everyone who helped to create this release, in particular (alphabetically) An Tran, Chris Shellenbarger, Christoph Haase, Daniel Fleischhacker, Didier Cherix, Johanna Völker, Konrad Höffner, Robert Höhndorf, Sebastian Hellmann and Simon Bin. We also acknowledge support by the recently started SAKE project, in which DL-Learner will be applied to event analysis in manufacturing use cases, as well as the GeoKnow and Big Data Europe projects where it is part of the respective platforms.

Kind regards,

Lorenz Bühmann, Jens Lehmann and Patrick Westphal

Writing a Survey – Steps, Advantages, Limitations and Examples

What is a Survey?

A survey or systematic literature review is a scholarly text that presents the current knowledge on a particular topic, including substantive findings as well as theoretical and methodological contributions. Literature reviews use secondary sources and do not report new or original experimental work [1].

A systematic review is a literature review focused on a research question; it tries to identify, appraise, select and synthesize all high-quality research evidence and arguments relevant to that question. Moreover, a literature review is comprehensive, exhaustive and repeatable; that is, readers can replicate or verify the review.

Steps to perform a survey

  • Select two independent reviewers

  • Look for related/existing surveys

    • If one exists, check how long ago it was done. If it was, for example, 10 years ago, you can go ahead and update it.

  • Formulate research questions

  • Devise eligibility criteria

  • Define search strategy – keywords, journals, conferences, workshops to search in

  • Retrieve further potential articles using the search strategy and by directly contacting top researchers in the field

  • Compare the chosen articles among reviewers and decide on a core set of papers to be included in the survey

  • Perform qualitative and quantitative analyses on the selected set of papers

  • Report on the results

Advantages of writing a survey

There are several benefits/advantages of conducting a survey, such as:

  • A survey is the best way to get an idea of the state-of-the-art technologies, algorithms, tools etc. in a particular field

  • One can get a clear bird's-eye overview of the current state of that field

  • It can serve as a great starting point for a student or any researcher thinking of venturing into that particular field/area of research

  • One can easily acquire up-to-date information on a subject by referring to a review

  • It gives researchers the opportunity to formalize different concepts of a particular field

  • It allows one to identify challenges and gaps that are unanswered and crucial for that subject

Limitations of a survey

However, there are a few limitations that must be considered before undertaking a survey such as:

  • Surveys can tend to be biased; it is therefore necessary to have two researchers who perform the systematic search for the articles independently

  • It is quite challenging to unify concepts, especially when there are different ideas referring to the same concepts developed over several years

  • Indeed, conducting a survey and getting the article published is a long process

Surveys conducted by members of the AKSW group

In our group, three students conducted comprehensive literature reviews on three different topics:

  • Linked Data Quality: The survey covers 30 core papers, which focus on providing quality assessment methodologies for Linked Data specifically. A total of 18 data quality dimensions along with their definitions and 69 metrics are provided. Additionally, the survey contributes a comparison of 12 tools, which perform quality assessment of Linked Data [2].

  • Ubiquitous Semantic Applications: The survey presents a thorough analysis of 48 primary studies out of 172 initially retrieved papers.  The results consist of a comprehensive set of quality attributes for Ubiquitous Semantic Applications together with corresponding application features suggested for their realization. The quality attributes include aspects such as mobility, usability, heterogeneity, collaboration, customizability and evolvability. The proposed quality attributes facilitate the evaluation of existing approaches and the development of novel, more effective and intuitive Ubiquitous Semantic Applications [3].

  • User interfaces for semantic authoring of textual content: The survey covers a thorough analysis of 31 primary studies out of 175 initially retrieved papers. The results consist of a comprehensive set of quality attributes for SCA systems together with corresponding user interface features suggested for their realization. The quality attributes include aspects such as usability, automation, generalizability, collaboration, customizability and evolvability. The proposed quality attributes and UI features facilitate the evaluation of existing approaches and the development of novel more effective and intuitive semantic authoring interfaces [4].

Also, here is a presentation on “Systematic Literature Reviews”: http://slidewiki.org/deck/57_systematic-literature-review.


[1] Lisa A. Baglione (2012) Writing a Research Paper in Political Science. Thousand Oaks: CQ Press.

[2] Amrapali Zaveri, Anisa Rula, Andrea Maurino, Ricardo Pietrobon, Jens Lehmann and Sören Auer (2015), ‘Quality Assessment for Linked Data: A Survey’, Semantic Web Journal. http://www.semantic-web-journal.net/content/quality-assessment-linked-data-survey

[3] Timofey Ermilov, Ali Khalili, and Sören Auer (2014). ‘Ubiquitous Semantic Applications: A Systematic Literature Review’. Int. J. Semant. Web Inf. Syst. 10, 1 (January 2014), 66-99. DOI=10.4018/ijswis.2014010103 http://dx.doi.org/10.4018/ijswis.2014010103

[4] Ali Khalili and Sören Auer (2013). ‘User interfaces for semantic authoring of textual content: A systematic literature review’, Web Semantics: Science, Services and Agents on the World Wide Web, Volume 22, October 2013, Pages 1-18 http://www.sciencedirect.com/science/article/pii/S1570826813000498

February 03 2015

Kick-Off for the BMWi project SAKE

Hi all!

One of AKSW’s Big Data projects, SAKE – Semantische Analyse Komplexer Ereignisse (Semantic Analysis of Complex Events) – kicked off in Karlsruhe. SAKE is one of the winners of the Smart Data Challenge, is funded by the German BMWi (Bundesministerium für Wirtschaft und Energie) and has a duration of 3 years. Within this project, AKSW will develop powerful methods for the analysis of industrial-scale Big Linked Data in real time. To this end, the team will extend existing frameworks like LIMES, QUETSAL and FOX. Together with USU AG, Heidelberger Druckmaschinen, Fraunhofer IAIS and AviComp Controls, novel methods for tackling Business Intelligence challenges will be devised.

More info to come soon!

Stay tuned!

Axel on behalf of the SAKE team

February 02 2015

AKSW Colloquium: Ricardo Usbeck and Ivan Ermilov on Monday, February 2, 3.00 p.m.

GERBIL – General Entity Annotation Benchmark Framework by Ricardo Usbeck


The need to bridge between the unstructured data on the document Web and the structured data on the Data Web has led to the development of a considerable number of annotation tools. Those tools are hard to compare since published results are calculated on diverse datasets and measured in different units.

We present GERBIL, a general entity annotation system based on the BAT-Framework. GERBIL offers an easy-to-use web-based platform for the agile comparison of annotators using multiple datasets and uniform measuring approaches. To add a tool to GERBIL, all the end user has to do is provide a URL to a REST interface of the tool which abides by a given specification. The integration and benchmarking of the tool against user-specified datasets is then carried out automatically by the GERBIL platform. Currently, our platform provides results for 9 annotators and 11 datasets, with more coming. Internally, GERBIL is based on the NLP Interchange Format (NIF) and provides Java classes for implementing NIF-based APIs for datasets and annotators. For the paper see here.
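
For illustration, the exchange with such a NIF-based REST interface could look roughly like the Python sketch below. The endpoint URL is a placeholder and the NIF document is a minimal hand-written example; the exact contract is defined by the GERBIL specification.

    import requests

    # Minimal NIF context in Turtle; a real document also types the indices.
    NIF_DOC = """
    @prefix nif: <http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#> .
    <http://example.org/doc#char=0,16>
        a nif:Context, nif:String ;
        nif:isString "Leipzig is nice." ;
        nif:beginIndex "0" ;
        nif:endIndex "16" .
    """

    resp = requests.post(
        "http://example.org/my-annotator/nif",   # hypothetical annotator endpoint
        data=NIF_DOC.encode("utf-8"),
        headers={"Content-Type": "application/x-turtle"},
    )
    print(resp.text)  # expected: the same NIF document enriched with entity links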

Towards Efficient and Effective Semantic Table Interpretation by Ziqi Zhang presented by Ivan Ermilov


Ivan will present a paper by Ziqi Zhang that describes TableMiner, the first semantic Table Interpretation method that adopts an incremental, mutually recursive and bootstrapping learning approach seeded by automatically selected ‘partial’ data from a table. TableMiner labels columns containing named entity mentions with the semantic concepts that best describe the data in those columns, and disambiguates entity content cells in these columns. TableMiner is able to use various types of contextual information outside tables for Table Interpretation, including semantic markup (e.g., RDFa/microdata annotations) that, to the best of our knowledge, has never been used in Natural Language Processing tasks. Evaluation on two datasets shows that, compared to two baselines, TableMiner consistently obtains the best performance. In the classification task, it achieves significant improvements of between 0.08 and 0.38 F1 depending on the baseline method; in the disambiguation task, it outperforms both baselines by between 0.19 and 0.37 in precision on one dataset, and by between 0.02 and 0.03 F1 on the other. Observation also shows that the bootstrapping learning approach adopted by TableMiner can potentially deliver computational savings of between 24 and 60% compared to classic methods that ‘exhaustively’ process the entire table content to build features for interpretation.

About the AKSW Colloquium

This event is part of a series of events about Semantic Web technology. Please see http://wiki.aksw.org/Colloquium for further information about previous and future events. As always, Bachelor and Master students are able to get points for attendance and there is complimentary coffee and cake after the session.

January 28 2015

New Network Project: FOKUS:SE – The Research Network for Service Engineering

January 20 2015

Two AKSW Papers at #WWW2015 in Florence, Italy

Hello Community!
We are very pleased to announce that two of our papers were accepted for presentation at WWW 2015.  The papers cover novel approaches for Key Discovery while Linking Ontologies and a benchmark framework for entity annotation systems. In more detail, we will present the following papers:
Visit us from the 18th to the 22nd of May in Florence, Italy and enjoy the talks. More information on these publications can be found at http://aksw.org/Publications.
Ricardo on behalf of AKSW

January 12 2015

International Symposium on Service Science ISSS 2015

For the sixth time in a row, the “International Symposium on Service Science” (ISSS) offers a unique platform for exchange on the progress of research in the field of Service Science and the application of its results in practice. Researchers and practitioners alike are invited to take part in the event and to jointly sharpen the view on the wide-ranging aspects of service research. The ISSS will take place as a one-day event within the scientific program of the “Leipziger Tage der Angewandten Informatik” on May 5, 2015 in Leipzig.

The symposium is organized by the Institute of Computer Science at Leipzig University and the Institute for Applied Informatics (InfAI) e.V.

All information about the event is available at http://isss.uni-leipzig.de

November 24 2014

Highlights of the 1st Meetup on Question Answering Systems – Leipzig, November 21st

On November 21st, the AKSW group hosted the 1st meetup on “Question Answering” (QA) systems. In this meeting, researchers from AKSW/University of Leipzig, CITEC/University of Bielefeld, Fraunhofer IAIS/University of Bonn, DERI/National University of Ireland and the University of Passau presented the recent results of their work on QA systems. The following themes were discussed during the meeting:

  • Ontology-driven QA on the Semantic Web. Christina Unger presented the Pythia system for ontology-based QA. Slides are available here.
  • Distributed Semantic Models for achieving scalability & consistency in QA. André Freitas presented TREO and EasyESA, which employ a vector-based approach for semantic approximation.
  • Template-based QA. Jens Lehmann presented TBSL for Template-based Question Answering over RDF Data.
  • Keyword-based QA. Saeedeh Shekarpour presented the SINA approach for the semantic interpretation of user queries for QA on interlinked data.
  • Hybrid QA over Linked Data. Ricardo Usbeck presented HAWK for hybrid question answering using Linked Data and full-text indexes.
  • Semantic Parsing with Combinatory Categorial Grammars (CCG), presented by Sherzod Hakimov. Slides are available here.
  • QA on statistical Linked Data. Konrad Höffner presented LinkedSpending and the RDF Data Cube vocabulary for applying QA to statistical Linked Data.
  • WDAqua (Web Data and Question Answering) project. Christoph Lange presented the WDAqua project, which is part of the EU’s Marie Skłodowska-Curie Action Innovative Training Networks. WDAqua focuses on different aspects of the question “how can we answer complex questions with web data?”
  • OKBQA (Open Knowledge Base & Question-Answering). Axel-C. Ngonga Ngomo presented OKBQA, which aims to bring together cutting-edge experts in knowledge base construction and application in order to create an extensive architecture for QA systems that has no restriction on programming languages.
  • Open QA. Edgard Marx presented an open-source question answering framework that unifies QA approaches from several domain experts.

The meetup decided to meet biannually to fuse efforts. All agreed upon investigating existing architectures for question answering systems in order to offer a promising, collaborative architecture for future endeavours. Join us next time! For more information contact Ricardo Usbeck.

Ali and Ricardo on behalf of the QA meetup

November 20 2014

Announcing GERBIL: General Entity Annotator Benchmark Framework

Dear all,

We are happy to announce GERBIL – a General Entity Annotation Benchmark Framework – a demo of which is already available online! With GERBIL, we aim to establish a highly available, easily quotable and reliable focal point for Named Entity Recognition and Named Entity Disambiguation (Entity Linking) evaluations:

  • GERBIL provides persistent URLs for experimental settings. By these means, GERBIL also addresses the problem of archiving experimental results.
  • The results of GERBIL are published in a human-readable as well as a machine-readable format. By these means, we also tackle the problem of reproducibility.
  • GERBIL provides 11 different datasets and 9 different entity annotators. Please talk to us if you want to add yours.

To ensure that the GERBIL framework is useful to both end users and tool developers, its architecture and interface were designed with the following principles in mind:

  • Easy integration of annotators: We provide a web-based interface that allows annotators to be evaluated via their NIF-based REST interface. We provide a small NIF library for an easy implementation of the interface.
  • Easy integration of datasets: We also provide means to gather datasets for evaluation directly from data services such as DataHub.
  • Extensibility: GERBIL is provided as an open-source platform that can be extended by members of the community both to new tasks and different purposes.
  • Diagnostics: The interface of the tool was designed to provide developers with means to easily detect aspects in which their tool(s) need(s) to be improved.
  • Portability of results: We generate human- and machine-readable results to ensure maximum usefulness and portability of the results generated by our framework.

We are looking for your feedback!

Best regards,

Ricardo Usbeck for The GERBIL Team

November 17 2014

@BioASQ challenge gaining momentum

BioASQ is a series of challenges aiming to bring us closer to the vision of machines that can answer the questions of biomedical professionals and researchers. The second BioASQ challenge started in February 2014. It comprised two different tasks: large-scale biomedical semantic indexing (Task 2a) and biomedical semantic question answering (Task 2b).

In total, 216 users and 142 systems registered with the automated evaluation system of BioASQ in order to participate in the challenge; 28 teams (with 95 systems) finally submitted their suggested solutions and answers. The final results were presented at the BioASQ workshop at the Cross Language Evaluation Forum (CLEF), which took place between September 23 and 26 in Sheffield, U.K.

The Awards Went To The Following Teams

Task 2a (Large-scale biomedical semantic indexing):

  • Fudan University (China)
  • NCBI (USA)
  • Aristotle University of Thessaloniki (Greece) and atypon.com (USA)

Task 2b (Biomedical semantic question answering):

  • Fudan University (China)
  • NCBI (USA)
  • University of Alberta (Canada)
  • Seoul National University (South Korea)
  • Toyota Technological Institute (Japan)
  • Aristotle University of Thessaloniki (Greece) and atypon.com (USA)

Best Overall Contribution:

  • NCBI (USA)

The second BioASQ challenge continued the impressive achievements of the first one, pushing the research frontiers in biomedical indexing and question answering. The systems that participated in both tasks of the challenge achieved a notable increase in accuracy over the first year. Among the highlights is the fact that the best systems in Task 2a again outperformed the very strong baseline MTI system provided by NLM, despite the fact that the MTI system itself had been improved by incorporating ideas proposed by last year's winning systems. The end of the second challenge also marks the end of the European Commission's financial support for BioASQ. We would like to take this opportunity to thank the EC for supporting our vision. The main project results (incl. frameworks, datasets and publications) can be found on the project showcase page at http://bioasq.org/project/showcase.

Nevertheless, the BioASQ challenge will continue with its third round, BioASQ3, which will start in February 2015. Stay tuned!

About BioASQ

The BioASQ team combines researchers with complementary expertise from 6 organisations in 3 countries: the Greek National Center for Scientific Research “Demokritos” (coordinator), participating with its Institutes of ‘Informatics & Telecommunications’ and ‘Biosciences & Applications’; the German IT company Transinsight GmbH; the French University Joseph Fourier; the German research group Agile Knowledge Engineering and Semantic Web at the University of Leipzig; the French University Pierre et Marie Curie – Paris 6; and the Department of Informatics of the Athens University of Economics and Business in Greece (visit the BioASQ project partners page). Moreover, biomedical experts from several countries assist in the creation of the evaluation data, and a number of key players from industry and academia around the world participate in the advisory board of the project.
BioASQ started in October 2012 and was funded for two years by the European Commission as a support action (FP7/2007-2013: Intelligent Information Management, Targeted Competition Framework; grant agreement n° 318652). More information can be found at: http://www.bioasq.org.
Project Coordinator: George Paliouras (paliourg@iit.demokritos.gr).

October 28 2014

AKSW successful at #ISWC2014

Dear followers, 9 members of AKSW participated in the 13th International Semantic Web Conference (ISWC) at Riva del Garda, Italy. Besides listening to interesting talks, giving presentations and discussing with fellow Semantic Web researchers, AKSW won 4 significant prizes, including a Best Paper Award:

We work on many more projects, which you can find at http://aksw.org/projects/. Cheers, Ricardo on behalf of the AKSW group

October 23 2014

AKSW internal group meeting @ Dessau

Recently, AKSW members were at the city of Dessau for an internal group meeting.

The meeting took place between the 8th and 10th of October at the Bauhaus, the famous school of architecture and design, where most team members also stayed overnight. The Bauhaus is located in the city of Dessau, about one hour from Leipzig. It operated from 1919 to 1933 and was famous for its approach to design, which combined crafts and the fine arts. At that time, the German term Bauhaus – literally “house of construction” – was understood as meaning “School of Building”. It was a perfect getaway and an awesome location for AKSWers to meet and “build” together the future steps of the group.

Wednesday was spent mostly in smaller group discussions on various ongoing projects. Over the next two days the AKSW PhD students presented the achievements, current status and future plans of their PhD projects. During the meeting, we had the pleasure of receiving valuable feedback from AKSW leaders and project managers such as Prof. Sören Auer, Dr. Jens Lehmann, Prof. Thomas Riechert and Dr. Michael Martin. The heads of AKSW gave their input and suggestions to the students in order to help them improve, continue and/or complete their PhDs. In addition, the current projects were discussed so as to find possible synergies between them and to discuss further improvements and ideas.

However, we did find some time to enjoy the beautiful city of Dessau as well and learned a little bit more about the history of this wonderful city.

Overall, it was a productive and recreational trip, not only keeping track of each student's progress but also helping them to improve their work. We are all thankful to Prof. Riechert and Dr. Lehmann, who were responsible for organizing this amazing meeting.


October 16 2014

AKSW at #ISWC2014. Come and join, talk and discuss with us!

Hello AKSW Follower!
We are very pleased to announce that nine of our papers were accepted for presentation at ISWC 2014.
In the main track of the conference we will present the following papers:

This year, the Replication, Benchmark, Data and Software Track started and we got accepted twice!

Additionally, four  of our papers will be presented within different workshops:

You can also find us at the posters and demo session, where we are going to present:

  • AGDISTIS – Multilingual Disambiguation of Named Entities Using Linked Data, Ricardo Usbeck, Axel-Cyrille Ngonga Ngomo, Wencan Luo and Lars Wesemann
  • Named Entity Recognition using FOX, René Speck and Axel-Cyrille Ngonga Ngomo
  • AMSL – Creating a Linked Data Infrastructure for Managing Electronic Resources in Libraries, Natanael Arndt, Sebastian Nuck, Andreas Nareike, Norman Radtke, Leander Seige and Thomas Riechert.
  • Xodx – A node for the Distributed Semantic Social Network, Natanael Arndt and Sebastian Tramp.

We are especially looking forward to seeing you at the full-day tutorial:

Come to ISWC at Riva del Garda, talk to us and enjoy the talks. More information on various publications can be found at http://aksw.org/Publications.
Ricardo on behalf of AKSW

October 06 2014

LIMES Version 0.6 RC4

It has been a while, but that moment has arrived again. We are happy to announce a new release of the LIMES framework. This version implements novel geo-spatial measures (e.g., geographic mean) as well as new string similarity measures (Jaro, Jaro-Winkler, etc.). Moreover, we fixed some minor bugs (thanks for the bug reports!). The final release (i.e., version 0.7) will soon be available, so stay tuned!
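
As a refresher on one of the newly added measures, here is a compact Python reference implementation of Jaro-Winkler (illustrative only; LIMES itself is implemented in Java):

    def jaro(s1: str, s2: str) -> float:
        if s1 == s2:
            return 1.0
        window = max(len(s1), len(s2)) // 2 - 1
        m1, m2 = [False] * len(s1), [False] * len(s2)
        matches = 0
        for i, c in enumerate(s1):                 # count matching characters
            for j in range(max(0, i - window), min(len(s2), i + window + 1)):
                if not m2[j] and s2[j] == c:
                    m1[i] = m2[j] = True
                    matches += 1
                    break
        if matches == 0:
            return 0.0
        t, k = 0, 0                                # count transpositions
        for i in range(len(s1)):
            if m1[i]:
                while not m2[k]:
                    k += 1
                if s1[i] != s2[k]:
                    t += 1
                k += 1
        t //= 2
        return (matches / len(s1) + matches / len(s2)
                + (matches - t) / matches) / 3

    def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
        j = jaro(s1, s2)
        prefix = 0                                 # common prefix, capped at 4
        for a, b in zip(s1[:4], s2[:4]):
            if a != b:
                break
            prefix += 1
        return j + prefix * p * (1 - j)

    print(jaro_winkler("MARTHA", "MARHTA"))        # ~0.961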

Link on,

September 17 2014

Study of Interoperability between Meta-Modeling Tools

Last week I presented a study about the interoperability between meta-modeling tools at the MDASD workshop. The study focuses on two aspects: (1) the degree of interoperability and (2) approaches for realizing interoperability between meta-modeling tools. Interoperability has a broad meaning; in this study I focus on the ability to exchange models and meta-models, where exchange has the character of a migration of models and meta-models between tools.

The study includes 20 meta-modeling tools. Overall, I tested over 60 tools and looked at their meta-modeling capabilities; I found 20 tools with differing capabilities. It was very interesting for me to see the different meta-modeling solutions. Some tools do not support meta-modeling in a very strict sense, but in the end it is possible to design modeling languages with them. Not every tool supports as many concepts in its meta-modeling language as MetaEdit+ or GME, but what they offer is sufficient. In my study I investigated the unification approach (common structure or transformation-based) and the level of exchange (model and meta-model); other dimensions such as integration topology and integration layer were fixed.

The results of this study confirm my impression of the exchange possibilities between tools. The degree of interoperability is between 1 and 8 percent. This is very low and shows the lack of interoperability between modeling tools, particularly meta-modeling tools. If you are interested in the results, you can view my presentation on SlideShare. Once the paper has passed the publishing process, you can read it online.

After finishing my presentation, the audience had two interesting questions. The first concerned Microsoft Visio: why is Visio included in this study, given that Visio is just a drawing tool? This question has been asked many times before, so I want to address it here. Yes, you can use Visio to draw rectangles, circles and lines, but you can also use Visio as a serious modeling tool. Many people do not know Visio's modeling capabilities because they use it in a superficial way and merely draw rectangles and lines. However, Visio allows the definition of stencils, which contain sets of modeling elements that in the end define part of a modeling language. And a lot of modeling languages are available in Visio, such as UML, BPMN, etc. The completeness and quality of these modeling languages is anyone's guess, but it is possible. That is why Visio is included in my study as a meta-modeling tool.

The second question concerned the degree of interoperability: it is very low, so what are the reasons for this lack? In my view, two reasons are responsible: (1) a strategic one and (2) a technological one. Interfaces are often a strategic matter. On the one hand, tool vendors want to limit export capabilities to bind customers to their tool (vendor lock-in). On the other hand, vendors want to offer a variety of import possibilities to allow the migration of models from an old tool to theirs. The study shows, however, that the number of import possibilities is not much higher than the number of export possibilities. This brings me to the conclusion that the missing interoperability is not only a strategic issue, and that some technological issues may also be responsible for this result.
