
          Extracting Structured Data from the Common Crawl

          The Web Data Commons project extracts structured data from the Common Crawl, the largest web corpus available to the public, and provides the extracted data for public download in order to support researchers and companies in exploiting the wealth of information that is available on the Web.


          RDFa, Microdata, and Microformats

          More and more websites have started to embed structured data describing products, people, organizations, places, and events into their HTML pages using markup standards such as RDFa, Microdata, and Microformats. The Web Data Commons project extracts this data from several billion web pages. So far, the project has published six data set releases, extracted from the 2016, 2015, 2014, 2013, 2012, and 2010 versions of the Common Crawl. The project provides the extracted data for download and publishes statistics about the deployment of the different formats.
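
          As a rough illustration of what consuming such markup involves, the sketch below collects Microdata `itemprop` name/value pairs using only Python's standard-library HTML parser. The example markup and the parser are hypothetical simplifications for illustration, not the WDC extraction code.

```python
from html.parser import HTMLParser

class MicrodataParser(HTMLParser):
    """Minimal sketch: collect itemprop name/value pairs from Microdata markup."""

    def __init__(self):
        super().__init__()
        self._prop = None  # itemprop currently awaiting a text value
        self.items = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            self._prop = attrs["itemprop"]
            # Some properties (e.g. on <meta>) carry their value in an
            # attribute rather than in text content.
            if "content" in attrs:
                self.items[self._prop] = attrs["content"]
                self._prop = None

    def handle_data(self, data):
        if self._prop and data.strip():
            self.items[self._prop] = data.strip()
            self._prop = None

# Hypothetical product snippet using schema.org Microdata.
snippet = (
    '<div itemscope itemtype="http://schema.org/Product">'
    '  <span itemprop="name">ACME Phone</span>'
    '  <meta itemprop="price" content="99.00">'
    '</div>'
)
parser = MicrodataParser()
parser.feed(snippet)
print(parser.items)  # -> {'name': 'ACME Phone', 'price': '99.00'}
```

          A production extractor would additionally track `itemscope` nesting, `itemtype`, and the RDFa and Microformats syntaxes, which this sketch deliberately omits.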

          Web Tables

          The Web contains vast amounts of HTML tables. Most of these tables are used for layout purposes, but a fraction are quasi-relational, meaning that they contain structured data describing a set of entities. Such tables are useful in application contexts such as data search, table augmentation, knowledge base construction, and various NLP tasks. The WDC Web Tables data set consists of the 147 million relational Web tables contained in the overall set of 11 billion HTML tables found in the Common Crawl.
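
          To make the layout-versus-relational distinction concrete, here is a toy heuristic in Python. It is an illustrative assumption, not the classifier the WDC project actually uses: it merely treats a table as quasi-relational if it is rectangular, wide and tall enough, and its first row looks like a textual header.

```python
def looks_relational(rows):
    """Toy heuristic (not the WDC classifier): accept a table only if it
    is rectangular, has at least 2 columns and 2 data rows, and its
    first row contains non-empty, digit-free cells (a likely header)."""
    if len(rows) < 3:
        return False
    width = len(rows[0])
    if width < 2 or any(len(r) != width for r in rows):
        return False
    header = rows[0]
    return all(cell and not any(ch.isdigit() for ch in cell) for cell in header)

# A layout table (navigation skeleton) vs. a table describing entities.
layout = [["", ""], ["nav", "content"]]
entities = [["Country", "Capital"], ["France", "Paris"], ["Japan", "Tokyo"]]
print(looks_relational(layout))    # -> False
print(looks_relational(entities))  # -> True
```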

          Hyperlink Graph

          We offer a large hyperlink graph that we extracted from the 2012 version of the Common Crawl. The WDC Hyperlink Graph covers 3.5 billion web pages and 128 billion hyperlinks between these pages. The graph can help researchers to improve search algorithms, develop spam detection methods, and evaluate graph analysis algorithms. To the best of our knowledge, the graph is the largest hyperlink graph that is available to the public.
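
          The shape of such a graph is easy to sketch: pages are nodes and hyperlinks are directed edges. The miniature edge list below is hypothetical (the real graph ships as node and arc files covering billions of pages), but it shows the kind of adjacency and degree structures that ranking and spam-detection methods build on.

```python
from collections import defaultdict

# Hypothetical miniature edge list (source page -> target page).
edges = [
    ("a.example", "b.example"),
    ("a.example", "c.example"),
    ("b.example", "c.example"),
]

out_links = defaultdict(set)   # adjacency: page -> set of linked pages
in_degree = defaultdict(int)   # page -> number of distinct incoming links
for src, dst in edges:
    if dst not in out_links[src]:
        out_links[src].add(dst)
        in_degree[dst] += 1

# In-degree is a crude popularity signal; link-based ranking algorithms
# such as PageRank refine exactly this kind of structure.
print(max(in_degree, key=in_degree.get))  # -> c.example (in-degree 2)
```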

          WebIsA Database

          We offer a large IsA database that we extracted from the 2015 version of the Common Crawl. The WDC IsA Database contains more than 400 million hypernymy relations we extracted from the text of HTML pages included in the crawl. This collection of relations represents a rich source of knowledge and can be used to improve approaches in various application domains. We offer the tuple dataset for public download and an application programming interface to help other researchers programmatically query the database. In addition, a demo web application of the database is available.
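
          Hypernymy relations of this kind are classically harvested with lexico-syntactic (Hearst) patterns. The sketch below implements just one such pattern ("X such as Y, Z") with a standard-library regex; the actual WDC extractor uses a much larger pattern set and further processing, so treat this as a simplified illustration.

```python
import re

# One classic Hearst pattern: "<hypernym> such as <hyponym>, <hyponym>".
PATTERN = re.compile(r"(\w+) such as ((?:\w+(?:, )?)+)")

def extract_isa(sentence):
    """Return (hyponym, hypernym) pairs found via the pattern above."""
    pairs = []
    for m in PATTERN.finditer(sentence):
        hypernym = m.group(1)
        for hyponym in m.group(2).split(", "):
            pairs.append((hyponym, hypernym))
    return pairs

print(extract_isa("We visited cities such as Berlin, Mannheim"))
# -> [('Berlin', 'cities'), ('Mannheim', 'cities')]
```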

          Product Data Corpora

          We offer two product data corpora containing offers from multiple e-shops. The first corpus consists of 5.6 million product offers from the categories mobile phones, headphones, and televisions, and was crawled from 32 popular shopping websites. The corpus is accompanied by a manually verified gold standard for the evaluation and comparison of product feature extraction and product matching methods. The second corpus consists of more than 26 million product offers originating from 79 thousand websites. Using product identifiers such as GTINs or MPNs, the offers are grouped into 16 million clusters that each refer to the same product.
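
          The identifier-based grouping can be sketched in a few lines: offers carrying the same GTIN are placed in the same cluster. The offers below are invented for illustration; the real corpus derives identifiers from the shops' embedded markup and handles missing and conflicting identifiers, which this sketch ignores.

```python
from collections import defaultdict

# Hypothetical offers from different shops; "gtin" stands in for the
# product identifiers (GTINs, MPNs) used to group real offers.
offers = [
    {"shop": "shopA.example", "title": "Phone X 64GB",  "gtin": "0001"},
    {"shop": "shopB.example", "title": "PhoneX (64 GB)", "gtin": "0001"},
    {"shop": "shopC.example", "title": "Headphones Y",   "gtin": "0002"},
]

# Cluster offers that share a product identifier.
clusters = defaultdict(list)
for offer in offers:
    clusters[offer["gtin"]].append(offer["shop"])

print(len(clusters))     # -> 2 distinct products
print(clusters["0001"])  # -> ['shopA.example', 'shopB.example']
```

          Clusters built this way give product matching research cheap, large-scale ground truth: two offers in the same cluster are positive pairs, offers from different clusters are negatives.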


          Extraction Framework

          The effective processing of large web corpora presents challenges in terms of resources, time, and cost. To extract the data sets presented above, the Web Data Commons project has developed a framework that provides an easy-to-use basis for the distributed processing of large web crawls using Amazon EC2 cloud services. The framework is published under the terms of the Apache license and can easily be customized to perform other data extraction tasks.
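
          The general pattern such a framework builds on can be sketched locally: split the crawl into independent file chunks and run the same extractor over each chunk in parallel. The file names and the extractor below are placeholders, and the thread pool stands in for what the framework actually distributes across a fleet of EC2 worker instances.

```python
from concurrent.futures import ThreadPoolExecutor

def extract(warc_path):
    """Placeholder extractor: a real worker would stream the WARC
    records in this segment and run the markup parsers on each page."""
    return (warc_path, "extracted-statistics")

# Hypothetical list of crawl segments; each segment can be processed
# independently, which is what makes the workload easy to distribute.
paths = [f"crawl/segment-{i}.warc.gz" for i in range(4)]

with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(extract, paths))

print(len(results))  # -> 4, one result per segment
```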

          License

          The Web Data Commons extraction framework can be used under the terms of the Apache Software License.

          Feedback

          Please send questions and feedback to the Web Data Commons mailing list or post them in our Web Data Commons Google Group.

          About the Web Data Commons Project

          The Web Data Commons project was started by researchers from Freie Universität Berlin and the Karlsruhe Institute of Technology (KIT) in 2012. The goal of the project is to facilitate research and support companies in exploiting the wealth of information on the Web by extracting structured data from web crawls and providing this data for public download. Today, the WDC project is mainly maintained by the Data and Web Science Research Group at the University of Mannheim. The project is coordinated by Christian Bizer, who has moved from Berlin to Mannheim.

          Credits

          Web Data Commons is supported by the EU FP7 projects PlanetData and LOD2, by an Amazon Web Services in Education Grant Award, by the German Research Foundation (DFG), and by the ViCe research project of the Ministry of Economy, Research and Arts of Baden-Württemberg.
