ExtractNet


Based on the popular content extraction package Dragnet, ExtractNet extends the machine learning approach to extract additional attributes such as date, author, and keywords from news articles.


Simply use the following command to install the latest released version:

pip install extractnet

To extract the content and other metadata, pass the raw HTML to the extract function:

import requests
from extractnet import Extractor

raw_html = requests.get('https://currentsapi.services/en/blog/2019/03/27/python-microframework-benchmark/.html').text
results = Extractor().extract(raw_html)

Why not just use existing rule-based extraction methods?

We discovered that some webpages do not provide the real author name and simply populate the author tag with a default value.

For example, ltn.com.tw and udn.com populate the same author value for every news article, while the real author can only be found within the content.

Our machine-learning-first approach extracts the correct fields from the visible sections of a webpage, just as a human reader would.

ExtractNet pipeline

What ExtractNet is and isn't

  • ExtractNet is a platform for extracting any attribute of interest from any webpage, not just article content.

  • The core of ExtractNet aims to convert unstructured webpages into structured data without relying on hand-crafted rules.

  • ExtractNet does not support boilerplate content extraction.

  • ExtractNet allows users to add custom pipeline steps that return additional data through a list of callback functions.


Performance

Results of the body extraction evaluation:

We use the body extraction benchmark from article-extraction-benchmark.

| Model | Precision | Recall | F1 | Accuracy | Open Source |
|---|---|---|---|---|---|
| AutoExtract | 0.984 ± 0.003 | 0.956 ± 0.010 | 0.970 ± 0.005 | 0.470 ± 0.037 | |
| Diffbot | 0.958 ± 0.009 | 0.944 ± 0.013 | 0.951 ± 0.010 | 0.348 ± 0.035 | |
| ExtractNet | 0.922 ± 0.011 | 0.933 ± 0.013 | 0.927 ± 0.010 | 0.160 ± 0.027 | ✔ |
| boilerpipe | 0.850 ± 0.016 | 0.870 ± 0.020 | 0.860 ± 0.016 | 0.006 ± 0.006 | ✔ |
| dragnet | 0.925 ± 0.012 | 0.889 ± 0.018 | 0.907 ± 0.014 | 0.221 ± 0.030 | ✔ |
| html-text | 0.500 ± 0.017 | 0.994 ± 0.001 | 0.665 ± 0.015 | 0.000 ± 0.000 | ✔ |
| newspaper | 0.917 ± 0.013 | 0.906 ± 0.017 | 0.912 ± 0.014 | 0.260 ± 0.032 | ✔ |
| readability | 0.913 ± 0.014 | 0.931 ± 0.015 | 0.922 ± 0.013 | 0.315 ± 0.034 | ✔ |
| trafilatura | 0.930 ± 0.010 | 0.967 ± 0.009 | 0.948 ± 0.008 | 0.243 ± 0.031 | ✔ |

Results of author name extraction:

| Model | F1 |
|---|---|
| ExtractNet: fastText embeddings + CRF | 0.904 ± 0.10 |

List of changes from Dragnet

  • The underlying classifier for all attribute extraction is replaced with CatBoost instead of a decision tree, for consistency and a performance boost.

  • Updated CSS features and added a text+CSS latent feature.

  • Includes a CRF model that extracts names from author block text.

  • Trained on 22,000+ webpages collected in late 2020, 20 times the size of the Dragnet dataset.

GETTING STARTED

Installing and extraction

pip install extractnet

import requests
from extractnet import Extractor

raw_html = requests.get('https://apnews.com/article/6e58b5742b36e3de53298cf73fbfdf48').text
results = Extractor().extract(raw_html)
for key, value in results.items():
    print(key)
    print(value)
    print('------------')
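
The logging section below imports BlockifyError from extractnet.blocks, which suggests extraction can fail on pages it cannot process. The following is a rough sketch of a defensive loop over several pages; the exact conditions under which BlockifyError is raised are an assumption based on its name, not something this README documents.

import logging

import requests
from extractnet import Extractor
from extractnet.blocks import BlockifyError  # also used in the logging section below

extractor = Extractor()
urls = [
    'https://apnews.com/article/6e58b5742b36e3de53298cf73fbfdf48',
]

for url in urls:
    raw_html = requests.get(url, timeout=10).text
    try:
        results = extractor.extract(raw_html)
    except BlockifyError:
        # Assumption: raised when the page cannot be split into text blocks.
        logging.warning('could not extract %s', url)
        continue
    print(url, list(results.keys()))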

Callbacks

ExtractNet also supports adding callback functions to inject additional fields during the extraction process.

A quick glance at the usage: each callback has access to the raw HTML string provided during extraction, which allows users to attach additional information, such as the detected language, to the final results.

import re

def meta_pre1(raw_html):
    # Meta callbacks receive the raw HTML and return a dict of extra fields.
    return {'first_value': 0}

def meta_pre2(raw_html):
    return {'first_value': 1, 'second_value': 2}

def find_stock_ticker(raw_html, results):
    # Post-processing callbacks also receive the extraction results.
    matched_tickers = re.findall(r'[$][A-Za-z][\S]*', str(results['content']))
    return {'matched_ticker': matched_tickers}

extract = Extractor(author_prob_threshold=0.1,
                    meta_postprocess=[meta_pre1, meta_pre2],
                    postprocess=[find_stock_ticker])

The extracted results will then contain additional keys such as first_value and second_value. Note that callbacks are executed in the given order (meta_pre1 runs first, followed by meta_pre2), and any result produced by an earlier stage will not be overwritten by a later stage.


raw_html = requests.get('https://apnews.com/article/6e58b5742b36e3de53298cf73fbfdf48').text
results = extract(raw_html)

In this example the value of first_value remains 0, even though meta_pre2 also returns first_value=1, because meta_pre1 has already assigned first_value to 0.
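
The quick glance above mentions language detection as an example of extra information a callback can attach. Below is a minimal, hypothetical sketch of such a post-processing callback; it assumes the third-party langdetect package is installed (it is not a dependency of extractnet) and reuses find_stock_ticker and raw_html from the example above.

import re

from langdetect import detect  # third-party package, assumed installed separately

def detect_language(raw_html, results):
    # Post-processing callbacks receive the raw HTML and the extraction results;
    # here we guess the language of the extracted article body.
    text = re.sub(r'\s+', ' ', str(results.get('content', ''))).strip()
    return {'language': detect(text) if text else None}

extract = Extractor(postprocess=[find_stock_ticker, detect_language])
results = extract(raw_html)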

Contributing

We love contributions! Open an issue, or fork the repository and create a pull request.

Develop Locally

Since extractnet relies on several C++ modules, you need to compile them before running locally.

Usually, the following command is all you need:

make

If that does not work, you can try building the modules yourself.

Suppress logging errors

Setting the logging level to CRITICAL will suppress any logging output:

import logging

from extractnet import Extractor
from extractnet.blocks import BlockifyError

# Silence all extractnet log messages below CRITICAL.
logging.getLogger('extractnet').setLevel(logging.CRITICAL)

extractor = Extractor()

More details about the code structure

Coming soon

Reference

Content extraction using diverse feature sets

[1] Peters, Matthew E. and D. Lecocq, Content extraction using diverse feature sets

@inproceedings{Peters2013ContentEU,
  title={Content extraction using diverse feature sets},
  author={Matthew E. Peters and D. Lecocq},
  booktitle={WWW '13 Companion},
  year={2013}
}

Bag of Tricks for Efficient Text Classification

[2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, Bag of Tricks for Efficient Text Classification

@article{joulin2016bag,
  title={Bag of Tricks for Efficient Text Classification},
  author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
  journal={arXiv preprint arXiv:1607.01759},
  year={2016}
}
