Tiny library for extracting articles from HTML.
It can extract an article's content, both as text and HTML markup, as well as its title.
This library is designed to be as simple as possible.
To start using it, just import it and instantiate it with the link you want to parse as a parameter.
The library is also designed to work lazily: until you request some property, it does not send any requests.
```python
from articulo import Articulo

# Step 1: initializing Articulo instance
article = Articulo('https://info.cern.ch/')

# Step 2: requesting article properties. All properties resolve lazily.
print(article.title)       # article title as a string
print(article.text)        # article content as a string
print(article.markup)      # article content as an HTML markup string
print(article.icon)        # link to the article icon
print(article.description) # article meta description
print(article.preview)     # link to the article meta preview image
print(article.keywords)    # article meta keywords list
```
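The lazy behavior described above can be sketched with `functools.cached_property`. This is only an illustration of the deferred-fetching pattern, not articulo's actual internals; `LazyArticle` and `requests_made` are hypothetical names for demonstration.

```python
from functools import cached_property

class LazyArticle:
    """Illustrative sketch: no request happens until a property is read."""

    def __init__(self, url):
        self.url = url
        self.requests_made = 0  # counts fetches, for demonstration only

    def _fetch(self):
        # Stand-in for a real HTTP request.
        self.requests_made += 1
        return f"<html><title>Example from {self.url}</title></html>"

    @cached_property
    def markup(self):
        # First access triggers the single fetch; the result is cached.
        return self._fetch()

article = LazyArticle('https://info.cern.ch/')
print(article.requests_made)  # 0: instantiating sends no request
print(article.markup)
print(article.requests_made)  # 1: first property access fetched the page
_ = article.markup
print(article.requests_made)  # still 1: repeated access reuses the cache
```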
If you want to see the whole process, just pass the `verbose=True` parameter to the instance. It can be helpful for debugging.
```python
from articulo import Articulo

# Initializing Articulo instance with verbose mode
article = Articulo('https://info.cern.ch/', verbose=True)
```
The whole idea of parsing article content is to find the part of the document with the highest information density. To find that part, the library uses a so-called information loss coefficient. This coefficient determines the acceptable decrease in the document's text density during parsing. The default value is `0.7`, which stands for a 70% information density decrease; in most cases this works fine. Nevertheless, if you get insufficient parsing results, you can change it: just provide the `threshold` parameter to the `Articulo` instance.
```python
from articulo import Articulo

# Initializing Articulo instance with information loss coefficient of 30%
article = Articulo('https://info.cern.ch/', threshold=0.3)
```
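To make the density idea concrete, here is a rough sketch of how a density-based selection could work. This is an assumption-heavy illustration, not articulo's real algorithm: `text_density` and `densest_block` are hypothetical helpers, and the threshold is read as the maximum tolerated density loss relative to the whole document.

```python
def text_density(text, markup_len):
    """Ratio of visible text to total markup length."""
    return len(text) / markup_len if markup_len else 0.0

def densest_block(blocks, threshold=0.7):
    """Pick the longest block whose density has not dropped by more
    than `threshold` relative to the whole document's density.

    blocks: list of (text, markup) pairs."""
    doc_text = "".join(text for text, _ in blocks)
    doc_markup_len = sum(len(markup) for _, markup in blocks)
    doc_density = text_density(doc_text, doc_markup_len)
    cutoff = doc_density * (1 - threshold)  # e.g. 0.7 allows a 70% drop

    best = None
    for text, markup in blocks:
        if text_density(text, len(markup)) >= cutoff:
            if best is None or len(text) > len(best[0]):
                best = (text, markup)
    return best

blocks = [
    ("Hello world this is the article body text",
     "<p>Hello world this is the article body text</p>"),
    ("nav", "<div class='nav'><ul><li>nav</li></ul></div>"),
]
print(densest_block(blocks)[0])  # the dense article paragraph wins
```

Lowering the threshold (as in the snippet above with `threshold=0.3`) makes the filter stricter, discarding more low-density chrome such as navigation menus.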
In some cases you need to provide additional headers to get the article HTML from a URL. For that case, you can pass headers with the `http_headers` parameter when you create a new `Articulo` instance.
```python
from articulo import Articulo

# Initializing Articulo instance with custom user agent
headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36' }
article = Articulo('https://info.cern.ch/', http_headers=headers)
```
Articulo uses the `requests` library to get HTML from a URL. That library tries to guess the encoding of the response based on the HTTP headers. Although this works fine most of the time, in some cases it may not work as expected, and you'll get a mess instead of text. For that case, you can provide a custom charset with the `def_charset` parameter when you create a new `Articulo` instance.
```python
from articulo import Articulo

# Initializing Articulo instance with cp1251 charset
article = Articulo('https://info.cern.ch/', def_charset='cp1251')
```
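A quick stdlib-only illustration of why forcing a charset matters: Cyrillic text encoded as `cp1251` turns into mojibake if it is decoded with a wrongly guessed encoding such as `latin-1`, but decodes cleanly once the correct charset is supplied.

```python
# Cyrillic "Привет" ("Hello") encoded with the cp1251 codepage.
raw = "Привет".encode("cp1251")

print(raw.decode("latin-1"))  # wrong guess: mojibake
print(raw.decode("cp1251"))   # correct charset: Привет
```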