
Unofficial library to scrape Twitter profiles and posts from Nitter instances
Twitter has recently made changes that affected every third-party Twitter client, including Nitter. As a result, most Nitter instances have shut down or will shut down shortly. Even local instances are affected, so you may not be able to scrape as many tweets as expected, if at all.
This is a simple library to scrape Nitter instances for tweets. It can:
search and scrape tweets with a certain term
search and scrape tweets with a certain hashtag
scrape tweets from a user profile
get profile information of a user, such as display name, username, number of tweets, profile picture, and more
If the instance to use is not provided to the scraper, it will use a random public instance. If you can, please host your own instance to avoid overloading the public ones and to help keep Nitter alive for everyone. You can read more about that here: https://github.com/zedeus/nitter#installation.
To install the library, run:
pip install ntscraper
First, initialize the library:
from ntscraper import Nitter
scraper = Nitter(log_level=1, skip_instance_check=False)
The valid logging levels are:
The skip_instance_check parameter is used to skip the check of the Nitter instances altogether during the execution of the script. If you use your own instance or trust the instance you are relying on, you can set it to True; otherwise it is better to leave it set to False.
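For example, if you host your own instance or otherwise trust the one you are using, the check can be skipped at initialization. This is a minimal sketch based on the constructor call shown above:

from ntscraper import Nitter

# Skip the instance health check because the instance in use is trusted
scraper = Nitter(log_level=1, skip_instance_check=True)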
Then, choose the proper function for what you want to do from the following.
# Scrape tweets containing the hashtag #github
github_hash_tweets = scraper.get_tweets("github", mode='hashtag')

# Scrape tweets from the profile of @JeffBezos
bezos_tweets = scraper.get_tweets("JeffBezos", mode='user')
Parameters:
Returns a dictionary with tweets and threads for the term.
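A plain term search uses mode='term', the same mode used in the multiprocessing example below. A minimal sketch follows; the 'tweets' and 'threads' keys are assumptions based on the returned dictionary described above:

# Search tweets containing the term "github"
github_term_tweets = scraper.get_tweets("github", mode='term')

# 'tweets' and 'threads' keys are assumed from the description of the returned dictionary
for tweet in github_term_tweets.get('tweets', []):
    print(tweet)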
You can also scrape multiple terms at once using multiprocessing:
terms = ["github", "bezos", "musk"]
results = scraper.get_tweets(terms, mode='term')
Each term will be scraped in a different process. The result will be a list of dictionaries, one for each term.
The multiprocessing code needs to run in an if __name__ == "__main__" block to avoid errors, as shown in the sketch below. With multiprocessing, only full logging is supported, and the number of processes is limited to the number of available cores on your machine.
NOTE: using multiprocessing on public instances is highly discouraged since it puts too much load on the servers and could potentially also get you rate limited. Please only use it on your local instance.
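A minimal sketch of the required if __name__ == "__main__" guard; the scraper initialization mirrors the constructor shown earlier:

from ntscraper import Nitter

def main():
    scraper = Nitter(log_level=1)
    terms = ["github", "bezos", "musk"]
    # One process per term; returns a list of dictionaries, one per term
    results = scraper.get_tweets(terms, mode='term')
    for result in results:
        print(result)

# The guard is required so child processes do not re-execute the module-level code
if __name__ == "__main__":
    main()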
# Get a single tweet by username and tweet ID
tweet = scraper.get_tweet_by_id("x", "1826317783430303888")
Parameters:
Returns a dictionary with the tweet's content.
# Get profile information for the @JeffBezos account
bezos_information = scraper.get_profile_info("JeffBezos")
Parameters:
Returns a dictionary of the profile's information.
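A minimal sketch of reading the result; the exact keys of the returned dictionary are not listed above, so the whole dictionary is iterated rather than assuming specific field names:

# bezos_information comes from the get_profile_info call above
for field, value in bezos_information.items():
    print(f"{field}: {value}")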
As for the term scraping, you can also get info from multiple profiles at once using multiprocessing:
usernames = ["x", "github"]
results = scraper.get_profile_info(usernames)
Each user will be scraped in a different process. The result will be a list of dictionaries, one for each user.
The multiprocessing code needs to run in an if __name__ == "__main__" block to avoid errors, as shown in the sketch below. With multiprocessing, only full logging is supported, and the number of processes is limited to the number of available cores on your machine.
NOTE: using multiprocessing on public instances is highly discouraged since it puts too much load on the servers and could potentially also get you rate limited. Please only use it on your local instance.
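The same if __name__ == "__main__" pattern applies here; a minimal sketch mirroring the one above:

from ntscraper import Nitter

def main():
    scraper = Nitter(log_level=1)
    usernames = ["x", "github"]
    # One process per username; returns a list of dictionaries, one per user
    profiles = scraper.get_profile_info(usernames)
    for profile in profiles:
        print(profile)

if __name__ == "__main__":
    main()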
random_instance = scraper.get_random_instance()
Returns a random Nitter instance.
Due to recent changes on Twitter's side, some Nitter instances may not work properly even if they are marked as "working" on Nitter's wiki. If you have trouble scraping with a certain instance, try a different one and check whether the problem persists.
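For example, you can pick a fresh random instance and pass it to a scraping call. Note that the instance keyword argument below is an assumption: the text above only says an instance can be provided to the scraper without naming the parameter, so check the library's documentation for the exact name:

# Pick a random public Nitter instance
random_instance = scraper.get_random_instance()

# 'instance' is an assumed parameter name for providing the instance to use
tweets = scraper.get_tweets("github", mode='term', instance=random_instance)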