Python library (and future command-line interface) to crawl the YouTube Data API v3 in batch operations and perform other related tasks. Easier to use than the alternatives: you don't need to spend time learning the YouTube API and its caveats. With this library you can get:

- Channel IDs from channel URLs or usernames
- Channel information and channel playlists
- Videos from playlists and from searches
- Full video information in batches (50 videos per request)
- Video comments and live chat messages (including Super Chat data)
- Video transcriptions, audio and video downloads
- Video categories and the currently most popular videos per region

The library will automatically:

- Paginate over API results
- Rotate between all the API keys you provide
- Batch requests where the API allows it
- Track the quota units used during the session
pip install youtool
You may also want some extras:
pip install youtool[livechat]
pip install youtool[transcription]
Just follow the tutorial/examples below and check the help() for YouTube methods.
Note: the examples below will use 135 units of your API key quota.
from pprint import pprint
from pathlib import Path

from youtool import YouTube

api_keys = ["key1", "key2", ...]  # Create one in Google Cloud Console
yt = YouTube(api_keys, disable_ipv6=True)  # Will try all keys

channel_id_1 = yt.channel_id_from_url("https://youtube.com/c/PythonicCafe/")
print(f"Pythonic Café's channel ID (got from URL): {channel_id_1}")

channel_id_2 = yt.channel_id_from_username("turicas")
print(f"Turicas' channel ID (got from username): {channel_id_2}")

print("Playlists found on Turicas' channel (the \"uploads\" playlist is not here):")
# WARNING: this method won't return the main channel playlist ("uploads").
# If you need it, get channel info using `channels_infos` and the `playlist_id` key (or use the hack in the next
# section), so you can pass it to `playlist_videos`.
for playlist in yt.channel_playlists(channel_id_2):
    # `playlist` is a `dict`
    print(f"Playlist: {playlist}")
    for video in yt.playlist_videos(playlist["id"]):
        # `video` is a `dict`, but this endpoint doesn't provide full video information (use `videos_infos` to get them)
        print(f"  Video: {video}")
print("-" * 80)
# Hack: replace `UC` with `UU` on channel ID to get main playlist ID ("uploads"):
assert channel_id_1[:2] == "UC"
print("Last 3 uploads for Pythonic Café:")
for index, video in enumerate(yt.playlist_videos("UU" + channel_id_1[2:])):
    # `video` is a `dict`, but this endpoint doesn't provide full video information (use `videos_infos` to get them)
    print(f"  Video: {video}")
    if index == 2:  # First 3 results only
        break
print("-" * 80)
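The UC→UU hack above is a pure string operation, so it can be captured in a tiny helper. This is a hypothetical convenience function (not part of youtool's API), shown only to make the hack explicit:

```python
def uploads_playlist_id(channel_id: str) -> str:
    """Turn a channel ID ("UC...") into its "uploads" playlist ID ("UU...").

    Hypothetical helper - not part of youtool; it just captures the hack above.
    """
    if not channel_id.startswith("UC"):
        raise ValueError(f"unexpected channel ID prefix: {channel_id[:2]!r}")
    return "UU" + channel_id[2:]
```

You could then call `yt.playlist_videos(uploads_playlist_id(channel_id_1))` instead of building the string inline.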
print("5 videos found on search:")
# `video_search` has many other parameters also!
# WARNING: each request made by this method will consume 100 units of your quota (out of 10k daily!)
for index, video in enumerate(yt.video_search(term="Álvaro Justen")):  # Will paginate automatically
    # `video` is a `dict`, but this endpoint doesn't provide full video information (use `videos_infos` to get them)
    print(f"  Video: {video}")
    if index == 4:  # First 5 results only
        break
print("-" * 80)
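Because each `video_search` request costs 100 quota units, you can estimate what a search will burn before running it. A rough back-of-the-envelope helper, assuming the API's documented 100-units-per-request cost and up to 50 results per page (this helper is illustrative, not part of youtool):

```python
import math

def search_quota_cost(n_results: int, page_size: int = 50, cost_per_request: int = 100) -> int:
    """Estimate quota units consumed to fetch `n_results` search results.

    Assumes each search.list request costs 100 units and returns up to
    `page_size` results (the API's documented defaults).
    """
    if n_results <= 0:
        return 0
    return math.ceil(n_results / page_size) * cost_per_request

print(search_quota_cost(5))    # 100 (one page)
print(search_quota_cost(120))  # 300 (three pages)
```

Even the 5-result search above costs a full 100 units, so breaking out of the loop early saves time but not quota for the first page.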
# The method below can be used to get information in batches (50 videos per request) - you can pass a list of video IDs
# (more than 50) and it'll get data in batches from the API.
last_video = list(yt.videos_infos([video["id"]]))[0]
print("Complete information for last video:")
pprint(last_video)
print("-" * 80)
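The 50-IDs-per-request batching that `videos_infos` performs is a common pattern; a minimal sketch of how such batching typically works (illustrative only, youtool's internals may differ):

```python
from itertools import islice

def chunked(iterable, size=50):
    """Yield lists of at most `size` items from `iterable`."""
    iterator = iter(iterable)
    while batch := list(islice(iterator, size)):
        yield batch

# 120 fake IDs would be fetched in 3 API requests: 50 + 50 + 20
fake_ids = [f"video-{n}" for n in range(120)]
print([len(batch) for batch in chunked(fake_ids)])  # [50, 50, 20]
```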
print("Channel information (2 channels in one request):")
for channel in yt.channels_infos([channel_id_1, channel_id_2]):
    # `channel` is a `dict`
    print(channel)
print("-" * 80)
video_id = "b1FjmUzgFB0"
print(f"Comments for video {video_id}:")
for comment in yt.video_comments(video_id):
    # `comment` is a `dict`
    print(comment)
print("-" * 80)
live_video_id = "yyzIPQsa98A"
print(f"Live chat for video {live_video_id}:")
for chat_message in yt.video_livechat(live_video_id):
    # `chat_message` is a `dict`
    print(chat_message)  # It has the superchat information (`money_currency` and `money_amount` keys)
print("-" * 80)
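Since each chat message `dict` carries `money_currency` and `money_amount`, totalling Super Chat donations is a simple filter-and-sum. The sample messages below are made up; only those two key names come from the comment above:

```python
# Hypothetical chat messages - only `money_currency`/`money_amount` are
# assumed from youtool's output; everything else here is illustrative.
messages = [
    {"text": "hello!", "money_currency": None, "money_amount": None},
    {"text": "great stream", "money_currency": "BRL", "money_amount": 10.0},
    {"text": "obrigado!", "money_currency": "BRL", "money_amount": 5.5},
]
total = sum(m["money_amount"] for m in messages if m["money_amount"])
print(f"Super Chat total: {total:.2f}")  # Super Chat total: 15.50
```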
download_path = Path("transcriptions")
if not download_path.exists():
    download_path.mkdir(parents=True)
print(f"Downloading Portuguese (pt) transcriptions for videos {video_id} and {live_video_id} - saving at {download_path.absolute()}")
for downloaded in yt.download_transcriptions([video_id, live_video_id], language_code="pt", path=download_path):
    vid, status, filename = downloaded["video_id"], downloaded["status"], downloaded["filename"]
    if status == "error":
        print(f"  {vid}: error downloading!")
    elif status == "skipped":
        print(f"  {vid}: skipped, file already exists ({filename.name}: {filename.stat().st_size / 1024:.1f} KiB)")
    elif status == "done":
        print(f"  {vid}: done ({filename.name}: {filename.stat().st_size / 1024:.1f} KiB)")
print("-" * 80)
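When downloading transcriptions for many videos, tallying the yielded `status` values gives a quick end-of-run summary. A sketch using made-up results (only the `status`/`video_id` keys are assumed, matching the loop above):

```python
from collections import Counter

# Made-up results in the same shape as the loop above consumes
results = [
    {"video_id": "a", "status": "done"},
    {"video_id": "b", "status": "skipped"},
    {"video_id": "c", "status": "done"},
]
summary = Counter(r["status"] for r in results)
print(dict(summary))  # {'done': 2, 'skipped': 1}
```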
# You can also download audio and video, just replace `download_transcriptions` with `download_audios` or
# `download_videos`. As simple as that. :)

print("Categories in Brazilian YouTube:")
for category in yt.categories(region_code="BR"):
    # `category` is a `dict`
    print(category)
print("-" * 80)

print("Current most popular videos in Brazil:")
for video in yt.most_popular(region_code="BR"):  # Will paginate automatically
    # `video` is a `dict`, but this endpoint doesn't provide full video information (use `videos_infos` to get them)
    print(f"{video['id']} {video['title']}")
print("-" * 80)
print("Total quota used during this session:")
total_used = 0
for method, units_used in yt.used_quota.items():
    print(f"{method:20}: {units_used:05d} unit{'' if units_used == 1 else 's'}")
    total_used += units_used
print(f"TOTAL               : {total_used:05d} unit{'' if total_used == 1 else 's'}")
To run all tests, execute:
make test
Pull requests are welcome! :)
License: GNU Lesser General Public License (LGPL), version 3.
This project was developed in a partnership between Pythonic Café and Novelo Data.