
A mastodon reader client that uses embeddings to present a consolidated view of my mastodon timeline
A mastodon client optimized for reading, with a configurable and hackable timeline algorithm powered by Simon Willison's llm tool. Try making your own algorithm!
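The core idea, grouping similar toots by embedding distance, can be sketched in a few lines of Python. This is a hypothetical illustration, not fossil's actual implementation: the real app gets embeddings from the `llm` tool, whereas here `embed` is a toy stand-in and the threshold is arbitrary.

```python
import math

# Toy stand-in for a real embedding model (fossil uses `llm` for this).
# A real embedding is a high-dimensional vector derived from the text.
def embed(text: str) -> list[float]:
    topics = [["rust", "compiler"], ["cat", "kitten"], ["election", "vote"]]
    return [sum(text.lower().count(w) for w in words) for words in topics]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def group_toots(toots: list[str], threshold: float = 0.9) -> list[list[str]]:
    """Greedily cluster toots whose embeddings point in similar directions."""
    clusters: list[tuple[list[float], list[str]]] = []
    for toot in toots:
        vec = embed(toot)
        for center, members in clusters:
            if cosine(center, vec) >= threshold:
                members.append(toot)
                break
        else:
            clusters.append((vec, [toot]))
    return [members for _, members in clusters]

toots = ["My cat met a kitten", "Rust compiler update", "Another cat photo"]
print(group_toots(toots))
# → [['My cat met a kitten', 'Another cat photo'], ['Rust compiler update']]
```

A "consolidated view" then only needs to show one representative toot per cluster instead of every near-duplicate.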
Sneak peek:
I highly suggest not installing any Python app directly into your global Python. Create a virtual environment:
python -m venv fossil
And then activate it:
source fossil/bin/activate
Alternatively, use pipx:
pip install pipx
pipx install fossil-mastodon
Clone this repo:
git clone https://github.com/tkellogg/fossil.git
And then `cd fossil` to get into the correct directory.
Before that, you'll need a `.env` file with these keys:
ACCESS_TOKEN=
Alternatively, you can set them as environment variables. All available keys are here:
| Variable | Required? | Value |
|---|---|---|
| OPENAI_API_BASE | no | e.g. https://api.openai.com/v1 |
| MASTO_BASE | no? | e.g. https://hackyderm.io |
| ACCESS_TOKEN | yes | In your mastodon UI, create a new "app" and copy the access token here |
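Putting the table together, setting the variables in your shell might look like this (the values below are placeholders; substitute your own server URL and token):

```shell
# Hypothetical example values -- replace with your own.
export MASTO_BASE="https://hackyderm.io"
export ACCESS_TOKEN="paste-your-app-access-token-here"
# Optional: only needed for a non-default OpenAI-compatible endpoint.
export OPENAI_API_BASE="https://api.openai.com/v1"
```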
To get `MASTO_BASE` and `ACCESS_TOKEN`:

1. In your mastodon UI, create a new "app".
2. Set the redirect URI to `urn:ietf:wg:oauth:2.0:oob`.
3. Give it the `read` and `write` scopes (contribution idea: figure out what's strictly necessary and send a pull request to update this).
4. Copy the access token into `ACCESS_TOKEN` in the `.env` file.
5. For `MASTO_BASE`, you should be able to copy the URL from your browser and then remove the entire path (everything after `/`, inclusive).

Models can be configured and/or added via `llm`.
Here's how to set your OpenAI API key, which gives you access to OpenAI models:
$ llm keys set openai
Enter key: ...
You will need to install an embedding model and a large language model. The instructions here use the `llm-sentence-transformers` and `llm-gpt4all` plugins to do so.
$ llm install llm-sentence-transformers # An Embedding Model Plugin
$ llm install llm-gpt4all # A Large Language Model Plugin
$ llm sentence-transformers register all-mpnet-base-v2 --alias mpnet # Download/Register one of the Embedding Models
The first time you use a model, `llm` will need to download it, which adds to the overall processing time.

If you installed from PyPI:
uvicorn --host 0.0.0.0 --port 8888 fossil_mastodon.server:app
If you installed from source:
poetry run uvicorn --host 0.0.0.0 --port 8888 --reload fossil_mastodon.server:app
If you're working on CSS or HTML files, you should include them:
poetry run uvicorn --host 0.0.0.0 --port 8888 --reload --reload-include '*.html' --reload-include '*.css' fossil_mastodon.server:app
(Note: `--reload` makes it much easier to develop, but is generally unnecessary if you're not developing.)