This repository is a port of unitedstates/congress. We required a versioning policy, which the unitedstates organization does not provide, so we created this repository. Most changes here still come from the main repository, and any changes we make here will most likely land in the main repository as well via PRs.
Additionally, we needed to store older data in bulk for our open source developers, so we use GitHub Actions and its cron scheduler to update our bulk data releases on a daily cycle. These releases can be found at Hear-Ye/congress-data.
Run pip install congress-crawler
After reviewing much of this repository, we found that a lot of code is missing or out of date. Be careful before using either this port or the main repository. We also want to maintain the same license as the main repository, in the spirit of open source and the public domain. We firmly believe tools are the main gears of our society, and thus this tool in particular should remain free.
The following documentation (mostly intact, with additional features and docs from our repository) comes from unitedstates/congress.
Public domain code that collects data about the bills, amendments, roll call votes, and other core data about the U.S. Congress.
Includes:
A data importing script for the official bulk bill status data from Congress, the official source of information on the life and times of legislation.
Scrapers for House and Senate roll call votes.
A document fetcher for GovInfo.gov, which holds bill text, bill status, and other official documents.
A defunct THOMAS scraper for presidential nominations in Congress.
Read about the contents and schema in the documentation in the GitHub project wiki.
For background on how this repository came to be, see Eric's blog post.
This project supports Python 3.6+.
System dependencies
On Ubuntu, you'll need wget, pip, and some support packages:
sudo apt-get install git python3-dev libxml2-dev libxslt1-dev libz-dev python3-pip python3-venv
On OS X, you'll need the developer tools installed (Xcode) and wget:
brew install wget
Python dependencies
It's recommended you use a virtualenv (virtual environment) for development. Create a virtualenv for this project:
python3 -m venv congress
source congress/bin/activate
Finally, with your virtual environment activated, install Python packages:
pip3 install -r requirements.txt
The general form to start the scraping process is:
./run <data-type> [--force] [other options]
where data-type is one of:
bills (see Bills and Amendments)
votes (see Votes)
nominations (see Nominations)
committee_meetings (see Committee Meetings)
govinfo (see Bill Text)
statutes (see Bills and Bill Text)
To get data for bills, resolutions, and amendments, run:
./run govinfo --bulkdata=BILLSTATUS
./run bills
The bills script will output bulk data into a top-level data directory, organized by Congress number, bill type, and bill number. Two data output files will be generated for each bill: a JSON version (data.json) and an XML version (data.xml).
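Once a scrape finishes, the per-bill data.json files can be consumed with nothing but the standard library. A minimal sketch, assuming field names (bill_id, official_title, status) from the GovTrack-style schema — check the project wiki for the authoritative layout:

```python
import json
from pathlib import Path

def summarize_bill(path):
    """Load a scraped bill's data.json and pull out a few headline fields.

    The field names here are assumptions based on the GovTrack-style
    schema; consult the project wiki for the real output format.
    """
    data = json.loads(Path(path).read_text())
    return {
        "id": data.get("bill_id"),
        "title": data.get("official_title"),
        "status": data.get("status"),
    }

# Inline record standing in for a real data/<congress>/bills/.../data.json file:
sample = {"bill_id": "hr1-117", "official_title": "Sample Act", "status": "INTRODUCED"}
Path("sample_data.json").write_text(json.dumps(sample))
print(summarize_bill("sample_data.json"))
# → {'id': 'hr1-117', 'title': 'Sample Act', 'status': 'INTRODUCED'}
```

In a real run you would point summarize_bill at a path under the top-level data directory instead of the stand-in file written above.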
Debugging messages are hidden by default. To include them, run with --log=info or --debug. To hide even warnings, run with --log=error.
To get emailed with errors, copy config.yml.example to config.yml and fill in the SMTP options. The script will automatically use the details when a parsing or execution error occurs.
The --force flag applies to all data types and suppresses use of a cache for network-retrieved resources.
The script will cache downloaded pages in a top-level cache directory and output bulk data in a top-level data directory.
Two bulk data output files will be generated for each object: a JSON version (data.json) and an XML version (data.xml). The XML version attempts to maintain backwards compatibility with the XML bulk data that GovTrack.us has provided for years. Add the --govtrack flag to get fully backward-compatible output using GovTrack IDs (otherwise the source IDs are used for legislators).
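The data.xml files can likewise be inspected with the standard library. This sketch makes no assumptions about the element names in the real schema (which the project wiki documents) — it only reports the root tag and counts of the root's children, and the stand-in XML written below is purely illustrative:

```python
import xml.etree.ElementTree as ET

def xml_overview(path):
    """Return the root tag of a scraped data.xml file plus a count of
    each immediate child element, without assuming any schema details."""
    root = ET.parse(path).getroot()
    counts = {}
    for child in root:
        counts[child.tag] = counts.get(child.tag, 0) + 1
    return root.tag, counts

# Stand-in file mimicking only the general shape of a bill's XML output:
with open("sample_data.xml", "w") as f:
    f.write("<bill><status>INTRODUCED</status><titles><title/><title/></titles></bill>")
print(xml_overview("sample_data.xml"))
# → ('bill', {'status': 1, 'titles': 1})
```

A schema-agnostic pass like this is a quick way to see whether the --govtrack flag changes the structure of the output for your use case.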
See the project wiki for documentation on the output format.
Pull requests with patches are awesome. Unit tests are strongly encouraged (example tests).
The best way to file a bug is to open a ticket.
To run this project's unit tests:
./test/run
The Sunlight Foundation and GovTrack.us are the two principal maintainers of this project.
Both Sunlight and GovTrack operate APIs where you can get much of this data delivered over HTTP.
This project is dedicated to the public domain. As spelled out in CONTRIBUTING:
The project is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the CC0 1.0 Universal public domain dedication.
All contributions to this project will be released under the CC0 dedication. By submitting a pull request, you are agreeing to comply with this waiver of copyright interest.