Audiomnia

A global bioacoustics map


Data provided by The Macaulay Library at the Cornell Lab of Ornithology and iNaturalist.

Table of Contents

  • Usage
  • Contributing
  • License

Usage

https://audiomnia.com

[Screenshot of the main interface]

The main interface is a simple visualization of the bioacoustic samples in the database. Drill down geographically by clicking; when the zoom level reaches its maximum, a listing of samples is displayed.

Local One-liner

To run the application locally, you can simply run:

$ npx audiomnia # or run a specific version, e.g. npx audiomnia@0.1.1

This requires Node.js, which can be installed easily using nvm.

Contributing

Issues are welcome and PRs are highly encouraged. Experience with OpenLayers, Scrapy, and front-end web development is helpful, but expertise in bioacoustics, conservation, ornithology, and marine science is needed even more.

Setting up for development

First, grab the source code and install the dependencies:

$ git clone https://github.com/audiomnia/audiomnia
$ cd audiomnia
$ npm install

Then, npm start will run the app for you and should work "out of the box."

Scrapers

Audiomnia uses Scrapy for its scrapers.

Currently, the data sets are small enough to check into the repo, so checking out the source code will also include the geojson files. However, if you're working on the scrapers, you can regenerate the data with:

npm run scrape

This is shorthand for:

cd scrapers
scrapy crawl macaulaylibrary -a MAX=50000 --loglevel WARNING

You can read the Scrapy docs to learn more about the scrapy crawl command.
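
To give a feel for what those spiders do, here is a minimal, hypothetical Scrapy spider sketch; the class name, URL, and field names are illustrative only and are not the project's actual ones. Note how the -a MAX=50000 argument from the crawl command is passed into the spider's constructor.

import scrapy

class MacaulayLibrarySpider(scrapy.Spider):
    # Hypothetical sketch only -- the real spider lives in scrapers/ and may
    # be structured quite differently.
    name = "macaulaylibrary"

    def __init__(self, MAX=1000, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # `-a MAX=50000` on the crawl command line arrives here as a string
        self.max_results = int(MAX)

    def start_requests(self):
        # Placeholder URL; the real endpoints are defined in the repo's spiders
        yield scrapy.Request("https://example.org/api/recordings", self.parse)

    def parse(self, response):
        # Emit one item per recording, capped at self.max_results
        # (the field names here are made up for illustration)
        for i, rec in enumerate(response.json().get("results", [])):
            if i >= self.max_results:
                break
            yield {
                "species": rec.get("commonName"),
                "lat": rec.get("latitude"),
                "lng": rec.get("longitude"),
                "audio": rec.get("mediaUrl"),
            }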

Scrapy cache

By default, HTTP caching is enabled in the Scrapy config. This will deposit a lot of data in ./scrapers/scrapers/.scrapy, but it makes development much easier and scraping much more polite.
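
The caching behaviour comes from Scrapy's standard HttpCacheMiddleware settings, so the project's Scrapy settings presumably include something along these lines (exact values may differ):

# Scrapy HTTP cache settings (standard options; see the Scrapy docs)
HTTPCACHE_ENABLED = True          # cache every fetched response on disk
HTTPCACHE_EXPIRATION_SECS = 0     # 0 means cached responses never expire
HTTPCACHE_DIR = "httpcache"       # lives under the project's .scrapy directory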

Tests

UI Tests using Mocha + Puppeteer are in the test/ folder.

License

GPL-3.0 © 2020 Audiomnia
