mbutil - npm Package Compare versions

Comparing version 0.2.0 to 0.3.0
MANIFEST.in (+1)
include *.md
Metadata-Version: 1.0
Name: mbutil
Version: 0.3.0
Summary: An importer and exporter for MBTiles
Home-page: https://github.com/mapbox/mbutil
Author: Tom MacWright
Author-email: tom@macwright.org
License: LICENSE.md
Description: # MBUtil
MBUtil is a utility for importing and exporting the [MBTiles](http://mbtiles.org/) format,
typically created with [Mapbox](http://mapbox.com/) [TileMill](http://mapbox.com/tilemill/).
Before exporting tiles to disk, see if there's a [Mapbox Hosting plan](http://mapbox.com/plans/)
or an open source [MBTiles server implementation](https://github.com/mapbox/mbtiles-spec/wiki/Implementations)
that works for you - tiles on disk are notoriously difficult to manage.
[![Build Status](https://secure.travis-ci.org/mapbox/mbutil.png)](http://travis-ci.org/mapbox/mbutil)
## Installation

Git checkout (requires git):

```
git clone git://github.com/mapbox/mbutil.git
cd mbutil
# get usage
./mb-util -h
```

Then, to install the mb-util command globally:

```
sudo python setup.py install
# then you can run:
mb-util
```

Python installation (requires easy_install):

```
easy_install mbutil
mb-util -h
```
## Usage

```
$ mb-util -h
Usage: mb-util [options] input output

Examples:

    Export an mbtiles file to a directory of files:
    $ mb-util world.mbtiles tiles # tiles must not already exist

    Import a directory of tiles into an mbtiles file:
    $ mb-util tiles world.mbtiles # mbtiles file must not already exist

Options:
  -h, --help            Show this help message and exit
  --scheme=SCHEME       Tiling scheme of the tiles. Default is "xyz" (z/x/y);
                        other options are "tms", which is also z/x/y but uses
                        a flipped y coordinate, and "wms", which replicates the
                        MapServer WMS TileCache directory structure
                        "z/000/000/x/000/000/y.png"
  --image_format=FORMAT
                        The format of the image tiles: png, jpg, webp or pbf
  --grid_callback=CALLBACK
                        Option to control the JSONP callback for UTFGrid
                        tiles. If grids are not used as JSONP, you can remove
                        callbacks by specifying --grid_callback=""
  --do_compression      Do mbtiles compression
  --silent              Dictate whether the operations should run silently
```

Export an `mbtiles` file to files on the filesystem:

```
mb-util World_Light.mbtiles adirectory
```

Import a directory into an `mbtiles` file:

```
mb-util directory World_Light.mbtiles
```
## Requirements
* Python `>= 2.6`
## Metadata
MBUtil imports and exports metadata as JSON, in the root of the tile directory, as a file named `metadata.json`.
```javascript
{
"name": "World Light",
"description": "A Test Metadata",
"version": "3"
}
```
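The round trip can be sketched with the standard library alone (the directory path is illustrative, not a fixed mbutil location):

```python
import json
import os
import tempfile

# mb-util keeps metadata as JSON in metadata.json at the root of the tile
# directory; this sketch writes the example above and reads it back.
metadata = {"name": "World Light", "description": "A Test Metadata", "version": "3"}
tile_dir = tempfile.mkdtemp()  # stand-in for an exported tile directory
with open(os.path.join(tile_dir, "metadata.json"), "w") as f:
    json.dump(metadata, f)
with open(os.path.join(tile_dir, "metadata.json")) as f:
    restored = json.load(f)
```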
## Testing

This project uses [nosetests](http://readthedocs.org/docs/nose/en/latest/) for testing. Install nosetests:

```
pip install nose
```

or

```
easy_install nose
```

Then run:

```
nosetests
```
## See Also
* [node-mbtiles provides mbpipe](https://github.com/mapbox/node-mbtiles/wiki/Post-processing-MBTiles-with-MBPipe), a useful utility.
* [mbliberator](https://github.com/calvinmetcalf/mbliberator) a similar program but in node.
## License
BSD - see LICENSE.md
## Authors
- Tom MacWright (tmcw)
- Dane Springmeyer (springmeyer)
- Mathieu Leplatre (leplatrem)
Platform: UNKNOWN
LICENSE.md
MANIFEST.in
README.md
mb-util
setup.py
mbutil/__init__.py
mbutil/util.py
mbutil.egg-info/PKG-INFO
mbutil.egg-info/SOURCES.txt
mbutil.egg-info/dependency_links.txt
mbutil.egg-info/top_level.txt
test/test.py
[egg_info]
tag_build =
tag_date = 0
+1 -1

@@ -1,1 +0,1 @@

from util import *
from mbutil.util import *

@@ -12,3 +12,3 @@ #!/usr/bin/env python

import sqlite3, uuid, sys, logging, time, os, json, zlib
import sqlite3, sys, logging, time, os, json, zlib, re

@@ -30,2 +30,6 @@ logger = logging.getLogger(__name__)

(name text, value text);""")
cur.execute("""CREATE TABLE grids (zoom_level integer, tile_column integer,
tile_row integer, grid blob);""")
cur.execute("""CREATE TABLE grid_data (zoom_level integer, tile_column
integer, tile_row integer, key_name text, key_json text);""")
cur.execute("""create unique index name on metadata (name);""")

@@ -35,9 +39,10 @@ cur.execute("""create unique index tile_index on tiles

def mbtiles_connect(mbtiles_file):
def mbtiles_connect(mbtiles_file, silent):
try:
con = sqlite3.connect(mbtiles_file)
return con
except Exception, e:
logger.error("Could not connect to database")
logger.exception(e)
except Exception as e:
if not silent:
logger.error("Could not connect to database")
logger.exception(e)
sys.exit(1)
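Taken on its own, the new connection helper is plain Python 3 (the default value for `silent` is added here for convenience; mbutil passes it positionally):

```python
import logging
import sqlite3
import sys

logger = logging.getLogger(__name__)

def mbtiles_connect(mbtiles_file, silent=False):
    # "except Exception as e" is the Python 3 spelling of the removed
    # "except Exception, e"; silent suppresses the error logging.
    try:
        return sqlite3.connect(mbtiles_file)
    except Exception as e:
        if not silent:
            logger.error("Could not connect to database")
            logger.exception(e)
        sys.exit(1)

con = mbtiles_connect(":memory:")  # in-memory database, for illustration
```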

@@ -50,7 +55,9 @@

def compression_prepare(cur, con):
def compression_prepare(cur, silent):
if not silent:
logger.debug('Prepare database compression.')
cur.execute("""
CREATE TABLE if not exists images (
tile_data blob,
tile_id VARCHAR(256));
tile_id integer);
""")

@@ -62,12 +69,22 @@ cur.execute("""

tile_row integer,
tile_id VARCHAR(256));
tile_id integer);
""")
def optimize_database(cur):
logger.debug('analyzing db')
def optimize_database(cur, silent):
if not silent:
logger.debug('analyzing db')
cur.execute("""ANALYZE;""")
logger.debug('cleaning db')
if not silent:
logger.debug('cleaning db')
# Workaround for python>=3.6.0,python<3.6.2
# https://bugs.python.org/issue28518
cur.isolation_level = None
cur.execute("""VACUUM;""")
cur.isolation_level = '' # reset default value of isolation_level
def compression_do(cur, con, chunk):
def compression_do(cur, con, chunk, silent):
if not silent:
logger.debug('Making database compression.')
overlapping = 0
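The `isolation_level` juggling in `optimize_database` works around CPython issue 28518 (Python 3.6.0 and 3.6.1), where `VACUUM` fails inside the implicit transaction the `sqlite3` module opens. Note the function is later called as `optimize_database(con, silent)`, so the attribute is actually set on the connection. In isolation:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tiles (zoom_level integer, tile_data blob);")
# Setting isolation_level to None switches to autocommit, so VACUUM runs
# outside a transaction (https://bugs.python.org/issue28518).
con.isolation_level = None
con.execute("ANALYZE;")
con.execute("VACUUM;")
con.isolation_level = ""  # restore the default implicit-transaction mode
```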

@@ -79,4 +96,5 @@ unique = 0

total_tiles = res[0]
last_id = 0
logging.debug("%d total tiles to fetch" % total_tiles)
for i in range(total_tiles / chunk + 1):
for i in range(total_tiles // chunk + 1):
logging.debug("%d / %d rounds done" % (i, (total_tiles / chunk)))
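The `/` to `//` change is required on Python 3, where true division always returns a float and `range()` rejects it:

```python
total_tiles, chunk = 1000, 256

# Floor division keeps the loop bound an integer on Python 3;
# range(total_tiles / chunk + 1) would raise TypeError there.
rounds = total_tiles // chunk + 1
for i in range(rounds):
    pass
```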

@@ -102,5 +120,5 @@ ids = []

unique = unique + 1
id = str(uuid.uuid4())
last_id += 1
ids.append(id)
ids.append(last_id)
files.append(r[3])

@@ -112,3 +130,3 @@

values (?, ?)"""
cur.execute(query, (str(id), sqlite3.Binary(r[3])))
cur.execute(query, (str(last_id), sqlite3.Binary(r[3])))
logger.debug("insert into images: %s" % (time.time() - start))

@@ -119,3 +137,3 @@ start = time.time()

values (?, ?, ?, ?)"""
cur.execute(query, (r[0], r[1], r[2], id))
cur.execute(query, (r[0], r[1], r[2], last_id))
logger.debug("insert into map: %s" % (time.time() - start))

@@ -125,2 +143,3 @@ con.commit()

def compression_finalize(cur):
logger.debug('Finalizing database compression.')
cur.execute("""drop table tiles;""")

@@ -142,3 +161,3 @@ cur.execute("""create view tiles as

def getDirs(path):
def get_dirs(path):
return [name for name in os.listdir(path)

@@ -148,5 +167,10 @@ if os.path.isdir(os.path.join(path, name))]

def disk_to_mbtiles(directory_path, mbtiles_file, **kwargs):
logger.info("Importing disk to MBTiles")
logger.debug("%s --> %s" % (directory_path, mbtiles_file))
con = mbtiles_connect(mbtiles_file)
silent = kwargs.get('silent')
if not silent:
logger.info("Importing disk to MBTiles")
logger.debug("%s --> %s" % (directory_path, mbtiles_file))
con = mbtiles_connect(mbtiles_file, silent)
cur = con.cursor()

@@ -157,3 +181,3 @@ optimize_connection(cur)

image_format = kwargs.get('format', 'png')
grid_warning = True
try:

@@ -165,5 +189,7 @@ metadata = json.load(open(os.path.join(directory_path, 'metadata.json'), 'r'))

(name, value))
logger.info('metadata from metadata.json restored')
if not silent:
logger.info('metadata from metadata.json restored')
except IOError:
logger.warning('metadata.json not found')
if not silent:
logger.warning('metadata.json not found')

@@ -174,49 +200,96 @@ count = 0

for zoomDir in getDirs(directory_path):
for zoom_dir in get_dirs(directory_path):
if kwargs.get("scheme") == 'ags':
if not "L" in zoomDir:
logger.warning("You appear to be using an ags scheme on an non-arcgis Server cache.")
z = int(zoomDir.replace("L", ""))
if not "L" in zoom_dir:
if not silent:
logger.warning("You appear to be using an ags scheme on an non-arcgis Server cache.")
z = int(zoom_dir.replace("L", ""))
elif kwargs.get("scheme") == 'gwc':
z=int(zoom_dir[-2:])
else:
if "L" in zoomDir:
logger.warning("You appear to be using a %s scheme on an arcgis Server cache. Try using --scheme=ags instead" % kwargs.get("scheme"))
z = int(zoomDir)
for rowDir in getDirs(os.path.join(directory_path, zoomDir)):
if "L" in zoom_dir:
if not silent:
logger.warning("You appear to be using a %s scheme on an arcgis Server cache. Try using --scheme=ags instead" % kwargs.get("scheme"))
z = int(zoom_dir)
for row_dir in get_dirs(os.path.join(directory_path, zoom_dir)):
if kwargs.get("scheme") == 'ags':
y = flip_y(z, int(rowDir.replace("R", ""), 16))
y = flip_y(z, int(row_dir.replace("R", ""), 16))
elif kwargs.get("scheme") == 'gwc':
pass
else:
x = int(rowDir)
for imageFile in os.listdir(os.path.join(directory_path, zoomDir, rowDir)):
img, ext = imageFile.split('.', 1)
if (ext == image_format):
f = open(os.path.join(directory_path, zoomDir, rowDir, imageFile), 'rb')
x = int(row_dir)
for current_file in os.listdir(os.path.join(directory_path, zoom_dir, row_dir)):
if current_file == ".DS_Store" and not silent:
logger.warning("Your OS is MacOS,and the .DS_Store file will be ignored.")
else:
file_name, ext = current_file.split('.',1)
f = open(os.path.join(directory_path, zoom_dir, row_dir, current_file), 'rb')
file_content = f.read()
f.close()
if kwargs.get('scheme') == 'xyz':
y = flip_y(int(z), int(img))
y = flip_y(int(z), int(file_name))
elif kwargs.get("scheme") == 'ags':
x = int(img.replace("C", ""), 16)
x = int(file_name.replace("C", ""), 16)
elif kwargs.get("scheme") == 'gwc':
x, y = file_name.split('_')
x = int(x)
y = int(y)
else:
y = int(img)
y = int(file_name)
logger.debug(' Read tile from Zoom (z): %i\tCol (x): %i\tRow (y): %i' % (z, x, y))
cur.execute("""insert into tiles (zoom_level,
tile_column, tile_row, tile_data) values
(?, ?, ?, ?);""",
(z, x, y, sqlite3.Binary(f.read())))
f.close()
count = count + 1
if (count % 100) == 0:
for c in msg: sys.stdout.write(chr(8))
msg = "%s tiles inserted (%d tiles/sec)" % (count, count / (time.time() - start_time))
sys.stdout.write(msg)
elif (ext == 'grid.json'):
if grid_warning:
logger.warning('grid.json interactivity import not yet supported\n')
grid_warning= False
logger.debug('tiles inserted.')
optimize_database(con)
if (ext == image_format):
if not silent:
logger.debug(' Read tile from Zoom (z): %i\tCol (x): %i\tRow (y): %i' % (z, x, y))
cur.execute("""insert into tiles (zoom_level,
tile_column, tile_row, tile_data) values
(?, ?, ?, ?);""",
(z, x, y, sqlite3.Binary(file_content)))
count = count + 1
if (count % 100) == 0:
for c in msg: sys.stdout.write(chr(8))
msg = "%s tiles inserted (%d tiles/sec)" % (count, count / (time.time() - start_time))
sys.stdout.write(msg)
elif (ext == 'grid.json'):
if not silent:
logger.debug(' Read grid from Zoom (z): %i\tCol (x): %i\tRow (y): %i' % (z, x, y))
# Remove potential callback with regex
file_content = file_content.decode('utf-8')
has_callback = re.match(r'[\w\s=+-/]+\(({(.|\n)*})\);?', file_content)
if has_callback:
file_content = has_callback.group(1)
utfgrid = json.loads(file_content)
data = utfgrid.pop('data')
compressed = zlib.compress(json.dumps(utfgrid).encode())
cur.execute("""insert into grids (zoom_level, tile_column, tile_row, grid) values (?, ?, ?, ?) """, (z, x, y, sqlite3.Binary(compressed)))
grid_keys = [k for k in utfgrid['keys'] if k != ""]
for key_name in grid_keys:
key_json = data[key_name]
cur.execute("""insert into grid_data (zoom_level, tile_column, tile_row, key_name, key_json) values (?, ?, ?, ?, ?);""", (z, x, y, key_name, json.dumps(key_json)))
if not silent:
logger.debug('tiles (and grids) inserted.')
if kwargs.get('compression', False):
compression_prepare(cur)
compression_do(cur, con, 256, silent)
compression_finalize(cur)
optimize_database(con, silent)
def mbtiles_metadata_to_disk(mbtiles_file, **kwargs):
silent = kwargs.get('silent')
if not silent:
logger.debug("Exporting MBTiles metatdata from %s" % (mbtiles_file))
con = mbtiles_connect(mbtiles_file, silent)
metadata = dict(con.execute('select name, value from metadata;').fetchall())
if not silent:
logger.debug(json.dumps(metadata, indent=2))
def mbtiles_to_disk(mbtiles_file, directory_path, **kwargs):
logger.debug("Exporting MBTiles to disk")
logger.debug("%s --> %s" % (mbtiles_file, directory_path))
con = mbtiles_connect(mbtiles_file)
silent = kwargs.get('silent')
if not silent:
logger.debug("Exporting MBTiles to disk")
logger.debug("%s --> %s" % (mbtiles_file, directory_path))
con = mbtiles_connect(mbtiles_file, silent)
os.mkdir("%s" % directory_path)
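The callback-stripping step in the import hunk above unwraps UTFGrid tiles that were stored as JSONP before parsing them as JSON; in isolation, with a made-up grid payload:

```python
import json
import re

# A UTFGrid tile saved as JSONP: "grid({...});" (the payload is illustrative).
file_content = 'grid({"keys": ["", "77"], "data": {}});'
has_callback = re.match(r'[\w\s=+-/]+\(({(.|\n)*})\);?', file_content)
if has_callback:
    file_content = has_callback.group(1)  # keep only the bare JSON object
utfgrid = json.loads(file_content)
```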

@@ -235,5 +308,5 @@ metadata = dict(con.execute('select name, value from metadata;').fetchall())

if formatter:
layer_json = os.path.join(base_path,'layer.json')
layer_json = os.path.join(base_path, 'layer.json')
formatter_json = {"formatter":formatter}
open(layer_json,'w').write('grid(' + json.dumps(formatter_json) + ')')
open(layer_json, 'w').write(json.dumps(formatter_json))

@@ -248,3 +321,4 @@ tiles = con.execute('select zoom_level, tile_column, tile_row, tile_data from tiles;')

y = flip_y(z,y)
print 'flipping'
if not silent:
logger.debug('flipping')
tile_dir = os.path.join(base_path, str(z), str(x))
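`flip_y` converts a row index between the xyz and tms numbering schemes by mirroring it at each zoom level; a sketch of the standard identity, assumed here to match mbutil's helper of the same name:

```python
def flip_y(zoom, y):
    # tms counts rows from the bottom of the tile grid, xyz from the top;
    # at zoom z there are 2**z rows, so the mirror is (2**z - 1) - y.
    return (2 ** zoom) - 1 - y
```

The function is its own inverse: `flip_y(z, flip_y(z, y)) == y`.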

@@ -272,3 +346,4 @@ elif kwargs.get('scheme') == 'wms':

for c in msg: sys.stdout.write(chr(8))
logger.info('%s / %s tiles exported' % (done, count))
if not silent:
logger.info('%s / %s tiles exported' % (done, count))
t = tiles.fetchone()

@@ -302,3 +377,3 @@

f = open(grid, 'w')
grid_json = json.loads(zlib.decompress(g[3]))
grid_json = json.loads(zlib.decompress(g[3]).decode('utf-8'))
# join up with the grid 'data' which is in pieces when stored in mbtiles file

@@ -311,3 +386,3 @@ grid_data = grid_data_cursor.fetchone()

grid_json['data'] = data
if callback in ("", "false", "null"):
if callback in (None, "", "false", "null"):
f.write(json.dumps(grid_json))
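On export, the widened membership test treats a missing callback (`None`) the same as `""`, `"false"`, and `"null"`, writing bare JSON rather than a JSONP wrapper; sketched with a hypothetical `render_grid` helper:

```python
import json

def render_grid(grid_json, callback):
    # None, "", "false" and "null" all mean "no JSONP wrapper".
    if callback in (None, "", "false", "null"):
        return json.dumps(grid_json)
    return "%s(%s);" % (callback, json.dumps(grid_json))

bare = render_grid({"keys": [], "data": {}}, None)
wrapped = render_grid({"keys": [], "data": {}}, "foo")
```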

@@ -319,3 +394,4 @@ else:

for c in msg: sys.stdout.write(chr(8))
logger.info('%s / %s grids exported' % (done, count))
if not silent:
logger.info('%s / %s grids exported' % (done, count))
g = grids.fetchone()
+34 -13
Metadata-Version: 1.0
Name: mbutil
Version: 0.2.0
Version: 0.3.0
Summary: An importer and exporter for MBTiles

@@ -12,8 +12,10 @@ Home-page: https://github.com/mapbox/mbutil

MBUtil is a utility for importing and exporting the [MBTiles](http://mbtiles.org/) format,
typically created with [MapBox](http://mapbox.com/) [TileMill](http://mapbox.com/tilemill/).
typically created with [Mapbox](http://mapbox.com/) [TileMill](http://mapbox.com/tilemill/).
Before exporting tiles to disk, see if there's a [MapBox Hosting plan](http://mapbox.com/plans/)
Before exporting tiles to disk, see if there's a [Mapbox Hosting plan](http://mapbox.com/plans/)
or an open source [MBTiles server implementation](https://github.com/mapbox/mbtiles-spec/wiki/Implementations)
that works for you - tiles on disk are notoriously difficult to manage.
[![Build Status](https://secure.travis-ci.org/mapbox/mbutil.png)](http://travis-ci.org/mapbox/mbutil)
## Installation

@@ -25,5 +27,7 @@

cd mbutil
# get usage
./mb-util -h
# then to install the mb-util command globally:
Then to install the mb-util command globally:
sudo python setup.py install

@@ -43,5 +47,5 @@ # then you can run:

Examples:
Examples:
Export an mbtiles file to a directory of files:
$ mb-util world.mbtiles tiles # tiles must not already exist

@@ -53,8 +57,17 @@

Options:
-h, --help show this help message and exit
--scheme=SCHEME Tiling scheme of the tiles. Default is "xyz" (z/x/y),
other options are "tms" which is also z/x/y
but uses a flipped y coordinate, and "wms" which replicates
the MapServer WMS TileCache directory structure "z/000/000/x/000/000/y.png"''',
-h, --help Show this help message and exit
--scheme=SCHEME Tiling scheme of the tiles. Default is "xyz" (z/x/y),
other options are "tms" which is also z/x/y
but uses a flipped y coordinate, and "wms" which replicates
the MapServer WMS TileCache directory structure "z/000/000/x/000/000/y.png"''',
--image_format=FORMAT
The format of the image tiles, either png, jpg, webp or pbf
--grid_callback=CALLBACK
Option to control JSONP callback for UTFGrid tiles. If
grids are not used as JSONP, you can
remove callbacks specifying --grid_callback=""
--do_compression Do mbtiles compression
--silent Dictate whether the operations should run silentl
Export an `mbtiles` file to files on the filesystem:

@@ -64,2 +77,3 @@

Import a directory into a `mbtiles` file

@@ -87,5 +101,11 @@

This project uses [nosetests](http://readthedocs.org/docs/nose/en/latest/) for testing. Install nosetests
and run
This project uses [nosetests](http://readthedocs.org/docs/nose/en/latest/) for testing. Install nosetests:
pip install nose
or
easy_install nose
Then run:
nosetests

@@ -96,2 +116,3 @@

* [node-mbtiles provides mbpipe](https://github.com/mapbox/node-mbtiles/wiki/Post-processing-MBTiles-with-MBPipe), a useful utility.
* [mbliberator](https://github.com/calvinmetcalf/mbliberator) a similar program but in node.

@@ -98,0 +119,0 @@ ## License

+33 -12
# MBUtil
MBUtil is a utility for importing and exporting the [MBTiles](http://mbtiles.org/) format,
typically created with [MapBox](http://mapbox.com/) [TileMill](http://mapbox.com/tilemill/).
typically created with [Mapbox](http://mapbox.com/) [TileMill](http://mapbox.com/tilemill/).
Before exporting tiles to disk, see if there's a [MapBox Hosting plan](http://mapbox.com/plans/)
Before exporting tiles to disk, see if there's a [Mapbox Hosting plan](http://mapbox.com/plans/)
or an open source [MBTiles server implementation](https://github.com/mapbox/mbtiles-spec/wiki/Implementations)
that works for you - tiles on disk are notoriously difficult to manage.
[![Build Status](https://secure.travis-ci.org/mapbox/mbutil.png)](http://travis-ci.org/mapbox/mbutil)
## Installation

@@ -16,5 +18,7 @@

cd mbutil
# get usage
./mb-util -h
# then to install the mb-util command globally:
Then to install the mb-util command globally:
sudo python setup.py install

@@ -34,5 +38,5 @@ # then you can run:

Examples:
Examples:
Export an mbtiles file to a directory of files:
$ mb-util world.mbtiles tiles # tiles must not already exist

@@ -44,8 +48,17 @@

Options:
-h, --help show this help message and exit
--scheme=SCHEME Tiling scheme of the tiles. Default is "xyz" (z/x/y),
other options are "tms" which is also z/x/y
but uses a flipped y coordinate, and "wms" which replicates
the MapServer WMS TileCache directory structure "z/000/000/x/000/000/y.png"''',
-h, --help Show this help message and exit
--scheme=SCHEME Tiling scheme of the tiles. Default is "xyz" (z/x/y),
other options are "tms" which is also z/x/y
but uses a flipped y coordinate, and "wms" which replicates
the MapServer WMS TileCache directory structure "z/000/000/x/000/000/y.png"''',
--image_format=FORMAT
The format of the image tiles, either png, jpg, webp or pbf
--grid_callback=CALLBACK
Option to control JSONP callback for UTFGrid tiles. If
grids are not used as JSONP, you can
remove callbacks specifying --grid_callback=""
--do_compression Do mbtiles compression
--silent Dictate whether the operations should run silentl
Export an `mbtiles` file to files on the filesystem:

@@ -55,2 +68,3 @@

Import a directory into a `mbtiles` file

@@ -78,5 +92,11 @@

This project uses [nosetests](http://readthedocs.org/docs/nose/en/latest/) for testing. Install nosetests
and run
This project uses [nosetests](http://readthedocs.org/docs/nose/en/latest/) for testing. Install nosetests:
pip install nose
or
easy_install nose
Then run:
nosetests

@@ -87,2 +107,3 @@

* [node-mbtiles provides mbpipe](https://github.com/mapbox/node-mbtiles/wiki/Post-processing-MBTiles-with-MBPipe), a useful utility.
* [mbliberator](https://github.com/calvinmetcalf/mbliberator) a similar program but in node.

@@ -89,0 +110,0 @@ ## License

@@ -1,6 +0,6 @@

from distutils.core import setup
from setuptools import setup
setup(
name='mbutil',
version='0.2.0',
version='0.3.0',
author='Tom MacWright',

@@ -7,0 +7,0 @@ author_email='tom@macwright.org',

import os, shutil
import sys
import json
from nose import with_setup

@@ -9,5 +11,2 @@ from mbutil import mbtiles_to_disk, disk_to_mbtiles

try: os.path.mkdir('test/output')
except Exception: pass
@with_setup(clear_data, clear_data)

@@ -32,1 +31,24 @@ def test_mbtiles_to_disk():

assert os.path.exists('test/output/metadata.json')
@with_setup(clear_data, clear_data)
def test_utf8grid_disk_to_mbtiles():
os.mkdir('test/output')
mbtiles_to_disk('test/data/utf8grid.mbtiles', 'test/output/original', callback=None)
disk_to_mbtiles('test/output/original/', 'test/output/imported.mbtiles')
mbtiles_to_disk('test/output/imported.mbtiles', 'test/output/imported', callback=None)
assert os.path.exists('test/output/imported/0/0/0.grid.json')
original = json.load(open('test/output/original/0/0/0.grid.json'))
imported = json.load(open('test/output/imported/0/0/0.grid.json'))
assert original['data']['77'] == imported['data']['77'] == {u'ISO_A2': u'FR'}
@with_setup(clear_data, clear_data)
def test_mbtiles_to_disk_utfgrid_callback():
os.mkdir('test/output')
callback = {}
for c in ['null', 'foo']:
mbtiles_to_disk('test/data/utf8grid.mbtiles', 'test/output/%s' % c, callback=c)
f = open('test/output/%s/0/0/0.grid.json' % c)
callback[c] = f.read().split('{')[0]
f.close()
assert callback['foo'] == 'foo('
assert callback['null'] == ''
