humanhash3 - Package Compare Versions

Comparing version 0.0.4 to 0.0.5
README.rst

humanhash
=========

humanhash provides human-readable representations of digests.

.. image:: https://img.shields.io/travis/blag/humanhash.svg
   :target: https://travis-ci.org/blag/humanhash

.. image:: https://img.shields.io/coveralls/blag/humanhash.svg
   :target: https://coveralls.io/github/blag/humanhash

.. image:: https://img.shields.io/pypi/v/humanhash3.svg
   :target: https://pypi.python.org/pypi/humanhash3

.. image:: https://img.shields.io/pypi/l/humanhash3.svg
   :target: https://github.com/blag/humanhash/blob/master/UNLICENSE

.. image:: https://img.shields.io/pypi/pyversions/humanhash3.svg
   :target: https://github.com/blag/humanhash/blob/master/.travis.yml

Example
-------

.. code-block:: python

   >>> import humanhash
   >>> digest = '7528880a986c40e78c38115e640da2a1'
   >>> humanhash.humanize(digest)
   'three-georgia-xray-jig'
   >>> humanhash.humanize(digest, words=6)
   'high-mango-white-oregon-purple-charlie'
   >>> humanhash.uuid()
   ('potato-oranges-william-friend', '9d2278759ae24698b1345525bd53358b')
Caveats
-------

Don’t store the humanhash output, as its statistical uniqueness is only
around 1 in 4.3 billion. Its intended use is as a human-readable (and,
most importantly, **memorable**) representation of a longer digest,
unique enough for display in a user interface, where a user may need to
remember or verbally communicate the identity of a hash, without having
to remember a 40-character hexadecimal sequence. Keep the original
digests around, and pass them through ``humanize()`` only when
displaying them.
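A minimal sketch of that pattern, standard library only — ``short_name`` here is a hypothetical stand-in for ``humanize()``, illustrating that you store the full digest and derive the short form only at display time:

```python
import hashlib

def short_name(hexdigest, words=4):
    # Hypothetical stand-in for humanhash.humanize(): derive a short
    # display form from the stored digest (here, the first `words`
    # hex byte pairs joined with dashes).
    return "-".join(hexdigest[i:i + 2] for i in range(0, words * 2, 2))

# Store the full digest; never persist the human-readable form.
record = {"digest": hashlib.md5(b"some content").hexdigest()}

# Derive the display form each time it is rendered.
print(short_name(record["digest"]))
```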
How It Works
------------

To generate a humanhash, the input is first compressed to a fixed number
of bytes (default: 4), and each resulting byte is then mapped to a word
in a pre-defined wordlist (a default 256-word list ships with the
library). The algorithm is deterministic: the same input with the same
wordlist always yields the same output. You can also supply your own
wordlist, and request a different number of words in the output.
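The steps above can be sketched in plain Python. The wordlist below is a made-up placeholder (the library ships its own 256-entry default), but the segmentation and XOR compression mirror the described algorithm:

```python
import operator
from functools import reduce

# Placeholder wordlist: one "word" per possible byte value (0-255).
WORDLIST = ["word%03d" % i for i in range(256)]

def humanize_sketch(hexdigest, words=4, separator="-"):
    # 1. Split the hex digest into byte values (0-255).
    byte_vals = [int(hexdigest[i:i + 2], 16)
                 for i in range(0, len(hexdigest), 2)]
    # 2. Divide the bytes into `words` segments; leftovers join the last.
    seg_size = len(byte_vals) // words
    segments = [byte_vals[i * seg_size:(i + 1) * seg_size]
                for i in range(words)]
    segments[-1].extend(byte_vals[words * seg_size:])
    # 3. XOR-compress each segment to one byte, then map bytes to words.
    compressed = [reduce(operator.xor, seg, 0) for seg in segments]
    return separator.join(WORDLIST[b] for b in compressed)

print(humanize_sketch('60ad8d0d871b6095808297'))
# -> word205-word128-word156-word096
```

Being deterministic, the same digest and wordlist always produce the same phrase.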
Inspiration
-----------

- `Chroma-Hash`_ - A human-viewable representation of a hash (albeit not
  one that can be output on a terminal, or shouted down a hallway).
- `The NATO Phonetic Alphabet`_ - A great example of the trade-off
  between clarity of human communication and byte-wise efficiency of
  representation.

.. _Chroma-Hash: http://mattt.github.com/Chroma-Hash/
.. _The NATO Phonetic Alphabet: http://en.wikipedia.org/wiki/NATO_phonetic_alphabet
humanhash.py
@@ -13,12 +13,13 @@ """

 if sys.version_info.major == 3:
-    #Map returns an iterator in PY3K
+    # Map returns an iterator in PY3K
     py3_map = map
     def map(*args, **kwargs):
         return [i for i in py3_map(*args, **kwargs)]
-    #Functionality of xrange is in range now
+    # Functionality of xrange is in range now
     xrange = range
-    #Reduce moved to functools
-    #http://www.artima.com/weblogs/viewpost.jsp?thread=98196
+    # Reduce moved to functools
+    # http://www.artima.com/weblogs/viewpost.jsp?thread=98196
     from functools import reduce

@@ -66,2 +67,8 @@

+# Use a simple XOR checksum-like function for compression.
+# checksum = lambda _bytes: reduce(operator.xor, _bytes, 0)
+def checksum(checksum_bytes):
+    return reduce(operator.xor, checksum_bytes, 0)
+
 class HumanHasher(object):

@@ -83,9 +90,32 @@

     def __init__(self, wordlist=DEFAULT_WORDLIST):
+        """
+        >>> HumanHasher(wordlist=[])
+        Traceback (most recent call last):
+        ...
+        ValueError: Wordlist must have exactly 256 items
+        """
         if len(wordlist) != 256:
-            raise ArgumentError("Wordlist must have exactly 256 items")
+            raise ValueError("Wordlist must have exactly 256 items")
         self.wordlist = wordlist

+    def humanize_list(self, hexdigest, words=4):
+        """
+        Humanize a given hexadecimal digest, returning a list of words.
+        Change the number of words output by specifying `words`.
+
+        >>> digest = '60ad8d0d871b6095808297'
+        >>> HumanHasher().humanize_list(digest)
+        ['sodium', 'magnesium', 'nineteen', 'hydrogen']
+        """
+        # Gets a list of byte values between 0-255.
+        bytes_ = map(lambda x: int(x, 16),
+                     map(''.join, zip(hexdigest[::2], hexdigest[1::2])))
+        # Compress an arbitrary number of bytes to `words`.
+        compressed = self.compress(bytes_, words)
+        return [str(self.wordlist[byte]) for byte in compressed]
+
     def humanize(self, hexdigest, words=4, separator='-'):
         """
         Humanize a given hexadecimal digest.

@@ -99,14 +129,12 @@

         'sodium-magnesium-nineteen-hydrogen'
         >>> HumanHasher().humanize(digest, words=6)
         'hydrogen-pasta-mississippi-august-may-lithium'
         >>> HumanHasher().humanize(digest, separator='*')
         'sodium*magnesium*nineteen*hydrogen'
         """
-        # Gets a list of byte values between 0-255.
-        bytes = map(lambda x: int(x, 16),
-                    map(''.join, zip(hexdigest[::2], hexdigest[1::2])))
-        # Compress an arbitrary number of bytes to `words`.
-        compressed = self.compress(bytes, words)
-        # Map the compressed byte values through the word list.
-        return separator.join(self.wordlist[byte] for byte in compressed)
+        return separator.join(self.humanize_list(hexdigest, words))

     @staticmethod
-    def compress(bytes, target):
+    def compress(bytes_, target):

@@ -116,4 +144,4 @@ """

-        >>> bytes = [96, 173, 141, 13, 135, 27, 96, 149, 128, 130, 151]
-        >>> HumanHasher.compress(bytes, 4)
+        >>> bytes_ = [96, 173, 141, 13, 135, 27, 96, 149, 128, 130, 151]
+        >>> list(HumanHasher.compress(bytes_, 4))
         [205, 128, 156, 96]

@@ -124,3 +152,3 @@

-        >>> HumanHasher.compress(bytes, 15)  # doctest: +ELLIPSIS
+        >>> HumanHasher.compress(bytes_, 15)  # doctest: +ELLIPSIS
         Traceback (most recent call last):

@@ -131,3 +159,5 @@ ...

-        length = len(bytes)
+        bytes_list = list(bytes_)
+        length = len(bytes_list)
         if target > length:

@@ -138,11 +168,8 @@ raise ValueError("Fewer input bytes than requested output")

         seg_size = length // target
-        segments = [bytes[i * seg_size:(i + 1) * seg_size]
-                    for i in xrange(target)]
+        segments = [bytes_list[i * seg_size:(i + 1) * seg_size]
+                    for i in range(target)]
         # Catch any left-over bytes in the last segment.
-        segments[-1].extend(bytes[target * seg_size:])
-        # Use a simple XOR checksum-like function for compression.
-        checksum = lambda bytes: reduce(operator.xor, bytes, 0)
-        checksums = map(checksum, segments)
-        return checksums
+        segments[-1].extend(bytes_list[target * seg_size:])
+        return map(checksum, segments)

@@ -156,4 +183,13 @@ def uuid(self, **params):

         as :meth:`humanize` (they'll be passed straight through).
+        >>> import re
+        >>> hh = HumanHasher()
+        >>> result = hh.uuid()
+        >>> type(result) == tuple
+        True
+        >>> bool(re.match(r'^(\w+-){3}\w+$', result[0]))
+        True
+        >>> bool(re.match(r'^[0-9a-f]{32}$', result[1]))
+        True
         """
         digest = str(uuidlib.uuid4()).replace('-', '')

@@ -166,1 +202,9 @@ return self.humanize(digest, **params), digest

 humanize = DEFAULT_HASHER.humanize
+humanize_list = DEFAULT_HASHER.humanize_list
+
+if __name__ == "__main__":
+    import doctest
+    # http://stackoverflow.com/a/25691978/6461688
+    # This will force Python to exit with the number of failing tests as the
+    # exit code, which should be interpreted as a failing test by Travis.
+    sys.exit(doctest.testmod())
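One change above wraps the ``compress()`` doctest call in ``list()``; that follows from Python 3's ``map()`` returning a lazy iterator rather than a list. A minimal illustration:

```python
# In Python 3, map() yields a lazy iterator, so a doctest expecting a
# list literal must materialize the result with list() first.
result = map(lambda b: b ^ 0xFF, [0, 1, 2])
print(isinstance(result, list))  # False: it's a map object
print(list(result))              # [255, 254, 253]
```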
PKG-INFO

 Metadata-Version: 1.1
 Name: humanhash3
-Version: 0.0.4
+Version: 0.0.5
 Summary: Human-readable representations of digests.

@@ -9,3 +9,72 @@ Home-page: https://github.com/blag/humanhash

 License: Public Domain
-Description: UNKNOWN
+Description: humanhash
+        (followed by the full README contents shown above)
 Platform: UNKNOWN

@@ -21,4 +90,5 @@ Classifier: Development Status :: 3 - Alpha

 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.2
 Classifier: Programming Language :: Python :: 3.3
 Classifier: Programming Language :: Python :: 3.4
+Classifier: Programming Language :: Python :: 3.5
+Classifier: Programming Language :: Python :: 3.6

setup.py

@@ -6,6 +6,10 @@ #!/usr/bin/env python

+with open('README.rst', 'r') as f:
+    long_description = f.read()
+
 setup(
     name='humanhash3',
-    version='0.0.4',
+    version='0.0.5',
     description='Human-readable representations of digests.',
+    long_description=long_description,
     author='Zachary Voase',

@@ -31,6 +35,8 @@ author_email='z@zacharyvoase.com',

         'Programming Language :: Python :: 3',
-        'Programming Language :: Python :: 3.2',
+        # 'Programming Language :: Python :: 3.2', # Not tested
         'Programming Language :: Python :: 3.3',
         'Programming Language :: Python :: 3.4',
+        'Programming Language :: Python :: 3.5',
+        'Programming Language :: Python :: 3.6',
     ],
 )