plotman - pypi Package Compare versions

Comparing version 0.2 to 0.3
# Maintenance
## Overview
This document provides guidance on maintaining aspects of plotman.
## The `chia plots create` CLI parsing code
In [src/plotman/chia.py](src/plotman/chia.py) there is code copied from the `chia plots create` subcommand's CLI parser definition.
When new versions of `chia-blockchain` are released, their interface code should be added to plotman.
plotman commit [1b5db4e](https://github.com/ericaltendorf/plotman/commit/1b5db4e342b9ec1f7910663a453aec3a97ba51a6) provides an example of adding a new version.
Copying code is usually a poor choice, but it seems appropriate here: the chia code that plotman could import is not necessarily the code that parsed the command line of the plotting process being inspected.
The `chia` command could come from another Python environment, a system package, a `.dmg`, etc.
This approach also leaves open the possibility of later using the parser version that matches the specific plot process being inspected.
Finally, it avoids a dependency on the `chia-blockchain` package.
In general, using dependencies is good; this seems to be an exceptional case.
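As a sketch of the registration pattern this relies on (plain functions stand in for the click commands here, and the version numbers are only illustrative):

```python
import functools

class Commands:
    """Minimal sketch of the registry in src/plotman/chia.py that maps a
    chia-blockchain version tuple to the CLI parser copied for that version."""
    def __init__(self):
        self.by_version = {}

    def register(self, version):
        if version in self.by_version:
            raise Exception(f'Version already registered: {version!r}')
        if not isinstance(version, tuple):
            raise Exception(f'Version must be a tuple: {version!r}')
        return functools.partial(self._decorator, version=version)

    def _decorator(self, command, *, version):
        self.by_version[version] = command
        # Returning the command keeps the decorated name usable afterwards.
        return command

    def latest_command(self):
        # Version tuples sort naturally, so max() picks the newest parser.
        return max(self.by_version.items())[1]

commands = Commands()

@commands.register(version=(1, 1, 2))
def parse_1_1_2():
    return '1.1.2 parser'

@commands.register(version=(1, 1, 3))
def parse_1_1_3():
    return '1.1.3 parser'

assert commands.latest_command() is parse_1_1_3
```

Adding support for a new release is then a matter of copying that release's `chia plots create` option definitions and registering them under the new version tuple.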


import functools

import click
from pathlib import Path


class Commands:
    def __init__(self):
        self.by_version = {}

    def register(self, version):
        if version in self.by_version:
            raise Exception(f'Version already registered: {version!r}')
        if not isinstance(version, tuple):
            raise Exception(f'Version must be a tuple: {version!r}')
        return functools.partial(self._decorator, version=version)

    def _decorator(self, command, *, version):
        self.by_version[version] = command
        # self.by_version = dict(sorted(self.by_version.items()))

    def __getitem__(self, item):
        return self.by_version[item]

    def latest_command(self):
        return max(self.by_version.items())[1]


commands = Commands()
@commands.register(version=(1, 1, 2))
@click.command()
# https://github.com/Chia-Network/chia-blockchain/blob/v1.1.2/LICENSE
# https://github.com/Chia-Network/chia-blockchain/blob/v1.1.2/chia/cmds/plots.py#L39-L83
# start copied code
@click.option("-k", "--size", help="Plot size", type=int, default=32, show_default=True)
@click.option("--override-k", help="Force size smaller than 32", default=False, show_default=True, is_flag=True)
@click.option("-n", "--num", help="Number of plots or challenges", type=int, default=1, show_default=True)
@click.option("-b", "--buffer", help="Megabytes for sort/plot buffer", type=int, default=4608, show_default=True)
@click.option("-r", "--num_threads", help="Number of threads to use", type=int, default=2, show_default=True)
@click.option("-u", "--buckets", help="Number of buckets", type=int, default=128, show_default=True)
@click.option(
    "-a",
    "--alt_fingerprint",
    type=int,
    default=None,
    help="Enter the alternative fingerprint of the key you want to use",
)
@click.option(
    "-c",
    "--pool_contract_address",
    type=str,
    default=None,
    help="Address of where the pool reward will be sent to. Only used if alt_fingerprint and pool public key are None",
)
@click.option("-f", "--farmer_public_key", help="Hex farmer public key", type=str, default=None)
@click.option("-p", "--pool_public_key", help="Hex public key of pool", type=str, default=None)
@click.option(
    "-t",
    "--tmp_dir",
    help="Temporary directory for plotting files",
    type=click.Path(),
    default=Path("."),
    show_default=True,
)
@click.option("-2", "--tmp2_dir", help="Second temporary directory for plotting files", type=click.Path(), default=None)
@click.option(
    "-d",
    "--final_dir",
    help="Final directory for plots (relative or absolute)",
    type=click.Path(),
    default=Path("."),
    show_default=True,
)
@click.option("-i", "--plotid", help="PlotID in hex for reproducing plots (debugging only)", type=str, default=None)
@click.option("-m", "--memo", help="Memo in hex for reproducing plots (debugging only)", type=str, default=None)
@click.option("-e", "--nobitfield", help="Disable bitfield", default=False, is_flag=True)
@click.option(
    "-x", "--exclude_final_dir", help="Skips adding [final dir] to harvester for farming", default=False, is_flag=True
)
# end copied code
def _cli():
    pass
@commands.register(version=(1, 1, 3))
@click.command()
# https://github.com/Chia-Network/chia-blockchain/blob/v1.1.3/LICENSE
# https://github.com/Chia-Network/chia-blockchain/blob/v1.1.3/chia/cmds/plots.py#L39-L83
# start copied code
@click.option("-k", "--size", help="Plot size", type=int, default=32, show_default=True)
@click.option("--override-k", help="Force size smaller than 32", default=False, show_default=True, is_flag=True)
@click.option("-n", "--num", help="Number of plots or challenges", type=int, default=1, show_default=True)
@click.option("-b", "--buffer", help="Megabytes for sort/plot buffer", type=int, default=4608, show_default=True)
@click.option("-r", "--num_threads", help="Number of threads to use", type=int, default=2, show_default=True)
@click.option("-u", "--buckets", help="Number of buckets", type=int, default=128, show_default=True)
@click.option(
    "-a",
    "--alt_fingerprint",
    type=int,
    default=None,
    help="Enter the alternative fingerprint of the key you want to use",
)
@click.option(
    "-c",
    "--pool_contract_address",
    type=str,
    default=None,
    help="Address of where the pool reward will be sent to. Only used if alt_fingerprint and pool public key are None",
)
@click.option("-f", "--farmer_public_key", help="Hex farmer public key", type=str, default=None)
@click.option("-p", "--pool_public_key", help="Hex public key of pool", type=str, default=None)
@click.option(
    "-t",
    "--tmp_dir",
    help="Temporary directory for plotting files",
    type=click.Path(),
    default=Path("."),
    show_default=True,
)
@click.option("-2", "--tmp2_dir", help="Second temporary directory for plotting files", type=click.Path(), default=None)
@click.option(
    "-d",
    "--final_dir",
    help="Final directory for plots (relative or absolute)",
    type=click.Path(),
    default=Path("."),
    show_default=True,
)
@click.option("-i", "--plotid", help="PlotID in hex for reproducing plots (debugging only)", type=str, default=None)
@click.option("-m", "--memo", help="Memo in hex for reproducing plots (debugging only)", type=str, default=None)
@click.option("-e", "--nobitfield", help="Disable bitfield", default=False, is_flag=True)
@click.option(
    "-x", "--exclude_final_dir", help="Skips adding [final dir] to harvester for farming", default=False, is_flag=True
)
# end copied code
def _cli():
    pass
include CHANGELOG.md
include LICENSE
include LICENSE*
include README.md

@@ -7,4 +7,5 @@ include *.md

include tox.ini
include .coveragerc
recursive-include src *.py
recursive-include src/plotman/_tests/resources *
recursive-include src/plotman/resources *
Metadata-Version: 2.1
Name: plotman
Version: 0.2
Version: 0.3
Summary: Chia plotting manager

@@ -193,7 +193,10 @@ Home-page: https://github.com/ericaltendorf/plotman

Installation for Linux:
Installation for Linux and macOS:
1. Plotman assumes that a functioning [Chia](https://github.com/Chia-Network/chia-blockchain)
installation is present on the system. Activate your `chia` environment by typing
`source /path/to/your/chia/install/activate`.
installation is present on the system.
- virtual environment (Linux, macOS): Activate your `chia` environment by typing
`source /path/to/your/chia/install/activate`.
- dmg (macOS): Follow [these instructions](https://github.com/Chia-Network/chia-blockchain/wiki/CLI-Commands-Reference#mac)
to add the `chia` binary to the `PATH`
2. Then, install Plotman using the following command:

@@ -235,3 +238,4 @@ ```shell

Description-Content-Type: text/markdown
Provides-Extra: coverage
Provides-Extra: dev
Provides-Extra: test

@@ -184,7 +184,10 @@ # `plotman`: a Chia plotting manager

Installation for Linux:
Installation for Linux and macOS:
1. Plotman assumes that a functioning [Chia](https://github.com/Chia-Network/chia-blockchain)
installation is present on the system. Activate your `chia` environment by typing
`source /path/to/your/chia/install/activate`.
installation is present on the system.
- virtual environment (Linux, macOS): Activate your `chia` environment by typing
`source /path/to/your/chia/install/activate`.
- dmg (macOS): Follow [these instructions](https://github.com/Chia-Network/chia-blockchain/wiki/CLI-Commands-Reference#mac)
to add the `chia` binary to the `PATH`
2. Then, install Plotman using the following command:

@@ -191,0 +194,0 @@ ```shell

@@ -40,2 +40,4 @@ [metadata]

appdirs
attrs
click
desert

@@ -56,2 +58,5 @@ marshmallow

[options.extras_require]
coverage =
coverage
diff-cover
dev =

@@ -61,5 +66,6 @@ %(test)s

test =
%(coverage)s
check-manifest
pytest
pytest-mock
pytest-cov
pyfakefs

@@ -66,0 +72,0 @@

appdirs
attrs
click
desert

@@ -9,6 +11,12 @@ marshmallow

[coverage]
coverage
diff-cover
[dev]
coverage
diff-cover
check-manifest
pytest
pytest-mock
pytest-cov
pyfakefs

@@ -18,5 +26,7 @@ isort

[test]
coverage
diff-cover
check-manifest
pytest
pytest-mock
pytest-cov
pyfakefs

@@ -0,3 +1,6 @@

.coveragerc
CHANGELOG.md
LICENSE
LICENSE-chia-blockchain
MAINTENANCE.md
MANIFEST.in

@@ -14,2 +17,3 @@ README.md

src/plotman/archive.py
src/plotman/chia.py
src/plotman/configuration.py

@@ -35,4 +39,4 @@ src/plotman/interactive.py

src/plotman/_tests/reporting_test.py
src/plotman/_tests/resources/2021-04-04-19:00:47.log
src/plotman/_tests/resources/2021-04-04-19:00:47.notes
src/plotman/_tests/resources/2021-04-04T19_00_47.681088-0400.log
src/plotman/_tests/resources/2021-04-04T19_00_47.681088-0400.notes
src/plotman/_tests/resources/__init__.py

@@ -39,0 +43,0 @@ src/plotman/resources/__init__.py

@@ -1,7 +0,7 @@

from plotman import archive, configuration, manager
from plotman import archive, configuration, job, manager
def test_compute_priority():
assert (archive.compute_priority( (3, 1), 1000, 10) >
archive.compute_priority( (3, 6), 1000, 10) )
assert (archive.compute_priority( job.Phase(major=3, minor=1), 1000, 10) >
archive.compute_priority( job.Phase(major=3, minor=6), 1000, 10) )

@@ -8,0 +8,0 @@ def test_rsync_dest():

@@ -11,43 +11,47 @@ """Tests for plotman/configuration.py"""

@pytest.fixture(name='config_path')
def config_fixture(tmp_path):
"""Return direct path to plotman.yaml"""
with importlib.resources.path(plotman_resources, "plotman.yaml") as path:
yield path
@pytest.fixture(name='config_text')
def config_text_fixture():
return importlib.resources.read_text(plotman_resources, "plotman.yaml")
def test_get_validated_configs__default(mocker, config_path):
def test_get_validated_configs__default(config_text):
"""Check that get_validated_configs() works with default/example plotman.yaml file."""
mocker.patch("plotman.configuration.get_path", return_value=config_path)
res = configuration.get_validated_configs()
res = configuration.get_validated_configs(config_text, '')
assert isinstance(res, configuration.PlotmanConfig)
def test_get_validated_configs__malformed(mocker, config_path):
def test_get_validated_configs__malformed(config_text):
"""Check that get_validated_configs() raises exception with invalid plotman.yaml contents."""
mocker.patch("plotman.configuration.get_path", return_value=config_path)
with open(configuration.get_path(), "r") as file:
loaded_yaml = yaml.load(file, Loader=yaml.SafeLoader)
loaded_yaml = yaml.load(config_text, Loader=yaml.SafeLoader)
# Purposefully malform the contents of loaded_yaml by changing tmp from List[str] --> str
loaded_yaml["directories"]["tmp"] = "/mnt/tmp/00"
mocker.patch("yaml.load", return_value=loaded_yaml)
malformed_config_text = yaml.dump(loaded_yaml, Dumper=yaml.SafeDumper)
with pytest.raises(configuration.ConfigurationException) as exc_info:
configuration.get_validated_configs()
configuration.get_validated_configs(malformed_config_text, '/the_path')
assert exc_info.value.args[0] == f"Config file at: '{configuration.get_path()}' is malformed"
assert exc_info.value.args[0] == f"Config file at: '/the_path' is malformed"
def test_get_validated_configs__missing(mocker, config_path):
def test_get_validated_configs__missing():
"""Check that get_validated_configs() raises exception when plotman.yaml does not exist."""
nonexistent_config = config_path.with_name("plotman2.yaml")
mocker.patch("plotman.configuration.get_path", return_value=nonexistent_config)
with pytest.raises(configuration.ConfigurationException) as exc_info:
configuration.get_validated_configs()
configuration.read_configuration_text('/invalid_path')
assert exc_info.value.args[0] == (
f"No 'plotman.yaml' file exists at expected location: '{nonexistent_config}'. To generate "
f"No 'plotman.yaml' file exists at expected location: '/invalid_path'. To generate "
f"default config file, run: 'plotman config generate'"
)
def test_loads_without_user_interface(config_text):
    loaded_yaml = yaml.load(config_text, Loader=yaml.SafeLoader)
    del loaded_yaml["user_interface"]
    stripped_config_text = yaml.dump(loaded_yaml, Dumper=yaml.SafeDumper)
    reloaded_yaml = configuration.get_validated_configs(stripped_config_text, '')
    assert reloaded_yaml.user_interface == configuration.UserInterface()

@@ -24,3 +24,3 @@ import contextlib

def logfile_fixture(tmp_path):
log_name = '2021-04-04-19:00:47.log'
log_name = '2021-04-04T19_00_47.681088-0400.log'
log_contents = importlib.resources.read_binary(resources, log_name)

@@ -58,1 +58,86 @@ log_file_path = tmp_path.joinpath(log_name)

assert faux_job_with_logfile.start_time == log_file_time
@pytest.mark.parametrize(
    argnames=['arguments'],
    argvalues=[
        [['-h']],
        [['--help']],
        [['-k', '32']],
        [['-k32']],
        [['-k', '32', '--help']],
    ],
    ids=str,
)
def test_chia_plots_create_parsing_does_not_fail(arguments):
    job.parse_chia_plots_create_command_line(
        command_line=['python', 'chia', 'plots', 'create', *arguments],
    )


@pytest.mark.parametrize(
    argnames=['arguments'],
    argvalues=[
        [['-h']],
        [['--help']],
        [['-k', '32', '--help']],
    ],
    ids=str,
)
def test_chia_plots_create_parsing_detects_help(arguments):
    parsed = job.parse_chia_plots_create_command_line(
        command_line=['python', 'chia', 'plots', 'create', *arguments],
    )
    assert parsed.help


@pytest.mark.parametrize(
    argnames=['arguments'],
    argvalues=[
        [[]],
        [['-k32']],
        [['-k', '32']],
    ],
    ids=str,
)
def test_chia_plots_create_parsing_detects_not_help(arguments):
    parsed = job.parse_chia_plots_create_command_line(
        command_line=['python', 'chia', 'plots', 'create', *arguments],
    )
    assert not parsed.help


@pytest.mark.parametrize(
    argnames=['arguments'],
    argvalues=[
        [[]],
        [['-k32']],
        [['-k', '32']],
        [['--size', '32']],
    ],
    ids=str,
)
def test_chia_plots_create_parsing_handles_argument_forms(arguments):
    parsed = job.parse_chia_plots_create_command_line(
        command_line=['python', 'chia', 'plots', 'create', *arguments],
    )
    assert parsed.parameters['size'] == 32


@pytest.mark.parametrize(
    argnames=['arguments'],
    argvalues=[
        [['--size32']],
        [['--not-an-actual-option']],
    ],
    ids=str,
)
def test_chia_plots_create_parsing_identifies_errors(arguments):
    parsed = job.parse_chia_plots_create_command_line(
        command_line=['python', 'chia', 'plots', 'create', *arguments],
    )
    assert parsed.error is not None

@@ -30,26 +30,33 @@ # TODO: migrate away from unittest patch

def test_permit_new_job_post_milestone(sched_cfg, dir_cfg):
phases = job.Phase.list_from_tuples([ (3, 8), (4, 1) ])
assert manager.phases_permit_new_job(
[ (3, 8), (4, 1) ], '/mnt/tmp/00', sched_cfg, dir_cfg)
phases, '/mnt/tmp/00', sched_cfg, dir_cfg)
def test_permit_new_job_pre_milestone(sched_cfg, dir_cfg):
phases = job.Phase.list_from_tuples([ (2, 3), (4, 1) ])
assert not manager.phases_permit_new_job(
[ (2, 3), (4, 1) ], '/mnt/tmp/00', sched_cfg, dir_cfg)
phases, '/mnt/tmp/00', sched_cfg, dir_cfg)
def test_permit_new_job_too_many_jobs(sched_cfg, dir_cfg):
phases = job.Phase.list_from_tuples([ (3, 1), (3, 2), (3, 3) ])
assert not manager.phases_permit_new_job(
[ (3, 1), (3, 2), (3, 3) ], '/mnt/tmp/00', sched_cfg, dir_cfg)
phases, '/mnt/tmp/00', sched_cfg, dir_cfg)
def test_permit_new_job_too_many_jobs_zerophase(sched_cfg, dir_cfg):
phases = job.Phase.list_from_tuples([ (3, 0), (3, 1), (3, 3) ])
assert not manager.phases_permit_new_job(
[ (3, 0), (3, 1), (3, 3) ], '/mnt/tmp/00', sched_cfg, dir_cfg)
phases, '/mnt/tmp/00', sched_cfg, dir_cfg)
def test_permit_new_job_too_many_jobs_nonephase(sched_cfg, dir_cfg):
phases = job.Phase.list_from_tuples([ (None, None), (3, 1), (3, 3) ])
assert manager.phases_permit_new_job(
[ (None, None), (3, 1), (3, 3) ], '/mnt/tmp/00', sched_cfg, dir_cfg)
phases, '/mnt/tmp/00', sched_cfg, dir_cfg)
def test_permit_new_job_override_tmp_dir(sched_cfg, dir_cfg):
phases = job.Phase.list_from_tuples([ (3, 1), (3, 2), (3, 3) ])
assert manager.phases_permit_new_job(
[ (3, 1), (3, 2), (3, 3) ], '/mnt/tmp/04', sched_cfg, dir_cfg)
phases, '/mnt/tmp/04', sched_cfg, dir_cfg)
phases = job.Phase.list_from_tuples([ (3, 1), (3, 2), (3, 3), (3, 6) ])
assert not manager.phases_permit_new_job(
[ (3, 1), (3, 2), (3, 3), (3, 6) ], '/mnt/tmp/04', sched_cfg,
phases, '/mnt/tmp/04', sched_cfg,
dir_cfg)

@@ -56,0 +63,0 @@

@@ -6,19 +6,20 @@ # TODO: migrate away from unittest patch

from plotman import reporting
from plotman import job
def test_phases_str_basic():
assert(reporting.phases_str([(1,2), (2,3), (3,4), (4,0)]) ==
'1:2 2:3 3:4 4:0')
phases = job.Phase.list_from_tuples([(1,2), (2,3), (3,4), (4,0)])
assert reporting.phases_str(phases) == '1:2 2:3 3:4 4:0'
def test_phases_str_elipsis_1():
assert(reporting.phases_str([(1,2), (2,3), (3,4), (4,0)], 3) ==
'1:2 [+1] 3:4 4:0')
phases = job.Phase.list_from_tuples([(1,2), (2,3), (3,4), (4,0)])
assert reporting.phases_str(phases, 3) == '1:2 [+1] 3:4 4:0'
def test_phases_str_elipsis_2():
assert(reporting.phases_str([(1,2), (2,3), (3,4), (4,0)], 2) ==
'1:2 [+2] 4:0')
phases = job.Phase.list_from_tuples([(1,2), (2,3), (3,4), (4,0)])
assert reporting.phases_str(phases, 2) == '1:2 [+2] 4:0'
def test_phases_str_none():
assert(reporting.phases_str([(None, None), (2, None), (3, 0)]) ==
'?:? 2:? 3:0')
phases = job.Phase.list_from_tuples([(None, None), (3, 0)])
assert reporting.phases_str(phases) == '?:? 3:0'

@@ -31,3 +32,3 @@ def test_job_viz_empty():

j = MockJob()
j.progress.return_value = ph
j.progress.return_value = job.Phase.from_tuple(ph)
return j

@@ -34,0 +35,0 @@

@@ -14,6 +14,43 @@ import argparse

from plotman import manager, plot_util
from plotman import job, manager, plot_util
# TODO : write-protect and delete-protect archived plots
def spawn_archive_process(dir_cfg, all_jobs):
    '''Spawns a new archive process using the command created
    in the archive() function. Returns archiving status and a log message to print.'''
    log_message = None
    archiving_status = None
    # Look for running archive jobs. Be robust to finding more than one
    # even though the scheduler should only run one at a time.
    arch_jobs = get_running_archive_jobs(dir_cfg.archive)
    if not arch_jobs:
        (should_start, status_or_cmd) = archive(dir_cfg, all_jobs)
        if not should_start:
            archiving_status = status_or_cmd
        else:
            cmd = status_or_cmd
            # TODO: do something useful with output instead of DEVNULL
            p = subprocess.Popen(cmd,
                                 shell=True,
                                 stdout=subprocess.DEVNULL,
                                 stderr=subprocess.STDOUT,
                                 start_new_session=True)
            log_message = 'Starting archive: ' + cmd
            # At least for now it seems that even if we get a new running
            # archive jobs list it doesn't contain the new rsync process.
            # My guess is that this is because the bash in the middle due to
            # shell=True is still starting up and really hasn't launched the
            # new rsync process yet. So, just put a placeholder here. It
            # will get filled on the next cycle.
            arch_jobs.append('<pending>')
    if archiving_status is None:
        archiving_status = 'pid: ' + ', '.join(map(str, arch_jobs))
    return archiving_status, log_message
def compute_priority(phase, gb_free, n_plots):

@@ -29,10 +66,10 @@ # All these values are designed around dst buffer dirs of about

# ignore.
if (phase[0] and phase[1]):
if (phase == (3, 4)):
if (phase.known):
if (phase == job.Phase(3, 4)):
priority -= 4
elif (phase == (3, 5)):
elif (phase == job.Phase(3, 5)):
priority -= 8
elif (phase == (3, 6)):
elif (phase == job.Phase(3, 6)):
priority -= 16
elif (phase >= (3, 7)):
elif (phase >= job.Phase(3, 7)):
priority -= 32

@@ -63,3 +100,3 @@

freebytes = int(fields[3][:-1]) * 1024 # Strip the final 'K'
archdir = (fields[5]).decode('ascii')
archdir = (fields[5]).decode('utf-8')
archdir_freebytes[archdir] = freebytes

@@ -104,3 +141,3 @@ return archdir_freebytes

for d in dir_cfg.dst:
ph = dir2ph.get(d, (0, 0))
ph = dir2ph.get(d, job.Phase(0, 0))
dir_plots = plot_util.list_k32_plots(d)

@@ -141,5 +178,5 @@ gb_free = plot_util.df_b(d) / plot_util.GB

throttle_arg = ('--bwlimit=%d' % bwlimit) if bwlimit else ''
cmd = ('rsync %s --remove-source-files -P %s %s' %
cmd = ('rsync %s --no-compress --remove-source-files -P %s %s' %
(throttle_arg, chosen_plot, rsync_dest(dir_cfg.archive, archdir)))
return (True, cmd)

@@ -1,5 +0,6 @@

from dataclasses import dataclass
import contextlib
from typing import Dict, List, Optional
import appdirs
import attr
import desert

@@ -19,25 +20,34 @@ import marshmallow

def get_validated_configs():
"""Return a validated instance of the PlotmanConfig dataclass with data from plotman.yaml
def read_configuration_text(config_path):
try:
with open(config_path, "r") as file:
return file.read()
except FileNotFoundError as e:
raise ConfigurationException(
f"No 'plotman.yaml' file exists at expected location: '{config_path}'. To generate "
f"default config file, run: 'plotman config generate'"
) from e
def get_validated_configs(config_text, config_path):
"""Return a validated instance of PlotmanConfig with data from plotman.yaml
:raises ConfigurationException: Raised when plotman.yaml is either missing or malformed
"""
schema = desert.schema(PlotmanConfig)
config_file_path = get_path()
config_objects = yaml.load(config_text, Loader=yaml.SafeLoader)
try:
with open(config_file_path, "r") as file:
config_file = yaml.load(file, Loader=yaml.SafeLoader)
return schema.load(config_file)
except FileNotFoundError as e:
loaded = schema.load(config_objects)
except marshmallow.exceptions.ValidationError as e:
raise ConfigurationException(
f"No 'plotman.yaml' file exists at expected location: '{config_file_path}'. To generate "
f"default config file, run: 'plotman config generate'"
f"Config file at: '{config_path}' is malformed"
) from e
except marshmallow.exceptions.ValidationError as e:
raise ConfigurationException(f"Config file at: '{config_file_path}' is malformed") from e
return loaded
# Data models used to deserializing/formatting plotman.yaml files.
@dataclass
@attr.frozen
class Archive:

@@ -51,7 +61,7 @@ rsyncd_module: str

@dataclass
@attr.frozen
class TmpOverrides:
tmpdir_max_jobs: Optional[int] = None
@dataclass
@attr.frozen
class Directories:

@@ -65,3 +75,3 @@ log: str

@dataclass
@attr.frozen
class Scheduling:

@@ -76,3 +86,3 @@ global_max_jobs: int

@dataclass
@attr.frozen
class Plotting:

@@ -87,11 +97,11 @@ k: int

@dataclass
@attr.frozen
class UserInterface:
use_stty_size: bool
use_stty_size: bool = True
@dataclass
@attr.frozen
class PlotmanConfig:
user_interface: UserInterface
directories: Directories
scheduling: Scheduling
plotting: Plotting
user_interface: UserInterface = attr.ib(factory=UserInterface)

@@ -7,3 +7,2 @@ import curses

import subprocess
import threading

@@ -14,2 +13,5 @@ from plotman import archive, configuration, manager, reporting

class TerminalTooSmallError(Exception):
pass
class Log:

@@ -68,3 +70,5 @@ def __init__(self):

cfg = configuration.get_validated_configs()
config_path = configuration.get_path()
config_text = configuration.read_configuration_text(config_path)
cfg = configuration.get_validated_configs(config_text, config_path)

@@ -93,2 +97,3 @@ plotting_active = True

archdir_freebytes = None
aging_reason = None

@@ -120,2 +125,5 @@ while True:

if (started):
if aging_reason is not None:
log.log(aging_reason)
aging_reason = None
log.log(msg)

@@ -125,2 +133,5 @@ plotting_status = '<just started job>'

else:
# If a plot is delayed for any reason other than stagger, log it
if msg.find("stagger") < 0:
aging_reason = msg
plotting_status = msg

@@ -130,22 +141,6 @@

if archiving_active:
# Look for running archive jobs. Be robust to finding more than one
# even though the scheduler should only run one at a time.
arch_jobs = archive.get_running_archive_jobs(cfg.directories.archive)
if arch_jobs:
archiving_status = 'pid: ' + ', '.join(map(str, arch_jobs))
else:
(should_start, status_or_cmd) = archive.archive(cfg.directories, jobs)
if not should_start:
archiving_status = status_or_cmd
else:
cmd = status_or_cmd
log.log('Starting archive: ' + cmd)
archiving_status, log_message = archive.spawn_archive_process(cfg.directories, jobs)
if log_message:
log.log(log_message)
# TODO: do something useful with output instead of DEVNULL
p = subprocess.Popen(cmd,
shell=True,
stdout=subprocess.DEVNULL,
stderr=subprocess.STDOUT,
start_new_session=True)
archdir_freebytes = archive.get_archdir_freebytes(cfg.directories.archive)

@@ -188,9 +183,6 @@

n_tmpdirs = len(cfg.directories.tmp)
n_tmpdirs_half = int(n_tmpdirs / 2)
# Directory reports.
tmp_report_1 = reporting.tmp_dir_report(
jobs, cfg.directories, cfg.scheduling, n_cols, 0, n_tmpdirs_half, tmp_prefix)
tmp_report_2 = reporting.tmp_dir_report(
jobs, cfg.directories, cfg.scheduling, n_cols, n_tmpdirs_half, n_tmpdirs, tmp_prefix)
tmp_report = reporting.tmp_dir_report(
jobs, cfg.directories, cfg.scheduling, n_cols, 0, n_tmpdirs, tmp_prefix)
dst_report = reporting.dst_dir_report(

@@ -209,6 +201,4 @@ jobs, cfg.directories.dst, n_cols, dst_prefix)

tmp_h = max(len(tmp_report_1.splitlines()),
len(tmp_report_2.splitlines()))
tmp_w = len(max(tmp_report_1.splitlines() +
tmp_report_2.splitlines(), key=len)) + 1
tmp_h = len(tmp_report.splitlines())
tmp_w = len(max(tmp_report.splitlines(), key=len)) + 1
dst_h = len(dst_report.splitlines())

@@ -285,3 +275,2 @@ dst_w = len(max(dst_report.splitlines(), key=len)) + 1

# Dirs
tmpwin_12_gutter = 3
tmpwin_dstwin_gutter = 6

@@ -291,19 +280,11 @@

tmpwin_1 = curses.newwin(
tmpwin = curses.newwin(
tmp_h, tmp_w,
dirs_pos + int((maxtd_h - tmp_h) / 2), 0)
tmpwin_1.addstr(tmp_report_1)
dirs_pos + int(maxtd_h - tmp_h), 0)
tmpwin.addstr(tmp_report)
tmpwin.chgat(0, 0, curses.A_REVERSE)
tmpwin_2 = curses.newwin(
tmp_h, tmp_w,
dirs_pos + int((maxtd_h - tmp_h) / 2),
tmp_w + tmpwin_12_gutter)
tmpwin_2.addstr(tmp_report_2)
tmpwin_1.chgat(0, 0, curses.A_REVERSE)
tmpwin_2.chgat(0, 0, curses.A_REVERSE)
dstwin = curses.newwin(
dst_h, dst_w,
dirs_pos + int((maxtd_h - dst_h) / 2), 2 * tmp_w + tmpwin_12_gutter + tmpwin_dstwin_gutter)
dirs_pos + int((maxtd_h - dst_h) / 2), tmp_w + tmpwin_dstwin_gutter)
dstwin.addstr(dst_report)

@@ -326,4 +307,3 @@ dstwin.chgat(0, 0, curses.A_REVERSE)

jobs_win.noutrefresh()
tmpwin_1.noutrefresh()
tmpwin_2.noutrefresh()
tmpwin.noutrefresh()
dstwin.noutrefresh()

@@ -365,2 +345,7 @@ archwin.noutrefresh()

curses.wrapper(curses_main)
try:
curses.wrapper(curses_main)
except curses.error as e:
raise TerminalTooSmallError(
"Your terminal may be too small, try making it bigger.",
) from e
# TODO do we use all these?
import argparse
import contextlib
import functools
import logging

@@ -9,3 +10,2 @@ import os

import sys
import threading
import time

@@ -16,6 +16,10 @@ from datetime import datetime

import attr
import click
import pendulum
import psutil
from plotman import chia
def job_phases_for_tmpdir(d, all_jobs):

@@ -30,25 +34,11 @@ '''Return phase 2-tuples for jobs running on tmpdir d'''

def is_plotting_cmdline(cmdline):
if cmdline and 'python' in cmdline[0].lower():
cmdline = cmdline[1:]
return (
len(cmdline) >= 4
and 'python' in cmdline[0]
and cmdline[1].endswith('/chia')
and 'plots' == cmdline[2]
and 'create' == cmdline[3]
len(cmdline) >= 3
and cmdline[0].endswith("chia")
and 'plots' == cmdline[1]
and 'create' == cmdline[2]
)
# This is a cmdline argument fix for https://github.com/ericaltendorf/plotman/issues/41
def cmdline_argfix(cmdline):
known_keys = 'krbut2dne'
for i in cmdline:
# If the argument starts with dash and a known key and is longer than 2,
# then an argument is passed with no space between its key and value.
# This is POSIX compliant but the arg parser was tripping over it.
# In these cases, splitting that item up in separate key and value
# elements results in a `cmdline` list that is correctly formatted.
if i[0]=='-' and i[1] in known_keys and len(i)>2:
yield i[0:2] # key
yield i[2:] # value
else:
yield i
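For illustration, here is the helper above as a standalone, runnable sketch showing how a fused key/value argument gets split while other arguments pass through untouched:

```python
def cmdline_argfix(cmdline):
    """Split fused short options like '-k32' into separate '-k', '32' items."""
    known_keys = 'krbut2dne'
    for i in cmdline:
        # A dash, a known single-letter key, and more than two characters
        # means the value is fused to the key with no separating space.
        if i[0] == '-' and i[1] in known_keys and len(i) > 2:
            yield i[0:2]  # key
            yield i[2:]   # value
        else:
            yield i

# '-k32' is split; '-n' and '--nobitfield' are passed through unchanged.
print(list(cmdline_argfix(['-k32', '-n', '1', '--nobitfield'])))
```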
def parse_chia_plot_time(s):

@@ -58,2 +48,79 @@ # This will grow to try ISO8601 as well for when Chia logs that way

def parse_chia_plots_create_command_line(command_line):
    command_line = list(command_line)
    # Parse command line args
    if 'python' in command_line[0].lower():
        command_line = command_line[1:]
    assert len(command_line) >= 3
    assert 'chia' in command_line[0]
    assert 'plots' == command_line[1]
    assert 'create' == command_line[2]
    all_command_arguments = command_line[3:]
    # nice idea, but this doesn't include -h
    # help_option_names = command.get_help_option_names(ctx=context)
    help_option_names = {'--help', '-h'}
    command_arguments = [
        argument
        for argument in all_command_arguments
        if argument not in help_option_names
    ]
    # TODO: We could at some point do chia version detection and pick the
    # associated command. For now we'll just use the latest one we have
    # copied.
    command = chia.commands.latest_command()
    try:
        context = command.make_context(info_name='', args=list(command_arguments))
    except click.ClickException as e:
        error = e
        params = {}
    else:
        error = None
        params = context.params
    return ParsedChiaPlotsCreateCommand(
        error=error,
        help=len(all_command_arguments) > len(command_arguments),
        parameters=params,
    )
class ParsedChiaPlotsCreateCommand:
    def __init__(self, error, help, parameters):
        self.error = error
        self.help = help
        self.parameters = parameters
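The parsing flow above (strip the help flags, parse what remains, and record any parse failure instead of raising) can be sketched without click. This stand-in uses stdlib `argparse` with a single illustrative `-k`/`--size` option and a hypothetical `parse_create_args` name, so it is an approximation of the flow, not plotman's actual parser:

```python
import argparse

def parse_create_args(command_line):
    """Dependency-free sketch of the 'chia plots create' parsing flow."""
    command_line = list(command_line)
    # Drop a leading interpreter, mirroring the real parser.
    if 'python' in command_line[0].lower():
        command_line = command_line[1:]
    assert command_line[1:3] == ['plots', 'create']
    all_arguments = command_line[3:]
    # Pull help flags out before parsing, but remember they were present.
    help_option_names = {'--help', '-h'}
    arguments = [a for a in all_arguments if a not in help_option_names]
    parser = argparse.ArgumentParser(add_help=False, exit_on_error=False)
    parser.add_argument('-k', '--size', type=int, default=32)
    try:
        # parse_known_args tolerates options this sketch does not model.
        namespace, _extra = parser.parse_known_args(arguments)
        error, parameters = None, vars(namespace)
    except argparse.ArgumentError as e:
        error, parameters = e, {}
    return {
        'help': len(all_arguments) > len(arguments),
        'error': error,
        'parameters': parameters,
    }

parsed = parse_create_args(['python', 'chia', 'plots', 'create', '-k32', '--help'])
```

Requires Python 3.9+ for `exit_on_error=False`; the real implementation gets the same error-capture behavior from `click.ClickException`.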
@functools.total_ordering
@attr.frozen(order=False)
class Phase:
    major: int = 0
    minor: int = 0
    known: bool = True

    def __lt__(self, other):
        return (
            (not self.known, self.major, self.minor)
            < (not other.known, other.major, other.minor)
        )

    @classmethod
    def from_tuple(cls, t):
        if len(t) != 2:
            raise Exception(f'phase must be created from 2-tuple: {t!r}')
        if None in t and not t[0] is t[1]:
            raise Exception(f'phase can not be partially known: {t!r}')
        if t[0] is None:
            return cls(known=False)
        return cls(major=t[0], minor=t[1])

    @classmethod
    def list_from_tuples(cls, l):
        return [cls.from_tuple(t) for t in l]
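The ordering rule this class encodes can be demonstrated with a dependency-free sketch, using stdlib `dataclasses` in place of the `attr` package the real module uses (same fields, same `__lt__` logic):

```python
import functools
from dataclasses import dataclass

@functools.total_ordering
@dataclass(frozen=True)
class Phase:
    major: int = 0
    minor: int = 0
    known: bool = True

    def __lt__(self, other):
        # Known phases order by (major, minor); an unknown phase sorts
        # after every known one because (not known) is True for it.
        return (
            (not self.known, self.major, self.minor)
            < (not other.known, other.major, other.minor)
        )

    @classmethod
    def from_tuple(cls, t):
        if len(t) != 2:
            raise Exception(f'phase must be created from 2-tuple: {t!r}')
        if None in t and t[0] is not t[1]:
            raise Exception(f'phase can not be partially known: {t!r}')
        if t[0] is None:
            return cls(known=False)
        return cls(major=t[0], minor=t[1])

assert Phase(3, 5) < Phase(3, 6)
assert Phase.from_tuple((None, None)) > Phase(4, 0)
```

`functools.total_ordering` derives the remaining comparisons from `__lt__` and the dataclass-generated `__eq__`, mirroring what `@attr.frozen(order=False)` plus `total_ordering` achieves in the real code.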
# TODO: be more principled and explicit about what we cache vs. what we look up

@@ -64,11 +131,2 @@ # dynamically from the logfile

# These are constants, not updated during a run.
k = 0
r = 0
u = 0
b = 0
n = 0 # probably not used
tmpdir = ''
tmp2dir = ''
dstdir = ''
logfile = ''

@@ -79,7 +137,3 @@ jobfile = ''

proc = None # will get a psutil.Process
help = False
# These are dynamic, cached, and need to be updated periodically
phase = (None, None) # Phase/subphase
def get_running_jobs(logroot, cached_jobs=()):

@@ -100,4 +154,15 @@ '''Return a list of running plot jobs. If a cache of preexisting jobs is provided,

else:
job = Job(proc, logroot)
if not job.help:
with proc.oneshot():
parsed_command = parse_chia_plots_create_command_line(
command_line=proc.cmdline(),
)
if parsed_command.error is not None:
continue
job = Job(
proc=proc,
parsed_command=parsed_command,
logroot=logroot,
)
if job.help:
continue
jobs.append(job)

@@ -107,65 +172,69 @@

def __init__(self, proc, logroot):
def __init__(self, proc, parsed_command, logroot):
'''Initialize from an existing psutil.Process object. Must know logroot in order to understand open files.'''
self.proc = proc
# These are dynamic, cached, and need to be updated periodically
self.phase = Phase(known=False)
with self.proc.oneshot():
# Parse command line args
args = self.proc.cmdline()
assert len(args) > 4
assert 'python' in args[0]
assert 'chia' in args[1]
assert 'plots' == args[2]
assert 'create' == args[3]
args_iter = iter(cmdline_argfix(args[4:]))
for arg in args_iter:
val = None if arg in {'-e', '--nobitfield', '-h', '--help', '--override-k'} else next(args_iter)
if arg in {'-k', '--size'}:
self.k = val
elif arg in {'-r', '--num_threads'}:
self.r = val
elif arg in {'-b', '--buffer'}:
self.b = val
elif arg in {'-u', '--buckets'}:
self.u = val
elif arg in {'-t', '--tmp_dir'}:
self.tmpdir = val
elif arg in {'-2', '--tmp2_dir'}:
self.tmp2dir = val
elif arg in {'-d', '--final_dir'}:
self.dstdir = val
elif arg in {'-n', '--num'}:
self.n = val
elif arg in {'-h', '--help'}:
self.help = True
elif arg in {'-e', '--nobitfield', '-f', '--farmer_public_key', '-p', '--pool_public_key'}:
pass
# TODO: keep track of these
elif arg == '--override-k':
pass
self.help = parsed_command.help
self.args = parsed_command.parameters
# an example as of 1.0.5
# {
# 'size': 32,
# 'num_threads': 4,
# 'buckets': 128,
# 'buffer': 6000,
# 'tmp_dir': '/farm/yards/901',
# 'final_dir': '/farm/wagons/801',
# 'override_k': False,
# 'num': 1,
# 'alt_fingerprint': None,
# 'pool_contract_address': None,
# 'farmer_public_key': None,
# 'pool_public_key': None,
# 'tmp2_dir': None,
# 'plotid': None,
# 'memo': None,
# 'nobitfield': False,
# 'exclude_final_dir': False,
# }
self.k = self.args['size']
self.r = self.args['num_threads']
self.u = self.args['buckets']
self.b = self.args['buffer']
self.n = self.args['num']
self.tmpdir = self.args['tmp_dir']
self.tmp2dir = self.args['tmp2_dir']
self.dstdir = self.args['final_dir']
plot_cwd = self.proc.cwd()
self.tmpdir = os.path.join(plot_cwd, self.tmpdir)
if self.tmp2dir is not None:
self.tmp2dir = os.path.join(plot_cwd, self.tmp2dir)
self.dstdir = os.path.join(plot_cwd, self.dstdir)
# Find logfile (whatever file is open under the log root). The
# file may be open more than once, e.g. for STDOUT and STDERR.
for f in self.proc.open_files():
if logroot in f.path:
if self.logfile:
assert self.logfile == f.path
else:
print('Warning: unrecognized args: %s %s' % (arg, val))
self.logfile = f.path
break
# Find logfile (whatever file is open under the log root). The
# file may be open more than once, e.g. for STDOUT and STDERR.
if self.logfile:
# Initialize data that needs to be loaded from the logfile
self.init_from_logfile()
else:
print('Found plotting process PID {pid}, but could not find '
'logfile in its open files:'.format(pid = self.proc.pid))
for f in self.proc.open_files():
if logroot in f.path:
if self.logfile:
assert self.logfile == f.path
else:
self.logfile = f.path
break
print(f.path)
if self.logfile:
# Initialize data that needs to be loaded from the logfile
self.init_from_logfile()
else:
print('Found plotting process PID {pid}, but could not find '
'logfile in its open files:'.format(pid = self.proc.pid))
for f in self.proc.open_files():
print(f.path)
def init_from_logfile(self):

@@ -258,5 +327,5 @@ '''Read plot ID and job start time from logfile. Return true if we

phase = max(phase_subphases.keys())
self.phase = (phase, phase_subphases[phase])
self.phase = Phase(major=phase, minor=phase_subphases[phase])
else:
self.phase = (0, 0)
self.phase = Phase(major=0, minor=0)

@@ -344,3 +413,7 @@ def progress(self):

for f in self.proc.open_files():
if self.tmpdir in f.path or self.tmp2dir in f.path or self.dstdir in f.path:
if any(
dir in f.path
for dir in [self.tmpdir, self.tmp2dir, self.dstdir]
if dir is not None
):
temp_files.add(f.path)

@@ -347,0 +420,0 @@ return temp_files

@@ -9,6 +9,6 @@ import logging

import sys
import threading
import time
from datetime import datetime
import pendulum
import psutil

@@ -41,2 +41,4 @@

for j in all_jobs:
if j.dstdir is None:
continue
if not j.dstdir in result.keys() or result[j.dstdir] > j.progress():

@@ -50,3 +52,3 @@ result[j.dstdir] = j.progress()

# Filter unknown-phase jobs
phases = [ph for ph in phases if ph[0] is not None and ph[1] is not None]
phases = [ph for ph in phases if ph.known]

@@ -56,3 +58,6 @@ if len(phases) == 0:

milestone = (sched_cfg.tmpdir_stagger_phase_major, sched_cfg.tmpdir_stagger_phase_minor)
milestone = job.Phase(
major=sched_cfg.tmpdir_stagger_phase_major,
minor=sched_cfg.tmpdir_stagger_phase_minor,
)
# tmpdir_stagger_phase_limit default is 1, as declared in configuration.py

@@ -84,3 +89,3 @@ if len([p for p in phases if p < milestone]) >= sched_cfg.tmpdir_stagger_phase_limit:

elif len(jobs) >= sched_cfg.global_max_jobs:
wait_reason = 'max jobs (%d)' % sched_cfg.global_max_jobs
wait_reason = 'max jobs (%d) - (%ds/%ds)' % (sched_cfg.global_max_jobs, youngest_job_age, global_stagger)
else:

@@ -90,7 +95,7 @@ tmp_to_all_phases = [(d, job.job_phases_for_tmpdir(d, jobs)) for d in dir_cfg.tmp]

if phases_permit_new_job(phases, d, sched_cfg, dir_cfg) ]
rankable = [ (d, phases[0]) if phases else (d, (999, 999))
rankable = [ (d, phases[0]) if phases else (d, job.Phase(known=False))
for (d, phases) in eligible ]
if not eligible:
wait_reason = 'no eligible tempdirs'
wait_reason = 'no eligible tempdirs (%ds/%ds)' % (youngest_job_age, global_stagger)
else:

@@ -102,3 +107,3 @@ # Plot to oldest tmpdir.

dir2ph = { d:ph for (d, ph) in dstdirs_to_youngest_phase(jobs).items()
if d in dir_cfg.dst }
if d in dir_cfg.dst and ph is not None}
unused_dirs = [d for d in dir_cfg.dst if d not in dir2ph.keys()]

@@ -112,3 +117,3 @@ dstdir = ''

logfile = os.path.join(
dir_cfg.log, datetime.now().strftime('%Y-%m-%d-%H:%M:%S.log')
dir_cfg.log, pendulum.now().isoformat(timespec='microseconds').replace(':', '_') + '.log'
)
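The new log name uses microsecond-precision ISO 8601 with `:` replaced, which both avoids filename collisions between near-simultaneous launches and keeps names safe on filesystems that dislike colons. A stdlib-only sketch of the same naming scheme (the diff itself uses `pendulum`, whose datetimes expose the same `isoformat` API):

```python
# Sketch: collision-resistant, filesystem-safe log file names.
# datetime stands in for pendulum here; the format logic is identical.
from datetime import datetime

name = datetime.now().isoformat(timespec='microseconds').replace(':', '_') + '.log'
# e.g. 2021-05-01T12_34_56.789012.log
```

Microsecond precision makes two plotman cycles producing the same name effectively impossible, which is what lets the exclusive-create check further down treat a name collision as "someone else already started this plot".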

@@ -137,8 +142,36 @@

# start_new_sessions to make the job independent of this controlling tty.
p = subprocess.Popen(plot_args,
stdout=open(logfile, 'w'),
stderr=subprocess.STDOUT,
start_new_session=True)
try:
open_log_file = open(logfile, 'x')
except FileExistsError:
# The desired log file name already exists. Most likely another
# plotman process already launched a new process in response to
# the same scenario that triggered us. Let's at least not
# confuse things further by having two plotting processes
# logging to the same file. If we really should launch another
# plotting process, we'll get it at the next check cycle anyways.
message = (
f'Plot log file already exists, skipping attempt to start a'
f' new plot: {logfile!r}'
)
return (False, message)
except FileNotFoundError as e:
message = (
f'Unable to open log file. Verify that the directory exists'
f' and has proper write permissions: {logfile!r}'
)
raise Exception(message) from e
# Preferably, do not add any code between the try block above
# and the with block below. IOW, this space intentionally left
# blank... As is, this provides a good chance that our handle
# of the log file will get closed explicitly while still
# allowing handling of just the log file opening error.
with open_log_file:
# start_new_sessions to make the job independent of this controlling tty.
p = subprocess.Popen(plot_args,
stdout=open_log_file,
stderr=subprocess.STDOUT,
start_new_session=True)
psutil.Process(p.pid).nice(15)
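The `open(logfile, 'x')` call is the load-bearing piece here: mode `'x'` creates the file only if it does not already exist, turning a potential two-launchers race into a clean `FileExistsError` for the loser. A minimal, self-contained sketch of that pattern:

```python
# Sketch of the exclusive-create pattern used when opening the plot log:
# mode 'x' fails with FileExistsError if the file already exists, so two
# concurrent launchers can never log to the same file.
import os
import tempfile

def open_log_exclusively(path):
    try:
        return open(path, 'x')
    except FileExistsError:
        # Another process won the race; skip and retry next cycle.
        return None

d = tempfile.mkdtemp()
path = os.path.join(d, 'plot.log')
first = open_log_exclusively(path)   # wins the race, gets a handle
second = open_log_exclusively(path)  # loses the race, gets None
first.close()
```

Compared with `open(logfile, 'w')` in the old code, which would silently truncate and share the file, this makes the collision explicit and recoverable.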

@@ -145,0 +178,0 @@ return (True, logmsg)

@@ -54,4 +54,7 @@ import math

plot = os.path.join(d, plot)
if os.stat(plot).st_size > (0.95 * get_k32_plotsize()):
plots.append(plot)
try:
if os.stat(plot).st_size > (0.95 * get_k32_plotsize()):
plots.append(plot)
except FileNotFoundError:
continue

@@ -58,0 +61,0 @@ return plots

@@ -135,3 +135,5 @@ import argparse

cfg = configuration.get_validated_configs()
config_path = configuration.get_path()
config_text = configuration.read_configuration_text(config_path)
cfg = configuration.get_validated_configs(config_text, config_path)

@@ -184,4 +186,8 @@ #

firstit = False
archive.archive(cfg.directories, jobs)
archiving_status, log_message = archive.spawn_archive_process(cfg.directories, jobs)
if log_message:
print(log_message)
# Debugging: show the destination drive usage schedule

@@ -207,5 +213,5 @@ elif args.cmd == 'dsched':

if (len(selected) == 0):
print('Error: %s matched no jobs.' % id_spec)
print('Error: %s matched no jobs.' % args.idprefix[0])
elif len(selected) > 1:
print('Error: "%s" matched multiple jobs:' % id_spec)
print('Error: "%s" matched multiple jobs:' % args.idprefix[0])
for j in selected:

@@ -212,0 +218,0 @@ print(' %s' % j.plot_id)

@@ -16,7 +16,8 @@ import math

def phase_str(phase_pair):
(ph, subph) = phase_pair
return ((str(ph) if ph is not None else '?') + ':'
+ (str(subph) if subph is not None else '?'))
def phase_str(phase):
if not phase.known:
return '?:?'
return f'{phase.major}:{phase.minor}'
def phases_str(phases, max_num=None):

@@ -54,11 +55,11 @@ '''Take a list of phase-subphase pairs and return them as a compact string'''

for i in range(0, 8):
result += n_to_char(n_at_ph(jobs, (1, i)))
result += n_to_char(n_at_ph(jobs, job.Phase(1, i)))
result += '2'
for i in range(0, 8):
result += n_to_char(n_at_ph(jobs, (2, i)))
result += n_to_char(n_at_ph(jobs, job.Phase(2, i)))
result += '3'
for i in range(0, 7):
result += n_to_char(n_at_ph(jobs, (3, i)))
result += n_to_char(n_at_ph(jobs, job.Phase(3, i)))
result += '4'
result += n_to_char(n_at_ph(jobs, (4, 0)))
result += n_to_char(n_at_ph(jobs, job.Phase(4, 0)))
return result

@@ -100,17 +101,18 @@

try:
row = [j.plot_id[:8],
j.k,
abbr_path(j.tmpdir, tmp_prefix),
abbr_path(j.dstdir, dst_prefix),
plot_util.time_format(j.get_time_wall()),
phase_str(j.progress()),
plot_util.human_format(j.get_tmp_usage(), 0),
j.proc.pid,
j.get_run_status(),
plot_util.human_format(j.get_mem_usage(), 1),
plot_util.time_format(j.get_time_user()),
plot_util.time_format(j.get_time_sys()),
plot_util.time_format(j.get_time_iowait())
]
except psutil.NoSuchProcess:
with j.proc.oneshot():
row = [j.plot_id[:8],
j.k,
abbr_path(j.tmpdir, tmp_prefix),
abbr_path(j.dstdir, dst_prefix),
plot_util.time_format(j.get_time_wall()),
phase_str(j.progress()),
plot_util.human_format(j.get_tmp_usage(), 0),
j.proc.pid,
j.get_run_status(),
plot_util.human_format(j.get_mem_usage(), 1),
plot_util.time_format(j.get_time_user()),
plot_util.time_format(j.get_time_sys()),
plot_util.time_format(j.get_time_iowait())
]
except (psutil.NoSuchProcess, psutil.AccessDenied):
# In case the job has disappeared

@@ -141,3 +143,3 @@ row = [j.plot_id[:8]] + (['--'] * 12)

ready = manager.phases_permit_new_job(phases, d, sched_cfg, dir_cfg)
row = [abbr_path(d, prefix), 'OK' if ready else '--', phases_str(phases)]
row = [abbr_path(d, prefix), 'OK' if ready else '--', phases_str(phases, 5)]
tab.add_row(row)

@@ -193,8 +195,12 @@

def dirs_report(jobs, dir_cfg, sched_cfg, width):
return (
tmp_dir_report(jobs, dir_cfg, sched_cfg, width) + '\n' +
dst_dir_report(jobs, dir_cfg.dst, width) + '\n' +
'archive dirs free space:\n' +
arch_dir_report(archive.get_archdir_freebytes(dir_cfg.archive), width) + '\n'
)
reports = [
tmp_dir_report(jobs, dir_cfg, sched_cfg, width),
dst_dir_report(jobs, dir_cfg.dst, width),
]
if dir_cfg.archive is not None:
reports.extend([
'archive dirs free space:',
arch_dir_report(archive.get_archdir_freebytes(dir_cfg.archive), width),
])
return '\n'.join(reports) + '\n'

@@ -62,7 +62,11 @@ # Default/example plotman.yaml configuration file

# Currently archival depends on an rsync daemon running on the remote
# host, and that the module is configured to match the local path.
# See code for details.
# host.
# The archival also uses ssh to connect to the remote host and check
# for available directories. Set up ssh keys on the remote host to
# allow public key login from rsyncd_user.
# Complete example: https://github.com/ericaltendorf/plotman/wiki/Archiving
archive:
rsyncd_module: plots
rsyncd_path: /plots
rsyncd_module: plots # Define this in remote rsyncd.conf.
rsyncd_path: /plots # This is used via ssh. Should match path
# defined in the module referenced above.
rsyncd_bwlimit: 80000 # Bandwidth limit in KB/s

@@ -85,7 +89,10 @@ rsyncd_host: myfarmer

# Run a job on a particular temp dir only if the number of existing jobs
# before tmpdir_stagger_phase_major tmpdir_stagger_phase_minor
# before [tmpdir_stagger_phase_major : tmpdir_stagger_phase_minor]
# is less than tmpdir_stagger_phase_limit.
# Phase major corresponds to the plot phase, phase minor corresponds to
# the table or table pair in sequence, phase limit corresponds to
# the number of plots allowed before [phase major, phase minor]
# the number of plots allowed before [phase major : phase minor].
# e.g., with default settings, a new plot will start only when your plot
# reaches phase [2 : 1] on your temp drive. This setting takes precedence
# over global_stagger_m.
tmpdir_stagger_phase_major: 2

@@ -102,6 +109,6 @@ tmpdir_stagger_phase_minor: 1

# Don't run any jobs (across all temp dirs) more often than this.
# Don't run any jobs (across all temp dirs) more often than this, in minutes.
global_stagger_m: 30
# How often the daemon wakes to consider starting a new plot job
# How often the daemon wakes to consider starting a new plot job, in seconds.
polling_time_s: 20

@@ -115,8 +122,8 @@

k: 32
e: True # Use -e plotting option
n_threads: 8 # Threads per job
e: False # Use -e plotting option
n_threads: 2 # Threads per job
n_buckets: 128 # Number of buckets to split data into
job_buffer: 4520 # Per job memory
job_buffer: 3389 # Per job memory
# If specified, pass through to the -f and -p options. See CLI reference.
# farmer_pk: ...
# pool_pk: ...

@@ -6,2 +6,4 @@ [tox]

changedir = {envtmpdir}
setenv =
COVERAGE_FILE={toxinidir}/.coverage

@@ -12,3 +14,3 @@ [testenv:test-py{37,38,39}]

commands =
pytest --capture=no --verbose --pyargs plotman
pytest --capture=no --verbose --cov=plotman --cov-report=term-missing --cov-report=xml:{toxinidir}/coverage.xml --pyargs plotman

@@ -20,1 +22,11 @@ [testenv:check]

check-manifest --verbose {toxinidir}
[testenv:check-coverage]
changedir = {toxinidir}
extras =
coverage
commands =
coverage combine coverage_reports/
coverage xml -o coverage.xml
coverage report --fail-under=35 --ignore-errors --show-missing
diff-cover --fail-under=100 {posargs:--compare-branch=development} coverage.xml

@@ -1,1 +0,1 @@

0.2
0.3
