plotman - pypi package: comparing version 0.3.1 to 0.4
src/plotman/chiapos.py (+87)
# version = 1.0.2
# https://github.com/Chia-Network/chiapos/blob/1.0.2/LICENSE
# https://github.com/Chia-Network/chiapos/blob/1.0.2/src/pos_constants.hpp
# start ported code
# Unique plot id which will be used as a ChaCha8 key, and determines the PoSpace.
kIdLen = 32
# Distance between matching entries is stored in the offset
kOffsetSize = 10
# Max matches a single entry can have, used for hardcoded memory allocation
kMaxMatchesSingleEntry = 30
kMinBuckets = 16
kMaxBuckets = 128
# During backprop and compress, the write pointer is ahead of the read pointer
# Note that the larger the offset, the higher these values must be
kReadMinusWrite = 1 << kOffsetSize
kCachedPositionsSize = kReadMinusWrite * 4
# Must be set high enough to prevent attacks of fast plotting
kMinPlotSize = 18
# Set to 50 since k + kExtraBits + k*4 must not exceed 256 (BLAKE3 output size)
kMaxPlotSize = 50
# The amount of spare space used for sort on disk (multiplied times memory buffer size)
kSpareMultiplier = 5
# The proportion of memory to allocate to the Sort Manager for reading in buckets and sorting them
# The lower this number, the more memory must be provided by the caller. However, lowering the
# number also allows a higher proportion for writing, which reduces seeks for HDD.
kMemSortProportion = 0.75
kMemSortProportionLinePoint = 0.85
# How many f7s per C1 entry, and how many C1 entries per C2 entry
kCheckpoint1Interval = 10000
kCheckpoint2Interval = 10000
# F1 evaluations are done in batches of 2^kBatchSizes
kBatchSizes = 8
# EPP for the final file, the higher this is, the less variability, and lower delta
# Note: if this is increased, ParkVector size must increase
kEntriesPerPark = 2048
# To store deltas for EPP entries, the average delta must be less than this number of bits
kMaxAverageDeltaTable1 = 5.6
kMaxAverageDelta = 3.5
# C3 entries contain deltas for f7 values, the max average size is the following
kC3BitsPerEntry = 2.4
# The number of bits in the stub is k minus this value
kStubMinusBits = 3
# end ported code

# version = 1.0.2
# https://github.com/Chia-Network/chiapos/blob/1.0.2/LICENSE
# https://github.com/Chia-Network/chiapos/blob/1.0.2/src/util.hpp
# start ported code
def ByteAlign(num_bits):
    return num_bits + (8 - (num_bits % 8)) % 8
# end ported code

# version = 1.0.2
# https://github.com/Chia-Network/chiapos/blob/1.0.2/LICENSE
# https://github.com/Chia-Network/chiapos/blob/1.0.2/src/entry_sizes.hpp
# start ported code
def CalculateLinePointSize(k):
    return ByteAlign(2 * k) / 8

# This is the full size of the deltas section in a park. However, it will not be fully filled
def CalculateMaxDeltasSize(k, table_index):
    if table_index == 1:
        return ByteAlign((kEntriesPerPark - 1) * kMaxAverageDeltaTable1) / 8
    return ByteAlign((kEntriesPerPark - 1) * kMaxAverageDelta) / 8

def CalculateStubsSize(k):
    return ByteAlign((kEntriesPerPark - 1) * (k - kStubMinusBits)) / 8

def CalculateParkSize(k, table_index):
    return CalculateLinePointSize(k) + CalculateStubsSize(k) + CalculateMaxDeltasSize(k, table_index)
# end ported code
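As a worked example of the park-size formulas above, here is the arithmetic for `k = 32` on a table other than table 1; this is a standalone sketch with the relevant ported constants inlined, not part of the package:

```python
# Worked example of CalculateParkSize for k = 32, table_index != 1,
# with the relevant ported constants inlined for self-containment.
kEntriesPerPark = 2048
kMaxAverageDelta = 3.5
kStubMinusBits = 3

def byte_align(num_bits):
    # Round a bit count up to a whole number of bytes (result still in bits).
    return num_bits + (8 - num_bits % 8) % 8

k = 32
line_point = byte_align(2 * k) / 8                                    # 8.0 bytes
stubs = byte_align((kEntriesPerPark - 1) * (k - kStubMinusBits)) / 8  # 7421.0 bytes
deltas = byte_align((kEntriesPerPark - 1) * kMaxAverageDelta) / 8     # 896.0 bytes
park_size = line_point + stubs + deltas
print(park_size)  # 8325.0
```

So a k32 park in these tables occupies 8325 bytes: a fixed-width line point, fixed-width stubs, and a deltas section sized for the maximum average delta.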
target_definitions:
  local_rsync:
    env:
      command: rsync
      options: --preallocate --remove-source-files --skip-compress plot --whole-file
      site_root: null
    # The disk space script must return a line for each directory
    # to consider archiving to with the following form.
    #
    # /some/path:1000000000000
    #
    # That line tells plotman that it should consider archiving
    # plots to files at paths such as /some/path/theplotid.plot and
    # that there is 1TB of space available for use in that
    # directory.
    disk_space_script: |
      #!/bin/bash
      set -evx
      site_root_stripped=$(echo "${site_root}" | sed 's;/\+$;;')
      # printf with %.0f used to handle mawk such as in Ubuntu Docker images
      # otherwise it saturates and you get saturated sizes like 2147483647
      df -BK | grep " ${site_root_stripped}/" | awk '{ gsub(/K$/,"",$4); printf "%s:%.0f\n", $6, $4*1024 }'
    transfer_script: |
      #!/bin/bash
      set -evx
      "${command}" ${options} "${source}" "${destination}"
    transfer_process_name: "{command}"
    transfer_process_argument_prefix: "{site_root}"
  rsyncd:
    env:
      # A value of null indicates a mandatory option
      command: rsync
      options: --bwlimit=80000 --preallocate --remove-source-files --skip-compress plot --whole-file
      rsync_port: 873
      ssh_port: 22
      user: null
      host: null
      site_root: null
      site: null
    disk_space_script: |
      #!/bin/bash
      set -evx
      site_root_stripped=$(echo "${site_root}" | sed 's;/\+$;;')
      # printf with %.0f used to handle mawk such as in Ubuntu Docker images
      # otherwise it saturates and you get saturated sizes like 2147483647
      ssh -p "${ssh_port}" "${user}@${host}" "df -BK | grep \" $(echo "${site_root_stripped}" | sed 's;/\+$;;')/\" | awk '{ gsub(/K\$/,\"\",\$4); printf \"%s:%.0f\n\", \$6, \$4*1024 }'"
    transfer_script: |
      #!/bin/bash
      set -evx
      echo Launching transfer activity
      relative_path=$(realpath --canonicalize-missing --relative-to="${site_root}" "${destination}")
      url_root="rsync://${user}@${host}:${rsync_port}/${site}"
      "${command}" ${options} "${source}" "${url_root}/${relative_path}"
    transfer_process_name: "{command}"
    transfer_process_argument_prefix: "rsync://{user}@{host}:{rsync_port}/{site}"
  # external_script:
  #   env:
  #     some_common_value_with_a_default: /a/path
  #     some_mandatory option: null
  #   disk_space_path: /home/me/my_disk_space_script.sh
  #   transfer_path: /home/me/my_transfer_script.sh
  #   transfer_process_name: rsync
  #   transfer_process_argument_prefix: /the/destination/directory/root
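The `path:bytes` protocol emitted by a `disk_space_script` is straightforward to consume; this is a hypothetical parser for illustration, not plotman's actual implementation:

```python
# Hypothetical parser for the "path:bytes" lines a disk_space_script emits.
def parse_disk_space(output: str) -> dict:
    entries = {}
    for line in output.splitlines():
        if not line.strip():
            continue
        # rpartition splits on the last colon, tolerating colons in the path
        path, _, size = line.rpartition(":")
        entries[path] = int(size)
    return entries

sample = "/some/path:1000000000000\n/farm/drive2:2000000000000\n"
print(parse_disk_space(sample)["/some/path"])  # 1000000000000
```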
+47 -3

@@ -8,4 +8,48 @@ # Change Log

## [0.1.1] - 2020-02-07
## [0.4] - 2021-06-10
### Fixed
- More accurately calculates expected size of plots.
- Archival requires only minimal extra space on target drive.
The required space is based on the size of the actual plot to be transferred.
Previously a 20% (~20GB) margin was required relative to a rough approximation of plot size.
- Identify more cases of chia plotting processes such as on NixOS.
- Avoid some more `NoSuchProcess` and `AccessDenied` errors when identifying plotting processes.
- Avoid crashing when parsing plotting process logs fails to decode due to `UnicodeDecodeError`.
- Avoid crashing when a tmp file is removed while we are checking a job's tmp usage.
- Windows is not yet supported, but plot and archive processes are now launched to be independent of the plotman process on Windows as it already was on Linux.
### Added
- Configuration file is versioned.
The config for previous plotman versions has been retroactively defined to be version 0.
The new version is 1.
An error will be raised when you launch plotman with a configuration file whose version does not match the expected configuration version.
That error will include a link to the wiki to help understand the needed changes.
See [the wiki configuration page](https://github.com/ericaltendorf/plotman/wiki/Configuration#1-v04).
- Archiving configuration has been reworked offering both a simple builtin local archiving setup as well as arbitrary configuration of the disk space check and transfer operations.
See [the wiki archiving page](https://github.com/ericaltendorf/plotman/wiki/Archiving)
- The `directories:` `dst:` section is optional.
If not specified then generally the tmp drive for the plot will be used as dst.
If tmp2 is specified then it will be used as dst.
- Along with plot logs, there are now archive transfer logs and an overall plotman log.
This helps with diagnosing issues with both the archival disk space check and the archival transfers.
The paths are configurable under `logging:` via `plots:` (directory), `transfers:` (directory), and `application:` (file).
- Added support for `-c`/`--pool_contract_address`.
Configurable as `plotting:` `pool_contract_address:`.
- Interactive can be launched with plotting and archiving inactive.
This is available via the configuration file in `commands:` `interactive:` `autostart_plotting:` and `autostart_archiving:`.
They are also available on the command line as `--[no-]autostart-plotting` and `--[no-]autostart-archiving`.
- Uses `i` to differentiate between gigabytes and gibibytes, for example `G` vs. `Gi`.
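For reference, the two unit families differ by roughly 7% at the giga scale (illustrative arithmetic only, not plotman code):

```python
G = 10**9    # gigabyte (SI, decimal)
Gi = 2**30   # gibibyte (binary)
print(Gi - G)            # 73741824 bytes difference
print(round(Gi / G, 3))  # 1.074
```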
## [0.3.1] - 2021-05-13
Changes not documented.
Bug fixes for v0.3.1.
## [0.3] - 2021-05-12
Changes not documented.
## [0.2] - 2021-04-20
Changes not documented.
## [0.1.1] - 2021-02-07
### Fixed
- Find jobs more reliably by inspecting cmdline instead of "process name"

@@ -18,8 +62,8 @@ - checked-in config.yaml now conforms to code's expectations!

## [0.1.0] - 2020-01-31
## [0.1.0] - 2021-01-31
### Fixed
- Fixed issue with prioritization of tmp dirs
## [0.0.1] - 2020-01-30
## [0.0.1] - 2021-01-30
### Added
- `.gitignore` and `CHANGELOG.md`
+248 -210
Metadata-Version: 2.1
Name: plotman
Version: 0.3.1
Version: 0.4
Summary: Chia plotting manager

@@ -10,211 +10,2 @@ Home-page: https://github.com/ericaltendorf/plotman

Project-URL: Changelog, https://github.com/ericaltendorf/plotman/blob/main/CHANGELOG.md
Description: # `plotman`: a Chia plotting manager
This is a tool for managing [Chia](https://github.com/Chia-Network/chia-blockchain)
plotting operations. The tool runs on the plotting machine and provides
the following functionality:
- Automatic spawning of new plotting jobs, possibly overlapping ("staggered")
on multiple temp directories, rate-limited globally and by per-temp-dir
limits.
- Rsync'ing of newly generated plots to a remote host (a farmer/harvester),
called "archiving".
- Monitoring of ongoing plotting and archiving jobs, progress, resources used,
temp files, etc.
- Control of ongoing plotting jobs (suspend, resume, plus kill and clean up
temp files).
- Both an interactive live dashboard mode as well as command line mode tools.
- (very alpha) Analyzing performance statistics of past jobs, to aggregate on
various plotting parameters or temp dir type.
Plotman is designed for the following configuration:
- A plotting machine with an array of `tmp` dirs, a single `tmp2` dir, and an
array of `dst` dirs to which the plot jobs plot. The `dst` dirs serve as a
temporary buffer space for generated plots.
- A farming machine with a large number of drives, made accessible via an
`rsyncd` module, and to be entirely populated with plots. These are known as
the `archive` directories.
- Plot jobs are run with STDOUT/STDERR redirected to a log file in a configured
directory. This allows analysis of progress (plot phase) as well as timing
(e.g. for analyzing performance).
## Functionality
Plotman tools are stateless. Rather than keep an internal record of what jobs
have been started, Plotman relies on the process tables, open files, and
logfiles of plot jobs to understand "what's going on". This means the tools
can be stopped and started, even from a different login session, without loss
of information. It also means Plotman can see and manage jobs started manually
or by other tools, as long as their STDOUT/STDERR is redirected to a file in a
known logfile directory. (Note: The tool relies on reading the chia plot
command line arguments and the format of the plot tool output. Changes in
those may break this tool.)
Plot scheduling is done by waiting for a certain amount of wall time since the
last job was started, finding the best (e.g. least recently used) `tmp` dir for
plotting, and ensuring that job has progressed to at least a certain point
(e.g., phase 2, subphase 5).
Plots are output to the `dst` dirs, which serve as a temporary buffer until they
are rsync'd ("archived") to the farmer/harvester. The archiver does several
things to attempt to avoid concurrent IO. First, it only allows one rsync
process at a time (more sophisticated scheduling could remove this
restriction, but it's nontrivial). Second, it inspects the pipeline of plot
jobs to see which `dst` dirs are about to have plots written to them. This
is balanced against how full the `dst` drives are in a priority scheme.
It is, obviously, necessary that your rsync bandwidth exceeds your plotting
bandwidth. Given this, in normal operation, the `dst` dirs remain empty until
a plot is finished, after which it is shortly thereafter picked up by the
archive job. However, the decoupling provided by using `dst` drives as a
buffer means that should the farmer/harvester or the network become
unavailable, plotting continues uninterrupted.
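The scheduling gate described above (stagger elapsed, plus a minimum-phase requirement) can be sketched roughly as follows; the names, the 1800 s stagger, and the `(2, 5)` threshold are illustrative defaults, not plotman's actual code:

```python
from dataclasses import dataclass

@dataclass
class Job:
    tmp_dir: str
    phase: tuple  # (major, minor), e.g. (2, 5)

def can_start_plot(jobs, seconds_since_last_start,
                   stagger_s=1800, min_phase=(2, 5)):
    # Wait out the stagger, then require every running job to have
    # progressed past the configured phase threshold.
    if seconds_since_last_start < stagger_s:
        return False
    return all(job.phase >= min_phase for job in jobs)

print(can_start_plot([Job("00", (3, 1))], 2000))  # True
print(can_start_plot([Job("00", (1, 4))], 2000))  # False
```

Tuple comparison gives the natural phase ordering: `(3, 1) >= (2, 5)` because the major phase dominates. The real scheduler additionally picks the best (e.g. least recently used) `tmp` dir once this gate passes.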
## Screenshot Overview
```
Plotman 19:01:06 (refresh 9s/20s) | Plotting: stagger (1623s/1800s) Archival: active pid 1599918
Prefixes: tmp=/mnt/tmp dst=/home/chia/chia/plots archive=/plots (remote)
# plot id k tmp dst wall phase tmp pid stat mem user sys io
0 6b4e7375... 32 03 001 0:27 1:2 71G 1590196 SLP 5.5G 0:52 0:02 0s
1 9ab50d0e... 32 02 005 1:00 1:4 199G 1539209 SLP 5.5G 3:50 0:09 0s
2 018cf561... 32 01 000 1:32 1:5 224G 1530045 SLP 5.5G 4:46 0:11 2s
3 f771de9c... 32 00 004 2:03 1:5 241G 1524772 SLP 5.5G 5:43 0:14 2s
...
16 58045bef... 32 10 002 11:23 3:5 193G 1381622 RUN 5.4G 15:02 0:53 0:02
17 8134a2dd... 32 11 003 11:55 3:6 148G 1372206 RUN 5.4G 15:27 0:57 0:03
18 50165422... 32 08 001 12:43 3:6 102G 1357782 RUN 5.4G 16:14 1:00 0:03
19 100df84f... 32 09 005 13:19 4:0 0 1347430 DSK 705.9M 16:44 1:04 0:06
tmp ready phases tmp ready phases dst plots GB free phases priority
00 -- 1:5, 3:4 06 -- 2:4 000 1 1890 1:5, 2:2, 3:4 47
01 -- 1:5, 3:4 07 -- 2:2 001 0 1998 1:2, 1:7, 3:2, 3:6 34
02 -- 1:4, 3:3 08 -- 1:7, 3:6 002 0 1967 1:6, 2:5, 3:5 42
03 -- 1:2, 3:2 09 -- 2:1, 4:0 003 0 1998 1:6, 3:1, 3:6 34
04 OK 3:1 10 -- 1:6, 3:5 004 0 1998 1:5, 2:4, 3:4 46
05 OK 2:5 11 -- 1:6, 3:6 005 0 1955 1:4, 2:1, 3:3, 4:0 18
Archive dirs free space
000: 94GB | 005: 94GB | 012: 24GB | 017: 99GB | 022: 94GB | 027: 94GB | 032: 9998GB | 037: 9998GB
001: 94GB | 006: 93GB | 013: 25GB | 018: 94GB | 023: 94GB | 028: 94GB | 033: 9998GB |
002: 93GB | 009: 25GB | 014: 93GB | 019: 31GB | 024: 94GB | 029: 7777GB | 034: 9998GB |
003: 94GB | 010: 25GB | 015: 94GB | 020: 47GB | 025: 94GB | 030: 9998GB | 035: 9998GB |
004: 94GB | 011: 25GB | 016: 99GB | 021: 93GB | 026: 94GB | 031: 9998GB | 036: 9998GB |
Log:
01-02 18:33:53 Starting plot job: chia plots create -k 32 -r 8 -u 128 -b 4580 -t /mnt/tmp/03 -2 /mnt/tmp/a -d /home/chi
01-02 18:33:53 Starting archive: rsync --bwlimit=100000 --remove-source-files -P /home/chia/chia/plots/004/plot-k32-202
01-02 18:52:40 Starting archive: rsync --bwlimit=100000 --remove-source-files -P /home/chia/chia/plots/000/plot-k32-202
```
The screenshot shows some of the main features of Plotman.
The first line shows the status. The plotting status shows whether we just
started a plot, or, if not, why not (e.g., stagger time, tmp directories being
ready, etc.; in this case, the 1800s stagger between plots has not been reached
yet). Archival status says whether we are currently archiving (and provides
the `rsync` pid) or whether there are no plots available in the `dst` drives to
archive.
The second line provides a key to some directory abbreviations used throughout.
For `tmp` and `dst` directories, we assume they have a common prefix, which is
computed and indicated here, after which they can be referred to (in context)
by their unique suffix. For example, if we have `tmp` dirs `/mnt/tmp/00`,
`/mnt/tmp/01`, `/mnt/tmp/02`, etc., we show `/mnt/tmp` as the prefix here and
can then talk about `tmp` dirs `00` or `01` etc. The `archive` directories are
the same except that these are paths on a remote host and accessed via an
`rsyncd` module (see `src/plotman/resources/plotman.yaml` for details).
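The prefix abbreviation behaves like `os.path.commonpath`; this is a sketch of the idea, not plotman's exact implementation:

```python
import os.path

tmp_dirs = ["/mnt/tmp/00", "/mnt/tmp/01", "/mnt/tmp/02"]
prefix = os.path.commonpath(tmp_dirs)                      # shared prefix
suffixes = [os.path.relpath(d, prefix) for d in tmp_dirs]  # short names
print(prefix)    # /mnt/tmp
print(suffixes)  # ['00', '01', '02']
```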
The next table shows information about the active plotting jobs. It is
abbreviated to show the most and least recently started jobs (the full list is
available via the command line mode). It shows various information about the
plot jobs, including the plot ID (first 8 chars), the directories used,
walltime, the current plot phase and subphase, space used on the `tmp` drive,
pid, etc.
The next tables are a bit hard to read; there is actually a `tmp` table on the
left which is split into two tables for rendering purposes, and a `dst` table
on the right. The `tmp` tables show the phases of the plotting jobs using
them, and whether or not they're ready to take a new plot job. The `dst` table
shows how many plots have accumulated, how much free space is left, and the
phases of jobs that are destined to write to them, and finally, the priority
computed for the archive job to move the plots away.
The last table simply shows free space of drives on the remote
harvester/farmer.
Finally, the last section shows a log of actions performed -- namely, plot and
archive jobs initiated. This is the one part of the interactive tool which is
stateful. There is no permanent record of these executed command lines, so if
you start a new interactive plotman session, this log is empty.
## Limitations and Issues
The system is tested on Linux only. Plotman should be generalizable to other
platforms, but this is not done yet. Some of the issues around making calls
out to command line programs (e.g., running `df` over `ssh` to obtain the free
space on the remote archive directories) are very linux-y.
The interactive mode uses the `curses` library ... poorly. Keypresses are
not received, screen resizing does not work, and the minimum terminal size
is pretty big.
Plotman assumes all plots are k32s. Again, this is just an unimplemented
generalization.
Many features are inconsistently supported between either the "interactive"
mode or the command line mode.
There are many bugs and TODOs.
Plotman will always look for the `plotman.yaml` file within your computer at an OS-based
default location. To generate a default `plotman.yaml`, run:
```shell
> plotman config generate
```
To display the current location of your `plotman.yaml` file and check if it exists, run:
```shell
> plotman config path
```
([See also](https://github.com/ericaltendorf/plotman/pull/61#issuecomment-812967363)).
## Installation
Installation for Linux and macOS:
1. Plotman assumes that a functioning [Chia](https://github.com/Chia-Network/chia-blockchain)
installation is present on the system.
- virtual environment (Linux, macOS): Activate your `chia` environment by typing
`source /path/to/your/chia/install/activate`.
- dmg (macOS): Follow [these instructions](https://github.com/Chia-Network/chia-blockchain/wiki/CLI-Commands-Reference#mac)
to add the `chia` binary to the `PATH`
2. Then, install Plotman using the following command:
```shell
> pip install --force-reinstall git+https://github.com/ericaltendorf/plotman@main
```
3. Plotman will look for `plotman.yaml` within your computer at an OS-based
default location. To create a default `plotman.yaml` and display its location,
run the following command:
```shell
> plotman config generate
```
The default configuration file used as a starting point is located [here](./src/plotman/resources/plotman.yaml)
4. That's it! You can now run Plotman by typing `plotman version` to verify its version.
Run `plotman --help` to learn about the available commands.
### Development note:
If you are forking Plotman, simply replace the installation step with `pip install --editable .[dev]` from the project root directory to install *your* version of plotman with test and development extras.
Keywords: chia,blockchain,automation,process management

@@ -241,1 +32,248 @@ Platform: UNKNOWN

Provides-Extra: test
License-File: LICENSE
# `plotman`: a Chia plotting manager
This is a tool for managing [Chia](https://github.com/Chia-Network/chia-blockchain)
plotting operations. The tool runs on the plotting machine and provides
the following functionality:
- Automatic spawning of new plotting jobs, possibly overlapping ("staggered")
on multiple temp directories, rate-limited globally and by per-temp-dir
limits.
- Rsync'ing of newly generated plots to a remote host (a farmer/harvester),
called "archiving".
- Monitoring of ongoing plotting and archiving jobs, progress, resources used,
temp files, etc.
- Control of ongoing plotting jobs (suspend, resume, plus kill and clean up
temp files).
- Both an interactive live dashboard mode as well as command line mode tools.
- (very alpha) Analyzing performance statistics of past jobs, to aggregate on
various plotting parameters or temp dir type.
Plotman is designed for the following configuration:
- A plotting machine with an array of `tmp` dirs, a single `tmp2` dir, and an
array of `dst` dirs to which the plot jobs plot. The `dst` dirs serve as a
temporary buffer space for generated plots.
- A farming machine with a large number of drives, made accessible via an
`rsyncd` module, and to be entirely populated with plots. These are known as
the `archive` directories.
- Plot jobs are run with STDOUT/STDERR redirected to a log file in a configured
directory. This allows analysis of progress (plot phase) as well as timing
(e.g. for analyzing performance).
## Functionality
Plotman tools are stateless. Rather than keep an internal record of what jobs
have been started, Plotman relies on the process tables, open files, and
logfiles of plot jobs to understand "what's going on". This means the tools
can be stopped and started, even from a different login session, without loss
of information. It also means Plotman can see and manage jobs started manually
or by other tools, as long as their STDOUT/STDERR is redirected to a file in a
known logfile directory. (Note: The tool relies on reading the chia plot
command line arguments and the format of the plot tool output. Changes in
those may break this tool.)
Plot scheduling is done by waiting for a certain amount of wall time since the
last job was started, finding the best (e.g. least recently used) `tmp` dir for
plotting, and ensuring that job has progressed to at least a certain point
(e.g., phase 2, subphase 5).
Plots are output to the `dst` dirs, which serve as a temporary buffer until they
are rsync'd ("archived") to the farmer/harvester. The archiver does several
things to attempt to avoid concurrent IO. First, it only allows one rsync
process at a time (more sophisticated scheduling could remove this
restriction, but it's nontrivial). Second, it inspects the pipeline of plot
jobs to see which `dst` dirs are about to have plots written to them. This
is balanced against how full the `dst` drives are in a priority scheme.
It is, obviously, necessary that your rsync bandwidth exceeds your plotting
bandwidth. Given this, in normal operation, the `dst` dirs remain empty until
a plot is finished, after which it is shortly thereafter picked up by the
archive job. However, the decoupling provided by using `dst` drives as a
buffer means that should the farmer/harvester or the network become
unavailable, plotting continues uninterrupted.
## Screenshot Overview
```
Plotman 19:01:06 (refresh 9s/20s) | Plotting: stagger (1623s/1800s) Archival: active pid 1599918
Prefixes: tmp=/mnt/tmp dst=/home/chia/chia/plots archive=/plots (remote)
# plot id k tmp dst wall phase tmp pid stat mem user sys io
0 6b4e7375... 32 03 001 0:27 1:2 71G 1590196 SLP 5.5G 0:52 0:02 0s
1 9ab50d0e... 32 02 005 1:00 1:4 199G 1539209 SLP 5.5G 3:50 0:09 0s
2 018cf561... 32 01 000 1:32 1:5 224G 1530045 SLP 5.5G 4:46 0:11 2s
3 f771de9c... 32 00 004 2:03 1:5 241G 1524772 SLP 5.5G 5:43 0:14 2s
...
16 58045bef... 32 10 002 11:23 3:5 193G 1381622 RUN 5.4G 15:02 0:53 0:02
17 8134a2dd... 32 11 003 11:55 3:6 148G 1372206 RUN 5.4G 15:27 0:57 0:03
18 50165422... 32 08 001 12:43 3:6 102G 1357782 RUN 5.4G 16:14 1:00 0:03
19 100df84f... 32 09 005 13:19 4:0 0 1347430 DSK 705.9M 16:44 1:04 0:06
tmp ready phases tmp ready phases dst plots GB free phases priority
00 -- 1:5, 3:4 06 -- 2:4 000 1 1890 1:5, 2:2, 3:4 47
01 -- 1:5, 3:4 07 -- 2:2 001 0 1998 1:2, 1:7, 3:2, 3:6 34
02 -- 1:4, 3:3 08 -- 1:7, 3:6 002 0 1967 1:6, 2:5, 3:5 42
03 -- 1:2, 3:2 09 -- 2:1, 4:0 003 0 1998 1:6, 3:1, 3:6 34
04 OK 3:1 10 -- 1:6, 3:5 004 0 1998 1:5, 2:4, 3:4 46
05 OK 2:5 11 -- 1:6, 3:6 005 0 1955 1:4, 2:1, 3:3, 4:0 18
Archive dirs free space
000: 94GB | 005: 94GB | 012: 24GB | 017: 99GB | 022: 94GB | 027: 94GB | 032: 9998GB | 037: 9998GB
001: 94GB | 006: 93GB | 013: 25GB | 018: 94GB | 023: 94GB | 028: 94GB | 033: 9998GB |
002: 93GB | 009: 25GB | 014: 93GB | 019: 31GB | 024: 94GB | 029: 7777GB | 034: 9998GB |
003: 94GB | 010: 25GB | 015: 94GB | 020: 47GB | 025: 94GB | 030: 9998GB | 035: 9998GB |
004: 94GB | 011: 25GB | 016: 99GB | 021: 93GB | 026: 94GB | 031: 9998GB | 036: 9998GB |
Log:
01-02 18:33:53 Starting plot job: chia plots create -k 32 -r 8 -u 128 -b 4580 -t /mnt/tmp/03 -2 /mnt/tmp/a -d /home/chi
01-02 18:33:53 Starting archive: rsync --bwlimit=100000 --remove-source-files -P /home/chia/chia/plots/004/plot-k32-202
01-02 18:52:40 Starting archive: rsync --bwlimit=100000 --remove-source-files -P /home/chia/chia/plots/000/plot-k32-202
```
The screenshot shows some of the main features of Plotman.
The first line shows the status. The plotting status shows whether we just
started a plot, or, if not, why not (e.g., stagger time, tmp directories being
ready, etc.; in this case, the 1800s stagger between plots has not been reached
yet). Archival status says whether we are currently archiving (and provides
the `rsync` pid) or whether there are no plots available in the `dst` drives to
archive.
The second line provides a key to some directory abbreviations used throughout.
For `tmp` and `dst` directories, we assume they have a common prefix, which is
computed and indicated here, after which they can be referred to (in context)
by their unique suffix. For example, if we have `tmp` dirs `/mnt/tmp/00`,
`/mnt/tmp/01`, `/mnt/tmp/02`, etc., we show `/mnt/tmp` as the prefix here and
can then talk about `tmp` dirs `00` or `01` etc. The `archive` directories are
the same except that these are paths on a remote host and accessed via an
`rsyncd` module (see `src/plotman/resources/plotman.yaml` for details).
The next table shows information about the active plotting jobs. It is
abbreviated to show the most and least recently started jobs (the full list is
available via the command line mode). It shows various information about the
plot jobs, including the plot ID (first 8 chars), the directories used,
walltime, the current plot phase and subphase, space used on the `tmp` drive,
pid, etc.
The next tables are a bit hard to read; there is actually a `tmp` table on the
left which is split into two tables for rendering purposes, and a `dst` table
on the right. The `tmp` tables show the phases of the plotting jobs using
them, and whether or not they're ready to take a new plot job. The `dst` table
shows how many plots have accumulated, how much free space is left, and the
phases of jobs that are destined to write to them, and finally, the priority
computed for the archive job to move the plots away.
The last table simply shows free space of drives on the remote
harvester/farmer.
Finally, the last section shows a log of actions performed -- namely, plot and
archive jobs initiated. This is the one part of the interactive tool which is
stateful. There is no permanent record of these executed command lines, so if
you start a new interactive plotman session, this log is empty.
## `plotman` commands
To get a complete list of all available commands run:
```shell
plotman -h
```
You can also use `plotman <command> -h` to get help about a specific command, like
```shell
plotman interactive -h
```
## Running `plotman` as a daemon
> _PS: this section assumes that you have already configured `plotman.yaml`._
By default, the command `plotman plot` starts the plotting job and continues to run in the foreground as long as you keep the terminal window open. If you want it to keep running in the background, try the following:
```shell
nohup plotman plot >> ~/plotman.log 2>&1 &
```
## Limitations and Issues
The system is tested on Linux only. Plotman should be generalizable to other
platforms, but this is not done yet. Some of the issues around making calls
out to command line programs (e.g., running `df` over `ssh` to obtain the free
space on the remote archive directories) are very linux-y.
The interactive mode uses the `curses` library ... poorly. Keypresses are
not received, screen resizing does not work, and the minimum terminal size
is pretty big.
Plotman assumes all plots are k32s. Again, this is just an unimplemented
generalization.
Many features are inconsistently supported between either the "interactive"
mode or the command line mode.
There are many bugs and TODOs.
Plotman will always look for the `plotman.yaml` file within your computer at an OS-based
default location. To generate a default `plotman.yaml`, run:
```shell
> plotman config generate
```
To display the current location of your `plotman.yaml` file and check if it exists, run:
```shell
> plotman config path
```
([See also](https://github.com/ericaltendorf/plotman/pull/61#issuecomment-812967363)).
## Installation
Installation for Linux and macOS:
1. Plotman assumes that a functioning [Chia](https://github.com/Chia-Network/chia-blockchain)
installation is present on the system.
- virtual environment (Linux, macOS): Activate your `chia` environment by typing
`source /path/to/your/chia/install/activate`.
- dmg (macOS): Follow [these instructions](https://github.com/Chia-Network/chia-blockchain/wiki/CLI-Commands-Reference#mac)
to add the `chia` binary to the `PATH`
2. Then, install Plotman using the following command:
```shell
> pip install --force-reinstall git+https://github.com/ericaltendorf/plotman@main
```
3. Plotman will look for `plotman.yaml` within your computer at an OS-based
default location. To create a default `plotman.yaml` and display its location,
run the following command:
```shell
> plotman config generate
```
The default configuration file used as a starting point is located [here](./src/plotman/resources/plotman.yaml)
4. That's it! You can now run Plotman by typing `plotman version` to verify its version.
Run `plotman --help` to learn about the available commands.
*Note:* If you see `ModuleNotFoundError: No module named 'readline'` when using `plotman` on [RHEL based linux](https://github.com/ericaltendorf/plotman/issues/195) after installing using [chia's guide](https://github.com/Chia-Network/chia-blockchain/wiki/INSTALL#centos--red-hat--fedora), install `readline-devel` then reinstall chia starting at compiling python in a new build environment; or consider using a project like `pyenv`.
## Basic Usage:
1. Install
2. Generate initial config
3. Configure (default location can be found with `plotman config path`). Options explained in the default config file (step 2)
4. Create log directory specified in `directories: { log: "" }`
5. Start plotman: `plotman plot` or `plotman interactive`
6. Check status: `plotman status`
### Development note:
If you are forking Plotman, simply replace the installation step with `pip install --editable .[dev]` from the project root directory to install *your* version of plotman with test and development extras.

@@ -150,2 +150,21 @@ # `plotman`: a Chia plotting manager

## `plotman` commands
To get a complete list of all available commands run:
```shell
plotman -h
```
You can also use `plotman <command> -h` to get help about a specific command, like
```shell
plotman interactive -h
```
## Running `plotman` as a daemon
> _PS: this section assumes that you have already configured `plotman.yaml`._
By default, the command `plotman plot` starts the plotting job and continues to run in the foreground as long as you keep the terminal window open. If you want it to keep running in the background, try the following:
```shell
nohup plotman plot >> ~/plotman.log 2>&1 &
```
## Limitations and Issues

@@ -207,4 +226,20 @@

*Note:* If you see `ModuleNotFoundError: No module named 'readline'` when using `plotman` on [RHEL based linux](https://github.com/ericaltendorf/plotman/issues/195) after installing using [chia's guide](https://github.com/Chia-Network/chia-blockchain/wiki/INSTALL#centos--red-hat--fedora), install `readline-devel` then reinstall chia starting at compiling python in a new build environment; or consider using a project like `pyenv`.
## Basic Usage:
1. Install
2. Generate initial config
3. Configure (default location can be found with `plotman config path`). Options explained in the default config file (step 2)
4. Create log directory specified in `directories: { log: "" }`
5. Start plotman: `plotman plot` or `plotman interactive`
6. Check status: `plotman status`
### Development note:
If you are forking Plotman, simply replace the installation step with `pip install --editable .[dev]` from the project root directory to install *your* version of plotman with test and development extras.
Metadata-Version: 2.1
Name: plotman
Version: 0.3.1
Version: 0.4
Summary: Chia plotting manager

@@ -10,211 +10,2 @@ Home-page: https://github.com/ericaltendorf/plotman

Project-URL: Changelog, https://github.com/ericaltendorf/plotman/blob/main/CHANGELOG.md
Description: # `plotman`: a Chia plotting manager
This is a tool for managing [Chia](https://github.com/Chia-Network/chia-blockchain)
plotting operations. The tool runs on the plotting machine and provides
the following functionality:
- Automatic spawning of new plotting jobs, possibly overlapping ("staggered")
on multiple temp directories, rate-limited globally and by per-temp-dir
limits.
- Rsync'ing of newly generated plots to a remote host (a farmer/harvester),
called "archiving".
- Monitoring of ongoing plotting and archiving jobs, progress, resources used,
temp files, etc.
- Control of ongoing plotting jobs (suspend, resume, plus kill and clean up
temp files).
- Both an interactive live dashboard mode as well as command line mode tools.
- (very alpha) Analyzing performance statistics of past jobs, to aggregate on
various plotting parameters or temp dir type.
Plotman is designed for the following configuration:
- A plotting machine with an array of `tmp` dirs, a single `tmp2` dir, and an
array of `dst` dirs to which the plot jobs plot. The `dst` dirs serve as a
temporary buffer space for generated plots.
- A farming machine with a large number of drives, made accessible via an
`rsyncd` module, and to be entirely populated with plots. These are known as
the `archive` directories.
- Plot jobs are run with STDOUT/STDERR redirected to a log file in a configured
directory. This allows analysis of progress (plot phase) as well as timing
(e.g. for analyzing performance).
## Functionality
Plotman tools are stateless. Rather than keep an internal record of what jobs
have been started, Plotman relies on the process tables, open files, and
logfiles of plot jobs to understand "what's going on". This means the tools
can be stopped and started, even from a different login session, without loss
of information. It also means Plotman can see and manage jobs started manually
or by other tools, as long as their STDOUT/STDERR is redirected to a file in a
known logfile directory. (Note: The tool relies on reading the chia plot
command line arguments and the format of the plot tool output. Changes in
those may break this tool.)
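The process-table inspection described above boils down to reconstructing a job's parameters from its command line. A minimal sketch (the `parse_plot_args` helper and its behavior are hypothetical, for illustration only — plotman's actual parsing differs):

```python
def parse_plot_args(cmdline):
    """Extract option/value pairs from a `chia plots create` command
    line, the way a stateless monitor can recover a job's parameters
    from the process table alone."""
    opts = {}
    tokens = iter(cmdline)
    for tok in tokens:
        if tok.startswith('-'):
            # Assume each dash option takes a value, which holds for
            # the options shown in this example.
            opts[tok] = next(tokens, None)
    return opts

# A command line as it would appear in the process table:
cmd = ['chia', 'plots', 'create', '-k', '32', '-r', '8',
       '-t', '/mnt/tmp/03', '-d', '/home/chia/chia/plots/001']
params = parse_plot_args(cmd)
```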
Plot scheduling is done by waiting for a certain amount of wall time since the
last job was started, finding the best (e.g. least recently used) `tmp` dir for
plotting, and ensuring that job has progressed to at least a certain point
(e.g., phase 2, subphase 5).
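Sketched in code, the scheduling gate looks roughly like this (the function name and defaults are illustrative; plotman reads the actual thresholds from `plotman.yaml`):

```python
def should_start_plot(now, last_start, youngest_phase,
                      stagger_s=1800, min_phase=(2, 5)):
    """Gate a new plot job: require enough wall time since the last
    start, and require the most recent job on the candidate tmp dir
    to have reached at least the configured (phase, subphase)."""
    if now - last_start < stagger_s:
        return False  # still inside the stagger window
    # Tuple comparison orders by phase first, then subphase.
    return youngest_phase >= min_phase

# e.g. 1623s into an 1800s stagger, so no new job is started yet:
blocked = should_start_plot(now=1623, last_start=0, youngest_phase=(3, 5))
```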
Plots are output to the `dst` dirs, which serve as a temporary buffer until they
are rsync'd ("archived") to the farmer/harvester. The archiver does several
things to attempt to avoid concurrent IO. First, it only allows one rsync
process at a time (more sophisticated scheduling could remove this
restriction, but it's nontrivial). Second, it inspects the pipeline of plot
jobs to see which `dst` dirs are about to have plots written to them. This
is balanced against how full the `dst` drives are in a priority scheme.
It is, obviously, necessary that your rsync bandwidth exceeds your plotting
bandwidth. Given this, in normal operation, the `dst` dirs remain empty until
a plot is finished, after which it is shortly thereafter picked up by the
archive job. However, the decoupling provided by using `dst` drives as a
buffer means that should the farmer/harvester or the network become
unavailable, plotting continues uninterrupted.
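The dst-drive balancing can be illustrated with a toy scoring function (the weights and names here are invented for illustration; plotman's real `compute_priority` uses different inputs and values):

```python
def dst_priority(gb_free, n_plots, inbound_jobs):
    """Score a dst dir for archiving: finished plots raise priority,
    plot jobs about to write into the dir lower it, and a nearly
    full drive gets a bonus so it is drained first."""
    score = 10 * n_plots - 5 * inbound_jobs
    if gb_free < 1000:
        score += 25  # running low on buffer space: prioritize draining
    return score
```

With this scheme a dst dir holding one finished plot outranks an empty one that several in-flight jobs are about to write to, which matches the behavior described above.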
## Screenshot Overview
```
Plotman 19:01:06 (refresh 9s/20s) | Plotting: stagger (1623s/1800s) Archival: active pid 1599918
Prefixes: tmp=/mnt/tmp dst=/home/chia/chia/plots archive=/plots (remote)
# plot id k tmp dst wall phase tmp pid stat mem user sys io
0 6b4e7375... 32 03 001 0:27 1:2 71G 1590196 SLP 5.5G 0:52 0:02 0s
1 9ab50d0e... 32 02 005 1:00 1:4 199G 1539209 SLP 5.5G 3:50 0:09 0s
2 018cf561... 32 01 000 1:32 1:5 224G 1530045 SLP 5.5G 4:46 0:11 2s
3 f771de9c... 32 00 004 2:03 1:5 241G 1524772 SLP 5.5G 5:43 0:14 2s
...
16 58045bef... 32 10 002 11:23 3:5 193G 1381622 RUN 5.4G 15:02 0:53 0:02
17 8134a2dd... 32 11 003 11:55 3:6 148G 1372206 RUN 5.4G 15:27 0:57 0:03
18 50165422... 32 08 001 12:43 3:6 102G 1357782 RUN 5.4G 16:14 1:00 0:03
19 100df84f... 32 09 005 13:19 4:0 0 1347430 DSK 705.9M 16:44 1:04 0:06
tmp ready phases tmp ready phases dst plots GB free phases priority
00 -- 1:5, 3:4 06 -- 2:4 000 1 1890 1:5, 2:2, 3:4 47
01 -- 1:5, 3:4 07 -- 2:2 001 0 1998 1:2, 1:7, 3:2, 3:6 34
02 -- 1:4, 3:3 08 -- 1:7, 3:6 002 0 1967 1:6, 2:5, 3:5 42
03 -- 1:2, 3:2 09 -- 2:1, 4:0 003 0 1998 1:6, 3:1, 3:6 34
04 OK 3:1 10 -- 1:6, 3:5 004 0 1998 1:5, 2:4, 3:4 46
05 OK 2:5 11 -- 1:6, 3:6 005 0 1955 1:4, 2:1, 3:3, 4:0 18
Archive dirs free space
000: 94GB | 005: 94GB | 012: 24GB | 017: 99GB | 022: 94GB | 027: 94GB | 032: 9998GB | 037: 9998GB
001: 94GB | 006: 93GB | 013: 25GB | 018: 94GB | 023: 94GB | 028: 94GB | 033: 9998GB |
002: 93GB | 009: 25GB | 014: 93GB | 019: 31GB | 024: 94GB | 029: 7777GB | 034: 9998GB |
003: 94GB | 010: 25GB | 015: 94GB | 020: 47GB | 025: 94GB | 030: 9998GB | 035: 9998GB |
004: 94GB | 011: 25GB | 016: 99GB | 021: 93GB | 026: 94GB | 031: 9998GB | 036: 9998GB |
Log:
01-02 18:33:53 Starting plot job: chia plots create -k 32 -r 8 -u 128 -b 4580 -t /mnt/tmp/03 -2 /mnt/tmp/a -d /home/chi
01-02 18:33:53 Starting archive: rsync --bwlimit=100000 --remove-source-files -P /home/chia/chia/plots/004/plot-k32-202
01-02 18:52:40 Starting archive: rsync --bwlimit=100000 --remove-source-files -P /home/chia/chia/plots/000/plot-k32-202
```
The screenshot shows some of the main features of Plotman.
The first line shows the status. The plotting status shows whether we just
started a plot, or, if not, why not (e.g., stagger time, tmp directories being
ready, etc.; in this case, the 1800s stagger between plots has not been reached
yet). Archival status says whether we are currently archiving (and provides
the `rsync` pid) or whether there are no plots available in the `dst` drives to
archive.
The second line provides a key to some directory abbreviations used throughout.
For `tmp` and `dst` directories, we assume they have a common prefix, which is
computed and indicated here, after which they can be referred to (in context)
by their unique suffix. For example, if we have `tmp` dirs `/mnt/tmp/00`,
`/mnt/tmp/01`, `/mnt/tmp/02`, etc., we show `/mnt/tmp` as the prefix here and
can then talk about `tmp` dirs `00` or `01` etc. The `archive` directories are
the same except that these are paths on a remote host and accessed via an
`rsyncd` module (see `src/plotman/resources/plotman.yaml` for details).
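The prefix computation is essentially a common-path calculation; for example (using the standard library here, not plotman's internal helper):

```python
import os.path

tmp_dirs = ['/mnt/tmp/00', '/mnt/tmp/01', '/mnt/tmp/02']

# The shared prefix is shown once in the header line...
prefix = os.path.commonpath(tmp_dirs)

# ...and the tables then refer to each dir by its unique suffix.
suffixes = [os.path.relpath(d, prefix) for d in tmp_dirs]
```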
The next table shows information about the active plotting jobs. It is
abbreviated to show the most and least recently started jobs (the full list is
available via the command line mode). It shows various information about the
plot jobs, including the plot ID (first 8 chars), the directories used,
walltime, the current plot phase and subphase, space used on the `tmp` drive,
pid, etc.
The next tables are a bit hard to read; there is actually a `tmp` table on the
left which is split into two tables for rendering purposes, and a `dst` table
on the right. The `tmp` tables show the phases of the plotting jobs using
them, and whether or not they're ready to take a new plot job. The `dst` table
shows how many plots have accumulated, how much free space is left, and the
phases of jobs that are destined to write to them, and finally, the priority
computed for the archive job to move the plots away.
The last table simply shows free space of drives on the remote
harvester/farmer.
Finally, the last section shows a log of actions performed -- namely, plot and
archive jobs initiated. This is the one part of the interactive tool which is
stateful. There is no permanent record of these executed command lines, so if
you start a new interactive plotman session, this log is empty.
## Limitations and Issues
The system is tested on Linux only. Plotman should be generalizable to other
platforms, but this is not done yet. Some of the issues around making calls
out to command line programs (e.g., running `df` over `ssh` to obtain the free
space on the remote archive directories) are very linux-y.
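For instance, the remote free-space check amounts to shelling out to `ssh` and parsing `df` output, roughly like this (a simplified sketch of the 0.3.x approach; the 0.4 diff further below replaces the hard-coded `df`-over-`ssh` call with a configurable disk-space script):

```python
import subprocess

def parse_df_lines(lines):
    """Turn `df -aBK` output lines into {mountpoint: free KiB},
    skipping pseudo-entries that report '-' (not actually mounted)."""
    free = {}
    for line in lines:
        fields = line.split()
        if len(fields) >= 6 and fields[3] != '-':
            # fields[3] is 'Available' with a trailing 'K' from -BK.
            free[fields[5]] = int(fields[3].rstrip('K'))
    return free

def remote_archdir_free(user, host, path):
    # Very linux-y, as noted above: run df on the remote host via ssh.
    cmd = f'ssh {user}@{host} df -aBK | grep " {path}/"'
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return parse_df_lines(out.stdout.splitlines())
```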
The interactive mode uses the `curses` library ... poorly. Keypresses are
not received, screen resizing does not work, and the minimum terminal size
is pretty big.
Plotman assumes all plots are k32s. Again, this is just an unimplemented
generalization.
Many features are supported inconsistently between the "interactive"
mode and the command line mode.
There are many bugs and TODOs.
Plotman will always look for the `plotman.yaml` file within your computer at an OS-based
default location. To generate a default `plotman.yaml`, run:
```shell
> plotman config generate
```
To display the current location of your `plotman.yaml` file and check if it exists, run:
```shell
> plotman config path
```
([See also](https://github.com/ericaltendorf/plotman/pull/61#issuecomment-812967363)).
## Installation
Installation for Linux and macOS:
1. Plotman assumes that a functioning [Chia](https://github.com/Chia-Network/chia-blockchain)
installation is present on the system.
- virtual environment (Linux, macOS): Activate your `chia` environment by typing
`source /path/to/your/chia/install/activate`.
- dmg (macOS): Follow [these instructions](https://github.com/Chia-Network/chia-blockchain/wiki/CLI-Commands-Reference#mac)
to add the `chia` binary to the `PATH`
2. Then, install Plotman using the following command:
```shell
> pip install --force-reinstall git+https://github.com/ericaltendorf/plotman@main
```
3. Plotman will look for `plotman.yaml` within your computer at an OS-based
default location. To create a default `plotman.yaml` and display its location,
run the following command:
```shell
> plotman config generate
```
The default configuration file used as a starting point is located [here](./src/plotman/resources/plotman.yaml).
4. That's it! You can now run Plotman by typing `plotman version` to verify its version.
Run `plotman --help` to learn about the available commands.
### Development note:
If you are forking Plotman, simply replace the installation step with `pip install --editable .[dev]` from the project root directory to install *your* version of plotman with test and development extras.
Keywords: chia,blockchain,automation,process management

@@ -241,1 +32,248 @@ Platform: UNKNOWN

Provides-Extra: test
License-File: LICENSE
# `plotman`: a Chia plotting manager
This is a tool for managing [Chia](https://github.com/Chia-Network/chia-blockchain)
plotting operations. The tool runs on the plotting machine and provides
the following functionality:
- Automatic spawning of new plotting jobs, possibly overlapping ("staggered")
on multiple temp directories, rate-limited globally and by per-temp-dir
limits.
- Rsync'ing of newly generated plots to a remote host (a farmer/harvester),
called "archiving".
- Monitoring of ongoing plotting and archiving jobs, progress, resources used,
temp files, etc.
- Control of ongoing plotting jobs (suspend, resume, plus kill and clean up
temp files).
- Both an interactive live dashboard mode as well as command line mode tools.
- (very alpha) Analyzing performance statistics of past jobs, to aggregate on
various plotting parameters or temp dir type.
Plotman is designed for the following configuration:
- A plotting machine with an array of `tmp` dirs, a single `tmp2` dir, and an
array of `dst` dirs to which the plot jobs plot. The `dst` dirs serve as a
temporary buffer space for generated plots.
- A farming machine with a large number of drives, made accessible via an
`rsyncd` module, and to be entirely populated with plots. These are known as
the `archive` directories.
- Plot jobs are run with STDOUT/STDERR redirected to a log file in a configured
directory. This allows analysis of progress (plot phase) as well as timing
(e.g. for analyzing performance).
## Functionality
Plotman tools are stateless. Rather than keep an internal record of what jobs
have been started, Plotman relies on the process tables, open files, and
logfiles of plot jobs to understand "what's going on". This means the tools
can be stopped and started, even from a different login session, without loss
of information. It also means Plotman can see and manage jobs started manually
or by other tools, as long as their STDOUT/STDERR is redirected to a file in a
known logfile directory. (Note: The tool relies on reading the chia plot
command line arguments and the format of the plot tool output. Changes in
those may break this tool.)
Plot scheduling is done by waiting for a certain amount of wall time since the
last job was started, finding the best (e.g. least recently used) `tmp` dir for
plotting, and ensuring that job has progressed to at least a certain point
(e.g., phase 2, subphase 5).
Plots are output to the `dst` dirs, which serve as a temporary buffer until they
are rsync'd ("archived") to the farmer/harvester. The archiver does several
things to attempt to avoid concurrent IO. First, it only allows one rsync
process at a time (more sophisticated scheduling could remove this
restriction, but it's nontrivial). Second, it inspects the pipeline of plot
jobs to see which `dst` dirs are about to have plots written to them. This
is balanced against how full the `dst` drives are in a priority scheme.
It is, obviously, necessary that your rsync bandwidth exceeds your plotting
bandwidth. Given this, in normal operation, the `dst` dirs remain empty until
a plot is finished, after which it is shortly thereafter picked up by the
archive job. However, the decoupling provided by using `dst` drives as a
buffer means that should the farmer/harvester or the network become
unavailable, plotting continues uninterrupted.
## Screenshot Overview
```
Plotman 19:01:06 (refresh 9s/20s) | Plotting: stagger (1623s/1800s) Archival: active pid 1599918
Prefixes: tmp=/mnt/tmp dst=/home/chia/chia/plots archive=/plots (remote)
# plot id k tmp dst wall phase tmp pid stat mem user sys io
0 6b4e7375... 32 03 001 0:27 1:2 71G 1590196 SLP 5.5G 0:52 0:02 0s
1 9ab50d0e... 32 02 005 1:00 1:4 199G 1539209 SLP 5.5G 3:50 0:09 0s
2 018cf561... 32 01 000 1:32 1:5 224G 1530045 SLP 5.5G 4:46 0:11 2s
3 f771de9c... 32 00 004 2:03 1:5 241G 1524772 SLP 5.5G 5:43 0:14 2s
...
16 58045bef... 32 10 002 11:23 3:5 193G 1381622 RUN 5.4G 15:02 0:53 0:02
17 8134a2dd... 32 11 003 11:55 3:6 148G 1372206 RUN 5.4G 15:27 0:57 0:03
18 50165422... 32 08 001 12:43 3:6 102G 1357782 RUN 5.4G 16:14 1:00 0:03
19 100df84f... 32 09 005 13:19 4:0 0 1347430 DSK 705.9M 16:44 1:04 0:06
tmp ready phases tmp ready phases dst plots GB free phases priority
00 -- 1:5, 3:4 06 -- 2:4 000 1 1890 1:5, 2:2, 3:4 47
01 -- 1:5, 3:4 07 -- 2:2 001 0 1998 1:2, 1:7, 3:2, 3:6 34
02 -- 1:4, 3:3 08 -- 1:7, 3:6 002 0 1967 1:6, 2:5, 3:5 42
03 -- 1:2, 3:2 09 -- 2:1, 4:0 003 0 1998 1:6, 3:1, 3:6 34
04 OK 3:1 10 -- 1:6, 3:5 004 0 1998 1:5, 2:4, 3:4 46
05 OK 2:5 11 -- 1:6, 3:6 005 0 1955 1:4, 2:1, 3:3, 4:0 18
Archive dirs free space
000: 94GB | 005: 94GB | 012: 24GB | 017: 99GB | 022: 94GB | 027: 94GB | 032: 9998GB | 037: 9998GB
001: 94GB | 006: 93GB | 013: 25GB | 018: 94GB | 023: 94GB | 028: 94GB | 033: 9998GB |
002: 93GB | 009: 25GB | 014: 93GB | 019: 31GB | 024: 94GB | 029: 7777GB | 034: 9998GB |
003: 94GB | 010: 25GB | 015: 94GB | 020: 47GB | 025: 94GB | 030: 9998GB | 035: 9998GB |
004: 94GB | 011: 25GB | 016: 99GB | 021: 93GB | 026: 94GB | 031: 9998GB | 036: 9998GB |
Log:
01-02 18:33:53 Starting plot job: chia plots create -k 32 -r 8 -u 128 -b 4580 -t /mnt/tmp/03 -2 /mnt/tmp/a -d /home/chi
01-02 18:33:53 Starting archive: rsync --bwlimit=100000 --remove-source-files -P /home/chia/chia/plots/004/plot-k32-202
01-02 18:52:40 Starting archive: rsync --bwlimit=100000 --remove-source-files -P /home/chia/chia/plots/000/plot-k32-202
```
The screenshot shows some of the main features of Plotman.
The first line shows the status. The plotting status shows whether we just
started a plot, or, if not, why not (e.g., stagger time, tmp directories being
ready, etc.; in this case, the 1800s stagger between plots has not been reached
yet). Archival status says whether we are currently archiving (and provides
the `rsync` pid) or whether there are no plots available in the `dst` drives to
archive.
The second line provides a key to some directory abbreviations used throughout.
For `tmp` and `dst` directories, we assume they have a common prefix, which is
computed and indicated here, after which they can be referred to (in context)
by their unique suffix. For example, if we have `tmp` dirs `/mnt/tmp/00`,
`/mnt/tmp/01`, `/mnt/tmp/02`, etc., we show `/mnt/tmp` as the prefix here and
can then talk about `tmp` dirs `00` or `01` etc. The `archive` directories are
the same except that these are paths on a remote host and accessed via an
`rsyncd` module (see `src/plotman/resources/plotman.yaml` for details).
The next table shows information about the active plotting jobs. It is
abbreviated to show the most and least recently started jobs (the full list is
available via the command line mode). It shows various information about the
plot jobs, including the plot ID (first 8 chars), the directories used,
walltime, the current plot phase and subphase, space used on the `tmp` drive,
pid, etc.
The next tables are a bit hard to read; there is actually a `tmp` table on the
left which is split into two tables for rendering purposes, and a `dst` table
on the right. The `tmp` tables show the phases of the plotting jobs using
them, and whether or not they're ready to take a new plot job. The `dst` table
shows how many plots have accumulated, how much free space is left, and the
phases of jobs that are destined to write to them, and finally, the priority
computed for the archive job to move the plots away.
The last table simply shows free space of drives on the remote
harvester/farmer.
Finally, the last section shows a log of actions performed -- namely, plot and
archive jobs initiated. This is the one part of the interactive tool which is
stateful. There is no permanent record of these executed command lines, so if
you start a new interactive plotman session, this log is empty.
## `plotman` commands
To get a complete list of all available commands run:
```shell
plotman -h
```
You can also use `plotman <command> -h` to get help about a specific command, like
```shell
plotman interactive -h
```
## Running `plotman` as a daemon
> _Note: this section assumes that you have already configured `plotman.yaml`._
By default, the command `plotman plot` starts the plotting job and keeps running in the foreground for as long as you keep the terminal window open. If you want it to keep running after you log out, try the following:
```shell
nohup plotman plot >> ~/plotman.log 2>&1 &
```
## Limitations and Issues
The system is tested on Linux only. Plotman should be generalizable to other
platforms, but this is not done yet. Some of the issues around making calls
out to command line programs (e.g., running `df` over `ssh` to obtain the free
space on the remote archive directories) are very linux-y.
The interactive mode uses the `curses` library ... poorly. Keypresses are
not received, screen resizing does not work, and the minimum terminal size
is pretty big.
Plotman assumes all plots are k32s. Again, this is just an unimplemented
generalization.
Many features are supported inconsistently between the "interactive"
mode and the command line mode.
There are many bugs and TODOs.
Plotman will always look for the `plotman.yaml` file within your computer at an OS-based
default location. To generate a default `plotman.yaml`, run:
```shell
> plotman config generate
```
To display the current location of your `plotman.yaml` file and check if it exists, run:
```shell
> plotman config path
```
([See also](https://github.com/ericaltendorf/plotman/pull/61#issuecomment-812967363)).
## Installation
Installation for Linux and macOS:
1. Plotman assumes that a functioning [Chia](https://github.com/Chia-Network/chia-blockchain)
installation is present on the system.
- virtual environment (Linux, macOS): Activate your `chia` environment by typing
`source /path/to/your/chia/install/activate`.
- dmg (macOS): Follow [these instructions](https://github.com/Chia-Network/chia-blockchain/wiki/CLI-Commands-Reference#mac)
to add the `chia` binary to the `PATH`
2. Then, install Plotman using the following command:
```shell
> pip install --force-reinstall git+https://github.com/ericaltendorf/plotman@main
```
3. Plotman will look for `plotman.yaml` within your computer at an OS-based
default location. To create a default `plotman.yaml` and display its location,
run the following command:
```shell
> plotman config generate
```
The default configuration file used as a starting point is located [here](./src/plotman/resources/plotman.yaml).
4. That's it! You can now run Plotman by typing `plotman version` to verify its version.
Run `plotman --help` to learn about the available commands.
*Note:* If you see `ModuleNotFoundError: No module named 'readline'` when using `plotman` on a [RHEL-based Linux](https://github.com/ericaltendorf/plotman/issues/195) system after installing via [chia's guide](https://github.com/Chia-Network/chia-blockchain/wiki/INSTALL#centos--red-hat--fedora), install `readline-devel` and then reinstall chia, starting from compiling Python in a fresh build environment; or consider using a project like `pyenv`.
## Basic Usage:
1. Install
2. Generate initial config
3. Configure (default location can be found with `plotman config path`). Options explained in the default config file (step 2)
4. Create log directory specified in `directories: { log: "" }`
5. Start plotman: `plotman plot` or `plotman interactive`
6. Check status: `plotman status`
### Development note:
If you are forking Plotman, simply replace the installation step with `pip install --editable .[dev]` from the project root directory to install *your* version of plotman with test and development extras.

@@ -18,2 +18,3 @@ .coveragerc

src/plotman/chia.py
src/plotman/chiapos.py
src/plotman/configuration.py

@@ -44,2 +45,3 @@ src/plotman/interactive.py

src/plotman/resources/plotman.yaml
src/plotman/resources/target_definitions.yaml
util/listlogs

@@ -7,5 +7,5 @@ #!/usr/bin/env python3

"""Plotman module launcher.
This is a shim that allows you to run plotman via
This is a shim that allows you to run plotman via
python3 -m plotman
"""
plotman.main()

@@ -1,2 +0,2 @@

from plotman import archive, configuration, job, manager
from plotman import archive, job

@@ -7,20 +7,1 @@

archive.compute_priority( job.Phase(major=3, minor=6), 1000, 10) )
def test_rsync_dest():
arch_dir = '/plotdir/012'
arch_cfg = configuration.Archive(
rsyncd_module='plots_mod',
rsyncd_path='/plotdir',
rsyncd_host='thehostname',
rsyncd_user='theusername',
rsyncd_bwlimit=80000
)
# Normal usage
assert ('rsync://theusername@thehostname:12000/plots_mod/012' ==
archive.rsync_dest(arch_cfg, arch_dir))
# Usage for constructing just the prefix, for scanning process tables
# for matching jobs.
assert ('rsync://theusername@thehostname:12000/' ==
archive.rsync_dest(arch_cfg, '/'))

@@ -16,9 +16,15 @@ """Tests for plotman/configuration.py"""

def test_get_validated_configs__default(config_text):
@pytest.fixture(name='target_definitions_text')
def target_definitions_text_fixture():
return importlib.resources.read_text(
plotman_resources, "target_definitions.yaml",
)
def test_get_validated_configs__default(config_text, target_definitions_text):
"""Check that get_validated_configs() works with default/example plotman.yaml file."""
res = configuration.get_validated_configs(config_text, '')
res = configuration.get_validated_configs(config_text, '', target_definitions_text)
assert isinstance(res, configuration.PlotmanConfig)
def test_get_validated_configs__malformed(config_text):
def test_get_validated_configs__malformed(config_text, target_definitions_text):
"""Check that get_validated_configs() raises exception with invalid plotman.yaml contents."""

@@ -32,3 +38,3 @@ loaded_yaml = yaml.load(config_text, Loader=yaml.SafeLoader)

with pytest.raises(configuration.ConfigurationException) as exc_info:
configuration.get_validated_configs(malformed_config_text, '/the_path')
configuration.get_validated_configs(malformed_config_text, '/the_path', target_definitions_text)

@@ -49,3 +55,3 @@ assert exc_info.value.args[0] == f"Config file at: '/the_path' is malformed"

def test_loads_without_user_interface(config_text):
def test_loads_without_user_interface(config_text, target_definitions_text):
loaded_yaml = yaml.load(config_text, Loader=yaml.SafeLoader)

@@ -57,4 +63,46 @@

reloaded_yaml = configuration.get_validated_configs(stripped_config_text, '')
reloaded_yaml = configuration.get_validated_configs(stripped_config_text, '', target_definitions_text)
assert reloaded_yaml.user_interface == configuration.UserInterface()
def test_loads_without_user_archiving(config_text, target_definitions_text):
loaded_yaml = yaml.load(config_text, Loader=yaml.SafeLoader)
del loaded_yaml["archiving"]
stripped_config_text = yaml.dump(loaded_yaml, Dumper=yaml.SafeDumper)
reloaded_yaml = configuration.get_validated_configs(stripped_config_text, '', target_definitions_text)
assert reloaded_yaml.archiving is None
def test_get_dst_directories_gets_dst():
tmp = ['/tmp']
dst = ['/dst0', '/dst1']
directories = configuration.Directories(tmp=tmp, dst=dst)
assert directories.get_dst_directories() == dst
def test_get_dst_directories_gets_tmp():
tmp = ['/tmp']
directories = configuration.Directories(tmp=tmp)
assert directories.get_dst_directories() == tmp
def test_dst_is_dst():
tmp = ['/tmp']
dst = ['/dst0', '/dst1']
directories = configuration.Directories(tmp=tmp, dst=dst)
assert not directories.dst_is_tmp()
def test_dst_is_tmp():
tmp = ['/tmp']
directories = configuration.Directories(tmp=tmp)
assert directories.dst_is_tmp()

@@ -23,3 +23,2 @@ # TODO: migrate away from unittest patch

return configuration.Directories(
log="/plots/log",
tmp=["/var/tmp", "/tmp"],

@@ -89,4 +88,4 @@ dst=["/mnt/dst/00", "/mnt/dst/01", "/mnt/dst/03"],

'/plots3' : (4, 1) } )
def test_dstdirs_to_youngest_phase():

@@ -93,0 +92,0 @@ all_jobs = [ job_w_dstdir_phase('/plots1', (1, 5)),

@@ -13,3 +13,6 @@ import os

assert (plot_util.human_format(354, 0) == '354')
assert (plot_util.human_format(354, 0, True) == '354')
assert (plot_util.human_format(354, 2) == '354.00')
assert (plot_util.human_format(422399296143, 2) == '422.40G')
assert (plot_util.human_format(422399296143, 2, True) == '393.39Gi')

@@ -58,1 +61,8 @@ def test_time_format():

'/t/plot-k32-5.plot' ] )
def test_get_plotsize():
assert (
[659272492, 107287518791, 221143636517, 455373353413, 936816632588]
== [plot_util.get_plotsize(n) for n in [25, 32, 33, 34, 35]]
)
import argparse
import contextlib
import logging
import math
import os
import posixpath
import random

@@ -11,33 +13,67 @@ import re

import pendulum
import psutil
import texttable as tt
from plotman import job, manager, plot_util
from plotman import configuration, job, manager, plot_util
logger = logging.getLogger(__name__)
_WINDOWS = sys.platform == 'win32'
# TODO : write-protect and delete-protect archived plots
def spawn_archive_process(dir_cfg, all_jobs):
'''Spawns a new archive process using the command created
def spawn_archive_process(dir_cfg, arch_cfg, log_cfg, all_jobs):
'''Spawns a new archive process using the command created
in the archive() function. Returns archiving status and a log message to print.'''
log_message = None
log_messages = []
archiving_status = None
# Look for running archive jobs. Be robust to finding more than one
# even though the scheduler should only run one at a time.
arch_jobs = get_running_archive_jobs(dir_cfg.archive)
arch_jobs = get_running_archive_jobs(arch_cfg)
if not arch_jobs:
(should_start, status_or_cmd) = archive(dir_cfg, all_jobs)
(should_start, status_or_cmd, archive_log_messages) = archive(dir_cfg, arch_cfg, all_jobs)
log_messages.extend(archive_log_messages)
if not should_start:
archiving_status = status_or_cmd
else:
cmd = status_or_cmd
# TODO: do something useful with output instead of DEVNULL
p = subprocess.Popen(cmd,
args = status_or_cmd
log_file_path = log_cfg.create_transfer_log_path(time=pendulum.now())
log_messages.append(f'Starting archive: {args["args"]} ; logging to {log_file_path}')
# TODO: CAMPid 09840103109429840981397487498131
try:
open_log_file = open(log_file_path, 'x')
except FileExistsError:
log_messages.append(
f'Archiving log file already exists, skipping attempt to start a'
f' new archive transfer: {log_file_path!r}'
)
return (False, log_messages)
except FileNotFoundError as e:
message = (
f'Unable to open log file. Verify that the directory exists'
f' and has proper write permissions: {log_file_path!r}'
)
raise Exception(message) from e
# Preferably, do not add any code between the try block above
# and the with block below. IOW, this space intentionally left
# blank... As is, this provides a good chance that our handle
# of the log file will get closed explicitly while still
# allowing handling of just the log file opening error.
with open_log_file:
# start_new_sessions to make the job independent of this controlling tty.
p = subprocess.Popen(**args,
shell=True,
stdout=subprocess.DEVNULL,
stdout=open_log_file,
stderr=subprocess.STDOUT,
start_new_session=True)
log_message = 'Starting archive: ' + cmd
start_new_session=True,
creationflags=0 if not _WINDOWS else subprocess.CREATE_NO_WINDOW)
# At least for now it seems that even if we get a new running

@@ -54,4 +90,4 @@ # archive jobs list it doesn't contain the new rsync process.

return archiving_status, log_message
return archiving_status, log_messages
def compute_priority(phase, gb_free, n_plots):

@@ -76,3 +112,3 @@ # All these values are designed around dst buffer dirs of about

priority -= 32
# If a drive is getting full, we should prioritize it

@@ -91,24 +127,51 @@ if (gb_free < 1000):

def get_archdir_freebytes(arch_cfg):
log_messages = []
target = arch_cfg.target_definition()
archdir_freebytes = {}
df_cmd = ('ssh %s@%s df -aBK | grep " %s/"' %
(arch_cfg.rsyncd_user, arch_cfg.rsyncd_host, arch_cfg.rsyncd_path) )
with subprocess.Popen(df_cmd, shell=True, stdout=subprocess.PIPE) as proc:
for line in proc.stdout.readlines():
fields = line.split()
if fields[3] == b'-':
# not actually mounted
timeout = 5
try:
completed_process = subprocess.run(
[target.disk_space_path],
env={**os.environ, **arch_cfg.environment()},
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
timeout=timeout,
)
except subprocess.TimeoutExpired as e:
log_messages.append(f'Disk space check timed out in {timeout} seconds')
if e.stdout is None:
stdout = ''
else:
stdout = e.stdout.decode('utf-8', errors='ignore').strip()
if e.stderr is None:
stderr = ''
else:
stderr = e.stderr.decode('utf-8', errors='ignore').strip()
else:
stdout = completed_process.stdout.decode('utf-8', errors='ignore').strip()
stderr = completed_process.stderr.decode('utf-8', errors='ignore').strip()
for line in stdout.splitlines():
line = line.strip()
split = line.split(':')
if len(split) != 2:
log_messages.append(f'Unable to parse disk script line: {line!r}')
continue
freebytes = int(fields[3][:-1]) * 1024 # Strip the final 'K'
archdir = (fields[5]).decode('utf-8')
archdir_freebytes[archdir] = freebytes
return archdir_freebytes
archdir, space = split
freebytes = int(space)
archdir_freebytes[archdir.strip()] = freebytes
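The new disk-space path replaces `df` output parsing with a simple `archdir:freebytes` line protocol emitted by a user-supplied script. A standalone sketch of the parser (the `int()` guard against non-numeric sizes is an illustrative addition; the hunk above lets it raise):

```python
def parse_disk_space_output(stdout):
    """Parse lines of the form '<archdir>:<free bytes>' into a dict,
    collecting messages for malformed lines instead of raising."""
    archdir_freebytes = {}
    log_messages = []
    for line in stdout.splitlines():
        line = line.strip()
        if not line:
            continue
        split = line.split(':')
        if len(split) != 2:
            log_messages.append(f'Unable to parse disk script line: {line!r}')
            continue
        archdir, space = split
        try:
            freebytes = int(space)
        except ValueError:
            log_messages.append(f'Unable to parse disk script line: {line!r}')
            continue
        archdir_freebytes[archdir.strip()] = freebytes
    return archdir_freebytes, log_messages
```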
def rsync_dest(arch_cfg, arch_dir):
rsync_path = arch_dir.replace(arch_cfg.rsyncd_path, arch_cfg.rsyncd_module)
if rsync_path.startswith('/'):
rsync_path = rsync_path[1:] # Avoid dup slashes. TODO use path join?
rsync_url = 'rsync://%s@%s:12000/%s' % (
arch_cfg.rsyncd_user, arch_cfg.rsyncd_host, rsync_path)
return rsync_url
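The legacy `rsync_dest` helper is pure string manipulation and easy to verify in isolation; this sketch inlines the `arch_cfg` fields as parameters (port 12000 is hard-coded, as in the original):

```python
def rsync_dest(rsyncd_user, rsyncd_host, rsyncd_path, rsyncd_module, arch_dir):
    """Map a local archive dir onto the rsyncd module and build the URL."""
    rsync_path = arch_dir.replace(rsyncd_path, rsyncd_module)
    if rsync_path.startswith('/'):
        rsync_path = rsync_path[1:]  # avoid duplicate slashes
    return 'rsync://%s@%s:12000/%s' % (rsyncd_user, rsyncd_host, rsync_path)
```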
for line in log_messages:
logger.info(line)
logger.info('stdout from disk space script:')
for line in stdout.splitlines():
logger.info(f' {line}')
logger.info('stderr from disk space script:')
for line in stderr.splitlines():
logger.info(f' {line}')
return archdir_freebytes, log_messages
# TODO: maybe consolidate with similar code in job.py?

@@ -119,19 +182,24 @@ def get_running_archive_jobs(arch_cfg):

jobs = []
dest = rsync_dest(arch_cfg, '/')
for proc in psutil.process_iter(['pid', 'name']):
target = arch_cfg.target_definition()
variables = {**os.environ, **arch_cfg.environment()}
dest = target.transfer_process_argument_prefix.format(**variables)
proc_name = target.transfer_process_name.format(**variables)
for proc in psutil.process_iter():
with contextlib.suppress(psutil.NoSuchProcess):
if proc.name() == 'rsync':
args = proc.cmdline()
for arg in args:
if arg.startswith(dest):
jobs.append(proc.pid)
with proc.oneshot():
if proc.name() == proc_name:
args = proc.cmdline()
for arg in args:
if arg.startswith(dest):
jobs.append(proc.pid)
return jobs
def archive(dir_cfg, all_jobs):
def archive(dir_cfg, arch_cfg, all_jobs):
'''Configure one archive job. Needs to know all jobs so it can avoid IO
contention on the plotting dstdir drives. Returns either (False, <reason>)
if we should not execute an archive job or (True, <cmd>) with the archive
command if we should.'''
if dir_cfg.archive is None:
return (False, "No 'archive' settings declared in plotman.yaml")
log_messages = []
if arch_cfg is None:
return (False, "No 'archive' settings declared in plotman.yaml", log_messages)

@@ -141,4 +209,4 @@ dir2ph = manager.dstdirs_to_furthest_phase(all_jobs)

chosen_plot = None
for d in dir_cfg.dst:
dst_dir = dir_cfg.get_dst_directories()
for d in dst_dir:
ph = dir2ph.get(d, job.Phase(0, 0))

@@ -148,3 +216,3 @@ dir_plots = plot_util.list_k32_plots(d)

n_plots = len(dir_plots)
priority = compute_priority(ph, gb_free, n_plots)
if priority >= best_priority and dir_plots:

@@ -155,3 +223,3 @@ best_priority = priority

if not chosen_plot:
return (False, 'No plots found')
return (False, 'No plots found', log_messages)

@@ -164,23 +232,30 @@ # TODO: sanity check that archive machine is available

#
archdir_freebytes = get_archdir_freebytes(dir_cfg.archive)
archdir_freebytes, freebytes_log_messages = get_archdir_freebytes(arch_cfg)
log_messages.extend(freebytes_log_messages)
if not archdir_freebytes:
return(False, 'No free archive dirs found.')
return(False, 'No free archive dirs found.', log_messages)
archdir = ''
available = [(d, space) for (d, space) in archdir_freebytes.items() if
space > 1.2 * plot_util.get_k32_plotsize()]
chosen_plot_size = os.stat(chosen_plot).st_size
# 10MB is big enough to outsize filesystem block sizes hopefully, but small
# enough to make this a pretty tight corner for people to get stuck in.
free_space_margin = 10_000_000
available = [(d, space) for (d, space) in archdir_freebytes.items() if
space > (chosen_plot_size + free_space_margin)]
if len(available) > 0:
index = min(dir_cfg.archive.index, len(available) - 1)
index = min(arch_cfg.index, len(available) - 1)
(archdir, freespace) = sorted(available)[index]
if not archdir:
return(False, 'No archive directories found with enough free space')
msg = 'Found %s with ~%d GB free' % (archdir, freespace / plot_util.GB)
return(False, 'No archive directories found with enough free space', log_messages)
bwlimit = dir_cfg.archive.rsyncd_bwlimit
throttle_arg = ('--bwlimit=%d' % bwlimit) if bwlimit else ''
cmd = ('rsync %s --compress-level=0 --remove-source-files -P %s %s' %
(throttle_arg, chosen_plot, rsync_dest(dir_cfg.archive, archdir)))
env = arch_cfg.environment(
source=chosen_plot,
destination=archdir,
)
subprocess_arguments = {
'args': arch_cfg.target_definition().transfer_path,
'env': {**os.environ, **env}
}
return (True, cmd)
return (True, subprocess_arguments, log_messages)
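The directory-selection logic above filters candidates by `plot size + margin` and then clamps the configured `index` into range before indexing the sorted list. Extracted as a pure function (names are illustrative):

```python
def choose_archdir(archdir_freebytes, plot_size, index, margin=10_000_000):
    """Pick an archive dir with room for the plot plus a safety margin.

    Filter by free bytes, sort by path, clamp the configured index into
    range. Returns None when nothing fits.
    """
    available = [
        (d, space)
        for (d, space) in archdir_freebytes.items()
        if space > plot_size + margin
    ]
    if not available:
        return None
    clamped = min(index, len(available) - 1)
    archdir, _freespace = sorted(available)[clamped]
    return archdir
```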

@@ -35,4 +35,4 @@ import functools

@click.command()
# https://github.com/Chia-Network/chia-blockchain/blob/v1.1.2/LICENSE
# https://github.com/Chia-Network/chia-blockchain/blob/v1.1.2/chia/cmds/plots.py#L39-L83
# https://github.com/Chia-Network/chia-blockchain/blob/1.1.2/LICENSE
# https://github.com/Chia-Network/chia-blockchain/blob/1.1.2/chia/cmds/plots.py#L39-L83
# start copied code

@@ -91,4 +91,4 @@ @click.option("-k", "--size", help="Plot size", type=int, default=32, show_default=True)

@click.command()
# https://github.com/Chia-Network/chia-blockchain/blob/v1.1.3/LICENSE
# https://github.com/Chia-Network/chia-blockchain/blob/v1.1.3/chia/cmds/plots.py#L39-L83
# https://github.com/Chia-Network/chia-blockchain/blob/1.1.3/LICENSE
# https://github.com/Chia-Network/chia-blockchain/blob/1.1.3/chia/cmds/plots.py#L39-L83
# start copied code

@@ -143,1 +143,221 @@ @click.option("-k", "--size", help="Plot size", type=int, default=32, show_default=True)

pass
@commands.register(version=(1, 1, 4))
@click.command()
# https://github.com/Chia-Network/chia-blockchain/blob/1.1.4/LICENSE
# https://github.com/Chia-Network/chia-blockchain/blob/1.1.4/chia/cmds/plots.py#L39-L83
# start copied code
@click.option("-k", "--size", help="Plot size", type=int, default=32, show_default=True)
@click.option("--override-k", help="Force size smaller than 32", default=False, show_default=True, is_flag=True)
@click.option("-n", "--num", help="Number of plots or challenges", type=int, default=1, show_default=True)
@click.option("-b", "--buffer", help="Megabytes for sort/plot buffer", type=int, default=3389, show_default=True)
@click.option("-r", "--num_threads", help="Number of threads to use", type=int, default=2, show_default=True)
@click.option("-u", "--buckets", help="Number of buckets", type=int, default=128, show_default=True)
@click.option(
"-a",
"--alt_fingerprint",
type=int,
default=None,
help="Enter the alternative fingerprint of the key you want to use",
)
@click.option(
"-c",
"--pool_contract_address",
type=str,
default=None,
help="Address of where the pool reward will be sent to. Only used if alt_fingerprint and pool public key are None",
)
@click.option("-f", "--farmer_public_key", help="Hex farmer public key", type=str, default=None)
@click.option("-p", "--pool_public_key", help="Hex public key of pool", type=str, default=None)
@click.option(
"-t",
"--tmp_dir",
help="Temporary directory for plotting files",
type=click.Path(),
default=Path("."),
show_default=True,
)
@click.option("-2", "--tmp2_dir", help="Second temporary directory for plotting files", type=click.Path(), default=None)
@click.option(
"-d",
"--final_dir",
help="Final directory for plots (relative or absolute)",
type=click.Path(),
default=Path("."),
show_default=True,
)
@click.option("-i", "--plotid", help="PlotID in hex for reproducing plots (debugging only)", type=str, default=None)
@click.option("-m", "--memo", help="Memo in hex for reproducing plots (debugging only)", type=str, default=None)
@click.option("-e", "--nobitfield", help="Disable bitfield", default=False, is_flag=True)
@click.option(
"-x", "--exclude_final_dir", help="Skips adding [final dir] to harvester for farming", default=False, is_flag=True
)
# end copied code
def _cli():
pass
@commands.register(version=(1, 1, 5))
@click.command()
# https://github.com/Chia-Network/chia-blockchain/blob/1.1.5/LICENSE
# https://github.com/Chia-Network/chia-blockchain/blob/1.1.5/chia/cmds/plots.py#L39-L83
# start copied code
@click.option("-k", "--size", help="Plot size", type=int, default=32, show_default=True)
@click.option("--override-k", help="Force size smaller than 32", default=False, show_default=True, is_flag=True)
@click.option("-n", "--num", help="Number of plots or challenges", type=int, default=1, show_default=True)
@click.option("-b", "--buffer", help="Megabytes for sort/plot buffer", type=int, default=3389, show_default=True)
@click.option("-r", "--num_threads", help="Number of threads to use", type=int, default=2, show_default=True)
@click.option("-u", "--buckets", help="Number of buckets", type=int, default=128, show_default=True)
@click.option(
"-a",
"--alt_fingerprint",
type=int,
default=None,
help="Enter the alternative fingerprint of the key you want to use",
)
@click.option(
"-c",
"--pool_contract_address",
type=str,
default=None,
help="Address of where the pool reward will be sent to. Only used if alt_fingerprint and pool public key are None",
)
@click.option("-f", "--farmer_public_key", help="Hex farmer public key", type=str, default=None)
@click.option("-p", "--pool_public_key", help="Hex public key of pool", type=str, default=None)
@click.option(
"-t",
"--tmp_dir",
help="Temporary directory for plotting files",
type=click.Path(),
default=Path("."),
show_default=True,
)
@click.option("-2", "--tmp2_dir", help="Second temporary directory for plotting files", type=click.Path(), default=None)
@click.option(
"-d",
"--final_dir",
help="Final directory for plots (relative or absolute)",
type=click.Path(),
default=Path("."),
show_default=True,
)
@click.option("-i", "--plotid", help="PlotID in hex for reproducing plots (debugging only)", type=str, default=None)
@click.option("-m", "--memo", help="Memo in hex for reproducing plots (debugging only)", type=str, default=None)
@click.option("-e", "--nobitfield", help="Disable bitfield", default=False, is_flag=True)
@click.option(
"-x", "--exclude_final_dir", help="Skips adding [final dir] to harvester for farming", default=False, is_flag=True
)
# end copied code
def _cli():
pass
@commands.register(version=(1, 1, 6))
@click.command()
# https://github.com/Chia-Network/chia-blockchain/blob/1.1.6/LICENSE
# https://github.com/Chia-Network/chia-blockchain/blob/1.1.6/chia/cmds/plots.py#L39-L83
# start copied code
@click.option("-k", "--size", help="Plot size", type=int, default=32, show_default=True)
@click.option("--override-k", help="Force size smaller than 32", default=False, show_default=True, is_flag=True)
@click.option("-n", "--num", help="Number of plots or challenges", type=int, default=1, show_default=True)
@click.option("-b", "--buffer", help="Megabytes for sort/plot buffer", type=int, default=3389, show_default=True)
@click.option("-r", "--num_threads", help="Number of threads to use", type=int, default=2, show_default=True)
@click.option("-u", "--buckets", help="Number of buckets", type=int, default=128, show_default=True)
@click.option(
"-a",
"--alt_fingerprint",
type=int,
default=None,
help="Enter the alternative fingerprint of the key you want to use",
)
@click.option(
"-c",
"--pool_contract_address",
type=str,
default=None,
help="Address of where the pool reward will be sent to. Only used if alt_fingerprint and pool public key are None",
)
@click.option("-f", "--farmer_public_key", help="Hex farmer public key", type=str, default=None)
@click.option("-p", "--pool_public_key", help="Hex public key of pool", type=str, default=None)
@click.option(
"-t",
"--tmp_dir",
help="Temporary directory for plotting files",
type=click.Path(),
default=Path("."),
show_default=True,
)
@click.option("-2", "--tmp2_dir", help="Second temporary directory for plotting files", type=click.Path(), default=None)
@click.option(
"-d",
"--final_dir",
help="Final directory for plots (relative or absolute)",
type=click.Path(),
default=Path("."),
show_default=True,
)
@click.option("-i", "--plotid", help="PlotID in hex for reproducing plots (debugging only)", type=str, default=None)
@click.option("-m", "--memo", help="Memo in hex for reproducing plots (debugging only)", type=str, default=None)
@click.option("-e", "--nobitfield", help="Disable bitfield", default=False, is_flag=True)
@click.option(
"-x", "--exclude_final_dir", help="Skips adding [final dir] to harvester for farming", default=False, is_flag=True
)
# end copied code
def _cli():
pass
@commands.register(version=(1, 1, 7))
@click.command()
# https://github.com/Chia-Network/chia-blockchain/blob/1.1.7/LICENSE
# https://github.com/Chia-Network/chia-blockchain/blob/1.1.7/chia/cmds/plots.py#L39-L83
# start copied code
@click.option("-k", "--size", help="Plot size", type=int, default=32, show_default=True)
@click.option("--override-k", help="Force size smaller than 32", default=False, show_default=True, is_flag=True)
@click.option("-n", "--num", help="Number of plots or challenges", type=int, default=1, show_default=True)
@click.option("-b", "--buffer", help="Megabytes for sort/plot buffer", type=int, default=3389, show_default=True)
@click.option("-r", "--num_threads", help="Number of threads to use", type=int, default=2, show_default=True)
@click.option("-u", "--buckets", help="Number of buckets", type=int, default=128, show_default=True)
@click.option(
"-a",
"--alt_fingerprint",
type=int,
default=None,
help="Enter the alternative fingerprint of the key you want to use",
)
@click.option(
"-c",
"--pool_contract_address",
type=str,
default=None,
help="Address of where the pool reward will be sent to. Only used if alt_fingerprint and pool public key are None",
)
@click.option("-f", "--farmer_public_key", help="Hex farmer public key", type=str, default=None)
@click.option("-p", "--pool_public_key", help="Hex public key of pool", type=str, default=None)
@click.option(
"-t",
"--tmp_dir",
help="Temporary directory for plotting files",
type=click.Path(),
default=Path("."),
show_default=True,
)
@click.option("-2", "--tmp2_dir", help="Second temporary directory for plotting files", type=click.Path(), default=None)
@click.option(
"-d",
"--final_dir",
help="Final directory for plots (relative or absolute)",
type=click.Path(),
default=Path("."),
show_default=True,
)
@click.option("-i", "--plotid", help="PlotID in hex for reproducing plots (debugging only)", type=str, default=None)
@click.option("-m", "--memo", help="Memo in hex for reproducing plots (debugging only)", type=str, default=None)
@click.option("-e", "--nobitfield", help="Disable bitfield", default=False, is_flag=True)
@click.option(
"-x", "--exclude_final_dir", help="Skips adding [final dir] to harvester for farming", default=False, is_flag=True
)
# end copied code
def _cli():
pass
import contextlib
import importlib
import os
import stat
import tempfile
import textwrap
from typing import Dict, List, Optional

@@ -10,3 +15,5 @@

from plotman import resources as plotman_resources
class ConfigurationException(Exception):

@@ -32,3 +39,3 @@ """Raised when plotman.yaml configuration is missing or malformed."""

def get_validated_configs(config_text, config_path):
def get_validated_configs(config_text, config_path, preset_target_definitions_text):
"""Return a validated instance of PlotmanConfig with data from plotman.yaml

@@ -41,2 +48,14 @@

version = config_objects.get('version', (0,))
expected_major_version = 1
if version[0] != expected_major_version:
message = textwrap.dedent(f"""\
Expected major version {expected_major_version}, found version {version}
See https://github.com/ericaltendorf/plotman/wiki/Configuration#versions
""")
raise Exception(message)
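The version gate can be exercised on its own; this sketch assumes the same `(0,)` default for configs that predate the `version` key:

```python
def check_config_version(config_objects, expected_major_version=1):
    """Reject configs whose major version doesn't match.

    Configs written before the `version` key existed default to (0,)
    and are rejected.
    """
    version = config_objects.get('version', (0,))
    if version[0] != expected_major_version:
        raise Exception(
            f'Expected major version {expected_major_version}, '
            f'found version {version}'
        )
    return version
```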
try:

@@ -49,16 +68,124 @@ loaded = schema.load(config_objects)

if loaded.archiving is not None:
preset_target_objects = yaml.safe_load(preset_target_definitions_text)
preset_target_schema = desert.schema(PresetTargetDefinitions)
preset_target_definitions = preset_target_schema.load(preset_target_objects)
loaded.archiving.target_definitions = {
**preset_target_definitions.target_definitions,
**loaded.archiving.target_definitions,
}
return loaded
class CustomStringField(marshmallow.fields.String):
def _deserialize(self, value, attr, data, **kwargs):
if isinstance(value, int):
value = str(value)
return super()._deserialize(value, attr, data, **kwargs)
# Data models used for deserializing/formatting plotman.yaml files.
# TODO: bah, mutable? bah.
@attr.mutable
class ArchivingTarget:
transfer_process_name: str
transfer_process_argument_prefix: str
# TODO: mutable attribute...
env: Dict[str, Optional[str]] = desert.ib(
factory=dict,
marshmallow_field=marshmallow.fields.Dict(
keys=marshmallow.fields.String(),
values=CustomStringField(allow_none=True),
),
)
disk_space_path: Optional[str] = None
disk_space_script: Optional[str] = None
transfer_path: Optional[str] = None
transfer_script: Optional[str] = None
@attr.frozen
class Archive:
rsyncd_module: str
rsyncd_path: str
rsyncd_bwlimit: int
rsyncd_host: str
rsyncd_user: str
class PresetTargetDefinitions:
target_definitions: Dict[str, ArchivingTarget] = attr.ib(factory=dict)
# TODO: bah, mutable? bah.
@attr.mutable
class Archiving:
target: str
# TODO: mutable attribute...
env: Dict[str, str] = desert.ib(
factory=dict,
marshmallow_field=marshmallow.fields.Dict(
keys=marshmallow.fields.String(),
values=CustomStringField(),
),
)
index: int = 0 # If not explicit, "index" will default to 0
target_definitions: Dict[str, ArchivingTarget] = attr.ib(factory=dict)
def target_definition(self):
return self.target_definitions[self.target]
def environment(
self,
source=None,
destination=None,
):
target = self.target_definition()
complete = {**target.env, **self.env}
missing_mandatory_keys = [
key
for key, value in complete.items()
if value is None
]
if len(missing_mandatory_keys) > 0:
target = repr(self.target)
missing = ', '.join(repr(key) for key in missing_mandatory_keys)
message = f'Missing env options for archival target {target}: {missing}'
raise Exception(message)
variables = {**os.environ, **complete}
complete['process_name'] = target.transfer_process_name.format(**variables)
if source is not None:
complete['source'] = source
if destination is not None:
complete['destination'] = destination
return complete
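The env-merging convention introduced here uses `None` in a target's defaults to mark a key as mandatory: if the user's overrides don't fill it in, configuration fails early. A minimal sketch of that merge-and-validate step:

```python
def build_environment(target_env, user_env):
    """Merge target defaults with user overrides; None means mandatory.

    A key whose merged value is still None was never supplied by the
    user, so raise instead of passing an incomplete env downstream.
    """
    complete = {**target_env, **user_env}
    missing = [key for key, value in complete.items() if value is None]
    if missing:
        raise Exception(f'Missing env options: {", ".join(missing)}')
    return complete
```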
def maybe_create_scripts(self, temp):
rwx = stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR
target = self.target_definition()
if target.disk_space_path is None:
with tempfile.NamedTemporaryFile(
mode='w',
encoding='utf-8',
prefix='plotman-disk-space-script',
delete=False,
dir=temp,
) as disk_space_script_file:
disk_space_script_file.write(target.disk_space_script)
target.disk_space_path = disk_space_script_file.name
os.chmod(target.disk_space_path, rwx)
if target.transfer_path is None:
with tempfile.NamedTemporaryFile(
mode='w',
encoding='utf-8',
prefix='plotman-transfer-script',
delete=False,
dir=temp,
) as transfer_script_file:
transfer_script_file.write(target.transfer_script)
target.transfer_path = transfer_script_file.name
os.chmod(target.transfer_path, rwx)
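`maybe_create_scripts` materializes inline YAML script bodies as executable temp files: `delete=False` keeps the file around so a later subprocess can run it, then `chmod` marks it user-rwx. A reduced sketch of that idiom:

```python
import os
import stat
import tempfile

def write_executable_script(body, prefix, directory=None):
    """Write `body` to a kept temp file and mark it user-rwx."""
    rwx = stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR
    with tempfile.NamedTemporaryFile(
        mode='w',
        encoding='utf-8',
        prefix=prefix,
        delete=False,  # keep the file for a later subprocess to execute
        dir=directory,
    ) as script_file:
        script_file.write(body)
        path = script_file.name
    os.chmod(path, rwx)
    return path
```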
@attr.frozen

@@ -69,10 +196,54 @@ class TmpOverrides:

@attr.frozen
class Logging:
plots: str = os.path.join(appdirs.user_data_dir("plotman"), 'plots')
transfers: str = os.path.join(appdirs.user_data_dir("plotman"), 'transfers')
application: str = os.path.join(appdirs.user_log_dir("plotman"), 'plotman.log')
def setup(self):
os.makedirs(self.plots, exist_ok=True)
os.makedirs(self.transfers, exist_ok=True)
os.makedirs(os.path.dirname(self.application), exist_ok=True)
def create_plot_log_path(self, time):
return self._create_log_path(
time=time,
directory=self.plots,
group='plot',
)
def create_transfer_log_path(self, time):
return self._create_log_path(
time=time,
directory=self.transfers,
group='transfer',
)
def _create_log_path(self, time, directory, group):
timestamp = time.isoformat(timespec='microseconds').replace(':', '_')
return os.path.join(directory, f'{timestamp}.{group}.log')
@attr.frozen
class Directories:
log: str
tmp: List[str]
dst: List[str]
dst: Optional[List[str]] = None
tmp2: Optional[str] = None
tmp_overrides: Optional[Dict[str, TmpOverrides]] = None
archive: Optional[Archive] = None
def dst_is_tmp(self):
return self.dst is None and self.tmp2 is None
def dst_is_tmp2(self):
return self.dst is None and self.tmp2 is not None
def get_dst_directories(self):
"""Returns either <Directories.dst> or <Directories.tmp>. If
Directories.dst is None, Use Directories.tmp as dst directory.
"""
if self.dst_is_tmp2():
return [self.tmp2]
elif self.dst_is_tmp():
return self.tmp
return self.dst
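The dst-fallback rules (`dst` if set, else `tmp2`, else `tmp`) are small enough to test as a pure function; this sketch mirrors the `dst_is_tmp`/`dst_is_tmp2` helpers above:

```python
def get_dst_directories(dst, tmp, tmp2):
    """Resolve destination directories: dst wins, else tmp2, else tmp."""
    if dst is None and tmp2 is not None:
        return [tmp2]
    if dst is None:
        return tmp
    return dst
```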
@attr.frozen

@@ -97,2 +268,4 @@ class Scheduling:

pool_pk: Optional[str] = None
pool_contract_address: Optional[str] = None
x: bool = False

@@ -104,2 +277,11 @@ @attr.frozen

@attr.frozen
class Interactive:
autostart_plotting: bool = True
autostart_archiving: bool = True
@attr.frozen
class Commands:
interactive: Interactive = attr.ib(factory=Interactive)
@attr.frozen
class PlotmanConfig:

@@ -109,2 +291,18 @@ directories: Directories

plotting: Plotting
commands: Commands = attr.ib(factory=Commands)
logging: Logging = Logging()
archiving: Optional[Archiving] = None
user_interface: UserInterface = attr.ib(factory=UserInterface)
version: List[int] = [0]
@contextlib.contextmanager
def setup(self):
prefix = f'plotman-pid_{os.getpid()}-'
self.logging.setup()
with tempfile.TemporaryDirectory(prefix=prefix) as temp:
if self.archiving is not None:
self.archiving.maybe_create_scripts(temp=temp)
yield

@@ -7,3 +7,3 @@ import curses

import subprocess
import sys
from plotman import archive, configuration, manager, reporting

@@ -66,13 +66,20 @@ from plotman.job import Job

def curses_main(stdscr):
# cmd_autostart_plotting is the (optional) argument passed from the command line. May be None
def curses_main(stdscr, cmd_autostart_plotting, cmd_autostart_archiving, cfg):
log = Log()
config_path = configuration.get_path()
config_text = configuration.read_configuration_text(config_path)
cfg = configuration.get_validated_configs(config_text, config_path)
if cmd_autostart_plotting is not None:
plotting_active = cmd_autostart_plotting
else:
plotting_active = cfg.commands.interactive.autostart_plotting
plotting_active = True
archiving_configured = cfg.directories.archive is not None
archiving_active = archiving_configured
archiving_configured = cfg.archiving is not None
if not archiving_configured:
archiving_active = False
elif cmd_autostart_archiving is not None:
archiving_active = cmd_autostart_archiving
else:
archiving_active = cfg.commands.interactive.autostart_archiving
plotting_status = '<startup>' # todo rename these msg?

@@ -90,3 +97,3 @@ archiving_status = '<startup>'

jobs = Job.get_running_jobs(cfg.directories.log)
jobs = Job.get_running_jobs(cfg.logging.plots)
last_refresh = None

@@ -109,15 +116,15 @@

else:
elapsed = (datetime.datetime.now() - last_refresh).total_seconds()
do_full_refresh = elapsed >= cfg.scheduling.polling_time_s
if not do_full_refresh:
jobs = Job.get_running_jobs(cfg.directories.log, cached_jobs=jobs)
jobs = Job.get_running_jobs(cfg.logging.plots, cached_jobs=jobs)
else:
last_refresh = datetime.datetime.now()
jobs = Job.get_running_jobs(cfg.directories.log)
jobs = Job.get_running_jobs(cfg.logging.plots)
if plotting_active:
(started, msg) = manager.maybe_start_new_plot(
cfg.directories, cfg.scheduling, cfg.plotting
cfg.directories, cfg.scheduling, cfg.plotting, cfg.logging
)

@@ -130,3 +137,3 @@ if (started):

plotting_status = '<just started job>'
jobs = Job.get_running_jobs(cfg.directories.log, cached_jobs=jobs)
jobs = Job.get_running_jobs(cfg.logging.plots, cached_jobs=jobs)
else:

@@ -140,7 +147,9 @@ # If a plot is delayed for any reason other than stagger, log it

if archiving_active:
archiving_status, log_message = archive.spawn_archive_process(cfg.directories, jobs)
if log_message:
archiving_status, log_messages = archive.spawn_archive_process(cfg.directories, cfg.archiving, cfg.logging, jobs)
for log_message in log_messages:
log.log(log_message)
archdir_freebytes = archive.get_archdir_freebytes(cfg.directories.archive)
archdir_freebytes, log_messages = archive.get_archdir_freebytes(cfg.archiving)
for log_message in log_messages:
log.log(log_message)

@@ -177,5 +186,10 @@

tmp_prefix = os.path.commonpath(cfg.directories.tmp)
dst_prefix = os.path.commonpath(cfg.directories.dst)
dst_dir = cfg.directories.get_dst_directories()
dst_prefix = os.path.commonpath(dst_dir)
if archiving_configured:
arch_prefix = cfg.directories.archive.rsyncd_path
archive_directories = archdir_freebytes.keys()
if len(archive_directories) == 0:
arch_prefix = ''
else:
arch_prefix = os.path.commonpath(archive_directories)

@@ -188,3 +202,3 @@ n_tmpdirs = len(cfg.directories.tmp)

dst_report = reporting.dst_dir_report(
jobs, cfg.directories.dst, n_cols, dst_prefix)
jobs, dst_dir, n_cols, dst_prefix)
if archiving_configured:

@@ -200,3 +214,3 @@ arch_report = reporting.arch_dir_report(archdir_freebytes, n_cols, arch_prefix)

#
tmp_h = len(tmp_report.splitlines())

@@ -248,3 +262,3 @@ tmp_w = len(max(tmp_report.splitlines(), key=len)) + 1

archiving_status_msg(archiving_configured,
archiving_active, archiving_status), linecap)

@@ -268,6 +282,6 @@ # Oneliner progress display

header_win.addnstr(' (remote)', linecap)
# Jobs
jobs_win.addstr(0, 0, reporting.status_report(jobs, n_cols, jobs_h,
tmp_prefix, dst_prefix))

@@ -338,4 +352,3 @@ jobs_win.chgat(0, 0, curses.A_REVERSE)

def run_interactive():
def run_interactive(cfg, autostart_plotting=None, autostart_archiving=None):
locale.setlocale(locale.LC_ALL, '')

@@ -346,3 +359,8 @@ code = locale.getpreferredencoding()

try:
curses.wrapper(curses_main)
curses.wrapper(
curses_main,
cmd_autostart_plotting=autostart_plotting,
cmd_autostart_archiving=autostart_archiving,
cfg=cfg,
)
except curses.error as e:

@@ -349,0 +367,0 @@ raise TerminalTooSmallError(

@@ -36,3 +36,3 @@ # TODO do we use all these?

len(cmdline) >= 3
and cmdline[0].endswith("chia")
and 'chia' in cmdline[0]
and 'plots' == cmdline[1]

@@ -141,7 +141,35 @@ and 'create' == cmdline[2]

for proc in psutil.process_iter(['pid', 'cmdline']):
# Ignore processes which most likely have terminated between the time of
# iteration and data access.
with contextlib.suppress(psutil.NoSuchProcess, psutil.AccessDenied):
if is_plotting_cmdline(proc.cmdline()):
with contextlib.ExitStack() as exit_stack:
processes = []
pids = set()
ppids = set()
for process in psutil.process_iter():
# Ignore processes which most likely have terminated between the time of
# iteration and data access.
with contextlib.suppress(psutil.NoSuchProcess, psutil.AccessDenied):
exit_stack.enter_context(process.oneshot())
if is_plotting_cmdline(process.cmdline()):
ppids.add(process.ppid())
pids.add(process.pid)
processes.append(process)
# https://github.com/ericaltendorf/plotman/pull/418
# The experimental Chia GUI .deb installer launches plots
# in a manner that results in a parent and child process
# that both share the same command line and, as such, are
# both identified as plot processes. Only the child is
# really plotting. Filter out the parent.
wanted_pids = pids - ppids
wanted_processes = [
process
for process in processes
if process.pid in wanted_pids
]
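The parent/child de-duplication above is just set arithmetic on pids and ppids: any matched process that is also the parent of another matched process is a wrapper, not a plotter. A psutil-free sketch over `(pid, ppid)` pairs:

```python
def filter_parent_wrappers(processes):
    """Drop processes whose pid is also some matched process's ppid.

    `processes` is a list of (pid, ppid) pairs for already-matched
    plot processes; only the true children survive.
    """
    pids = {pid for pid, _ in processes}
    ppids = {ppid for _, ppid in processes}
    wanted = pids - ppids
    return [(pid, ppid) for pid, ppid in processes if pid in wanted]
```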
for proc in wanted_processes:
with contextlib.suppress(psutil.NoSuchProcess, psutil.AccessDenied):
if proc.pid in cached_jobs_by_pid.keys():

@@ -151,4 +179,8 @@ jobs.append(cached_jobs_by_pid[proc.pid]) # Copy from cache

with proc.oneshot():
command_line = list(proc.cmdline())
if len(command_line) == 0:
# https://github.com/ericaltendorf/plotman/issues/610
continue
parsed_command = parse_chia_plots_create_command_line(
command_line=proc.cmdline(),
command_line=command_line,
)

@@ -227,7 +259,8 @@ if parsed_command.error is not None:

self.init_from_logfile()
else:
print('Found plotting process PID {pid}, but could not find '
'logfile in its open files:'.format(pid = self.proc.pid))
for f in self.proc.open_files():
print(f.path)
# TODO: turn this into logging or somesuch
# else:
# print('Found plotting process PID {pid}, but could not find '
# 'logfile in its open files:'.format(pid = self.proc.pid))
# for f in self.proc.open_files():
# print(f.path)

@@ -246,13 +279,14 @@

with open(self.logfile, 'r') as f:
for line in f:
m = re.match('^ID: ([0-9a-f]*)', line)
if m:
self.plot_id = m.group(1)
found_id = True
m = re.match(r'^Starting phase 1/4:.*\.\.\. (.*)', line)
if m:
# Mon Nov 2 08:39:53 2020
self.start_time = parse_chia_plot_time(m.group(1))
found_log = True
break # Stop reading lines in file
with contextlib.suppress(UnicodeDecodeError):
for line in f:
m = re.match('^ID: ([0-9a-f]*)', line)
if m:
self.plot_id = m.group(1)
found_id = True
m = re.match(r'^Starting phase 1/4:.*\.\.\. (.*)', line)
if m:
# Mon Nov 2 08:39:53 2020
self.start_time = parse_chia_plot_time(m.group(1))
found_log = True
break # Stop reading lines in file
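The log-scanning loop now tolerates `UnicodeDecodeError`, but the extraction itself is two regexes. A standalone sketch over in-memory lines (the suppression is omitted since no file I/O is involved):

```python
import re

def parse_plot_log_head(lines):
    """Extract the plot id and the phase-1 start-time text from log lines.

    Returns raw text, leaving timestamp parsing to the caller.
    """
    plot_id = None
    start_time_text = None
    for line in lines:
        m = re.match('^ID: ([0-9a-f]*)', line)
        if m:
            plot_id = m.group(1)
        m = re.match(r'^Starting phase 1/4:.*\.\.\. (.*)', line)
        if m:
            start_time_text = m.group(1)
            break  # nothing further needed once phase 1 starts
    return plot_id, start_time_text
```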

@@ -288,35 +322,36 @@ if found_id and found_log:

with open(self.logfile, 'r') as f:
for line in f:
# "Starting phase 1/4: Forward Propagation into tmp files... Sat Oct 31 11:27:04 2020"
m = re.match(r'^Starting phase (\d).*', line)
if m:
phase = int(m.group(1))
phase_subphases[phase] = 0
with contextlib.suppress(UnicodeDecodeError):
for line in f:
# "Starting phase 1/4: Forward Propagation into tmp files... Sat Oct 31 11:27:04 2020"
m = re.match(r'^Starting phase (\d).*', line)
if m:
phase = int(m.group(1))
phase_subphases[phase] = 0
# Phase 1: "Computing table 2"
m = re.match(r'^Computing table (\d).*', line)
if m:
phase_subphases[1] = max(phase_subphases[1], int(m.group(1)))
# Phase 1: "Computing table 2"
m = re.match(r'^Computing table (\d).*', line)
if m:
phase_subphases[1] = max(phase_subphases[1], int(m.group(1)))
# Phase 2: "Backpropagating on table 2"
m = re.match(r'^Backpropagating on table (\d).*', line)
if m:
phase_subphases[2] = max(phase_subphases[2], 7 - int(m.group(1)))
# Phase 2: "Backpropagating on table 2"
m = re.match(r'^Backpropagating on table (\d).*', line)
if m:
phase_subphases[2] = max(phase_subphases[2], 7 - int(m.group(1)))
# Phase 3: "Compressing tables 4 and 5"
m = re.match(r'^Compressing tables (\d) and (\d).*', line)
if m:
phase_subphases[3] = max(phase_subphases[3], int(m.group(1)))
# Phase 3: "Compressing tables 4 and 5"
m = re.match(r'^Compressing tables (\d) and (\d).*', line)
if m:
phase_subphases[3] = max(phase_subphases[3], int(m.group(1)))
# TODO also collect timing info:
# "Time for phase 1 = 22796.7 seconds. CPU (98%) Tue Sep 29 17:57:19 2020"
# for phase in ['1', '2', '3', '4']:
# m = re.match(r'^Time for phase ' + phase + ' = (\d+.\d+) seconds..*', line)
# data.setdefault....
# Total time = 49487.1 seconds. CPU (97.26%) Wed Sep 30 01:22:10 2020
# m = re.match(r'^Total time = (\d+.\d+) seconds.*', line)
# if m:
# data.setdefault(key, {}).setdefault('total time', []).append(float(m.group(1)))

@@ -357,10 +392,10 @@ if phase_subphases:

total_bytes = 0
with os.scandir(self.tmpdir) as it:
for entry in it:
if self.plot_id in entry.name:
try:
total_bytes += entry.stat().st_size
except FileNotFoundError:
# The file might disappear; this being an estimate we don't care
pass
with contextlib.suppress(FileNotFoundError):
# The directory might not exist at this name, or at all, anymore
with os.scandir(self.tmpdir) as it:
for entry in it:
if self.plot_id in entry.name:
with contextlib.suppress(FileNotFoundError):
# The file might disappear; this being an estimate we don't care
total_bytes += entry.stat().st_size
return total_bytes
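The rewritten tmpdir scan nests two `contextlib.suppress(FileNotFoundError)` blocks: the outer one covers the directory vanishing entirely, the inner one covers individual files deleted between `scandir` and `stat`. As a standalone function:

```python
import contextlib
import os

def job_size_on_disk(tmpdir, plot_id):
    """Sum sizes of temp files belonging to one plot id.

    FileNotFoundError is suppressed at both levels because files (or
    the whole directory) may vanish mid-scan; a best-effort estimate
    is fine here.
    """
    total_bytes = 0
    with contextlib.suppress(FileNotFoundError):
        with os.scandir(tmpdir) as it:
            for entry in it:
                if plot_id in entry.name:
                    with contextlib.suppress(FileNotFoundError):
                        total_bytes += entry.stat().st_size
    return total_bytes
```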

@@ -367,0 +402,0 @@

@@ -6,3 +6,2 @@ import logging

import re
import readline # For nice CLI
import subprocess

@@ -27,2 +26,4 @@ import sys

_WINDOWS = sys.platform == 'win32'
def dstdirs_to_furthest_phase(all_jobs):

@@ -77,4 +78,4 @@ '''Return a map from dst dir to a phase tuple for the most progressed job

def maybe_start_new_plot(dir_cfg, sched_cfg, plotting_cfg):
jobs = job.Job.get_running_jobs(dir_cfg.log)
def maybe_start_new_plot(dir_cfg, sched_cfg, plotting_cfg, log_cfg):
jobs = job.Job.get_running_jobs(log_cfg.plots)

@@ -95,3 +96,3 @@ wait_reason = None # If we don't start a job this iteration, this says why.

for (d, phases) in eligible ]
if not eligible:

@@ -103,15 +104,19 @@ wait_reason = 'no eligible tempdirs (%ds/%ds)' % (youngest_job_age, global_stagger)

# Select the dst dir least recently selected
dir2ph = { d:ph for (d, ph) in dstdirs_to_youngest_phase(jobs).items()
if d in dir_cfg.dst and ph is not None}
unused_dirs = [d for d in dir_cfg.dst if d not in dir2ph.keys()]
dstdir = ''
if unused_dirs:
dstdir = random.choice(unused_dirs)
if dir_cfg.dst_is_tmp2():
dstdir = dir_cfg.tmp2
elif dir_cfg.dst_is_tmp():
dstdir = tmpdir
else:
dstdir = max(dir2ph, key=dir2ph.get)
# Select the dst dir least recently selected
dst_dirs = dir_cfg.get_dst_directories()
dir2ph = { d:ph for (d, ph) in dstdirs_to_youngest_phase(jobs).items()
if d in dst_dirs and ph is not None}
unused_dirs = [d for d in dst_dirs if d not in dir2ph.keys()]
dstdir = ''
if unused_dirs:
dstdir = random.choice(unused_dirs)
else:
dstdir = max(dir2ph, key=dir2ph.get)
logfile = os.path.join(
dir_cfg.log, pendulum.now().isoformat(timespec='microseconds').replace(':', '_') + '.log'
)
log_file_path = log_cfg.create_plot_log_path(time=pendulum.now())

@@ -133,10 +138,16 @@ plot_args = ['chia', 'plots', 'create',

plot_args.append(plotting_cfg.pool_pk)
if plotting_cfg.pool_contract_address is not None:
plot_args.append('-c')
plot_args.append(plotting_cfg.pool_contract_address)
if dir_cfg.tmp2 is not None:
plot_args.append('-2')
plot_args.append(dir_cfg.tmp2)
if plotting_cfg.x:
plot_args.append('-x')
logmsg = ('Starting plot job: %s ; logging to %s' % (' '.join(plot_args), logfile))
logmsg = ('Starting plot job: %s ; logging to %s' % (' '.join(plot_args), log_file_path))
# TODO: CAMPid 09840103109429840981397487498131
try:
open_log_file = open(logfile, 'x')
open_log_file = open(log_file_path, 'x')
except FileExistsError:

@@ -151,3 +162,3 @@ # The desired log file name already exists. Most likely another

f'Plot log file already exists, skipping attempt to start a'
f' new plot: {logfile!r}'
f' new plot: {log_file_path!r}'
)

@@ -158,3 +169,3 @@ return (False, logmsg)

f'Unable to open log file. Verify that the directory exists'
f' and has proper write permissions: {logfile!r}'
f' and has proper write permissions: {log_file_path!r}'
)

@@ -170,9 +181,11 @@ raise Exception(message) from e

with open_log_file:
# start_new_sessions to make the job independent of this controlling tty.
# start_new_sessions to make the job independent of this controlling tty (POSIX only).
# subprocess.CREATE_NO_WINDOW to make the process independent of this controlling tty and have no console window on Windows.
p = subprocess.Popen(plot_args,
stdout=open_log_file,
stderr=subprocess.STDOUT,
start_new_session=True)
start_new_session=True,
creationflags=0 if not _WINDOWS else subprocess.CREATE_NO_WINDOW)
psutil.Process(p.pid).nice(15)
psutil.Process(p.pid).nice(15 if not _WINDOWS else psutil.BELOW_NORMAL_PRIORITY_CLASS)
return (True, logmsg)
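The launch logic above can be isolated as a small cross-platform helper: `start_new_session` detaches the child from the controlling tty on POSIX, while `CREATE_NO_WINDOW` suppresses the console window on Windows. A sketch under those assumptions (the function name is illustrative, not from plotman):

```python
import subprocess
import sys

_WINDOWS = sys.platform == 'win32'

def launch_detached(args, log_file):
    # Send both stdout and stderr to the already-open log file;
    # detach from the tty (POSIX) or hide the console (Windows).
    return subprocess.Popen(
        args,
        stdout=log_file,
        stderr=subprocess.STDOUT,
        start_new_session=True,
        creationflags=0 if not _WINDOWS else subprocess.CREATE_NO_WINDOW,
    )
```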

@@ -179,0 +192,0 @@

import math
import os
import re
import shutil
from plotman import chiapos

@@ -9,16 +11,26 @@ GB = 1_000_000_000

'Return free space for directory (in bytes)'
stat = os.statvfs(d)
return stat.f_frsize * stat.f_bavail
usage = shutil.disk_usage(d)
return usage.free
def get_k32_plotsize():
return 108 * GB
return get_plotsize(32)
def human_format(num, precision):
def get_plotsize(k):
return (int)(_get_plotsize_scaler(k) * k * pow(2, k))
def human_format(num, precision, powerOfTwo=False):
divisor = 1024 if powerOfTwo else 1000
magnitude = 0
while abs(num) >= 1000:
while abs(num) >= divisor:
magnitude += 1
num /= 1000.0
return (('%.' + str(precision) + 'f%s') %
num /= divisor
result = (('%.' + str(precision) + 'f%s') %
(num, ['', 'K', 'M', 'G', 'T', 'P'][magnitude]))
if powerOfTwo and magnitude > 0:
result += 'i'
return result
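The updated `human_format` above gains a binary-prefix mode: dividing by 1024 instead of 1000 and appending an `i` marker, so memory sizes render as `Gi` quantities while SI output is unchanged. A standalone sketch of the same logic:

```python
def human_format(num, precision, powerOfTwo=False):
    # Scale by 1000 (SI prefixes) or 1024 (binary prefixes,
    # marked with a trailing 'i', e.g. '3.5Gi').
    divisor = 1024 if powerOfTwo else 1000
    magnitude = 0
    while abs(num) >= divisor:
        magnitude += 1
        num /= divisor
    result = (('%.' + str(precision) + 'f%s') %
              (num, ['', 'K', 'M', 'G', 'T', 'P'][magnitude]))
    if powerOfTwo and magnitude > 0:
        result += 'i'
    return result
```

For example, `human_format(108_000_000_000, 0)` yields `'108G'`, while `human_format(3.5 * 2**30, 1, True)` yields `'3.5Gi'`.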
def time_format(sec):

@@ -60,3 +72,3 @@ if sec is None:

continue
return plots

@@ -75,1 +87,55 @@

# use k as the index to get plotsize_scaler; 0 means the value has not been calculated yet
# we can safely assume k will never exceed 100, given the exponential growth of plot file size; this avoids depending on constants from chiapos
_plotsize_scaler_cache = [0.0 for _ in range(0, 101)]
def calc_average_size_of_entry(k, table_index):
'''
calculate the average size of entries in bytes, given k and table_index
'''
# assumes that chia uses constant park size for each table
# it is approximately k/8; chia's actual park size calculation is used here for a more accurate estimate
return chiapos.CalculateParkSize(k, table_index) / chiapos.kEntriesPerPark
def _get_probability_of_entries_kept(k, table_index):
'''
get the probability of entries in the table at table_index that are not dropped
'''
# the formula is derived from https://www.chia.net/assets/proof_of_space.pdf, section Space Required, p5 and pt
if table_index > 5:
return 1
pow_2_k = 2**k
if table_index == 5:
return 1 - (1 - 2 / pow_2_k) ** pow_2_k # p5
else:
return 1 - (1 - 2 / pow_2_k) ** (_get_probability_of_entries_kept(k, table_index + 1) * pow_2_k) # pt
def _get_plotsize_scaler(k:int):
'''
get scaler for plot size so that the plot size can be calculated by scaler * k * 2 ** k
'''
result = _plotsize_scaler_cache[k]
if result > 0:
return result
result = _get_plotsize_scaler_impl(k)
_plotsize_scaler_cache[k] = result
return result
def _get_plotsize_scaler_impl(k):
'''
get scaler for plot size so that the plot size can be calculated by scaler * k * 2 ** k
'''
result = 0
# there are 7 tables
for i in range(1, 8):
probability = _get_probability_of_entries_kept(k, i)
average_size_of_entry = calc_average_size_of_entry(k, i)
scaler_for_table = probability * average_size_of_entry / k
result += scaler_for_table
return result
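The recursion above comes from the "Space Required" section of the Chia proof-of-space paper: tables 6 and 7 keep all entries, table 5 keeps the fraction p5 = 1 - (1 - 2/2^k)^(2^k), and each earlier table's survival rate depends on the table above it. A self-contained sketch of just that recursion (for large k, p5 approaches 1 - e^-2 ≈ 0.865):

```python
def probability_of_entries_kept(k, table_index):
    # Fraction of entries in `table_index` that survive back-propagation.
    if table_index > 5:
        return 1.0  # tables 6 and 7 keep everything
    n = 2 ** k
    if table_index == 5:
        return 1 - (1 - 2 / n) ** n  # p5, ~ 1 - e**-2 for large k
    # pt: survival here depends on how many entries the next table keeps
    return 1 - (1 - 2 / n) ** (probability_of_entries_kept(k, table_index + 1) * n)
```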
import argparse
import datetime
import importlib
import importlib.resources
import logging
import logging.handlers
import os

@@ -9,2 +12,4 @@ import random

import pendulum
# Plotman libraries

@@ -15,3 +20,2 @@ from plotman import analyzer, archive, configuration, interactive, manager, plot_util, reporting

class PlotmanArgParser:

@@ -32,6 +36,10 @@ def add_idprefix_arg(self, subparser):

sp.add_parser('status', help='show current plotting status')
sp.add_parser('dirs', help='show directories info')
sp.add_parser('interactive', help='run interactive control/monitoring mode')
p_interactive = sp.add_parser('interactive', help='run interactive control/monitoring mode')
p_interactive.add_argument('--autostart-plotting', action='store_true', default=None, dest='autostart_plotting')
p_interactive.add_argument('--no-autostart-plotting', action='store_false', default=None, dest='autostart_plotting')
p_interactive.add_argument('--autostart-archiving', action='store_true', default=None, dest='autostart_archiving')
p_interactive.add_argument('--no-autostart-archiving', action='store_false', default=None, dest='autostart_archiving')

@@ -92,2 +100,7 @@ sp.add_parser('dsched', help='print destination dir schedule')

class Iso8601Formatter(logging.Formatter):
def formatTime(self, record, datefmt=None):
time = pendulum.from_timestamp(timestamp=record.created, tz='local')
return time.isoformat(timespec='microseconds', )
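The new `Iso8601Formatter` above renders each record's timestamp as a local-time ISO 8601 string with microsecond precision via pendulum; the stdlib `datetime` module can express the same idea, as in this sketch:

```python
import datetime
import logging

class Iso8601Formatter(logging.Formatter):
    def formatTime(self, record, datefmt=None):
        # Local-time ISO 8601 with microseconds,
        # e.g. 2021-05-01T12:00:00.000000+02:00
        time = datetime.datetime.fromtimestamp(record.created).astimezone()
        return time.isoformat(timespec='microseconds')
```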
def main():

@@ -141,115 +154,141 @@ random.seed()

config_text = configuration.read_configuration_text(config_path)
cfg = configuration.get_validated_configs(config_text, config_path)
preset_target_definitions_text = importlib.resources.read_text(
plotman_resources, "target_definitions.yaml",
)
#
# Stay alive, spawning plot jobs
#
if args.cmd == 'plot':
print('...starting plot loop')
while True:
wait_reason = manager.maybe_start_new_plot(cfg.directories, cfg.scheduling, cfg.plotting)
cfg = configuration.get_validated_configs(config_text, config_path, preset_target_definitions_text)
# TODO: report this via a channel that can be polled on demand, so we don't spam the console
if wait_reason:
print('...sleeping %d s: %s' % (cfg.scheduling.polling_time_s, wait_reason))
with cfg.setup():
root_logger = logging.getLogger()
handler = logging.handlers.RotatingFileHandler(
backupCount=10,
encoding='utf-8',
filename=cfg.logging.application,
maxBytes=10_000_000,
)
formatter = Iso8601Formatter(fmt='%(asctime)s: %(message)s')
handler.setFormatter(formatter)
root_logger.addHandler(handler)
root_logger.setLevel(logging.INFO)
time.sleep(cfg.scheduling.polling_time_s)
#
# Stay alive, spawning plot jobs
#
if args.cmd == 'plot':
print('...starting plot loop')
while True:
wait_reason = manager.maybe_start_new_plot(cfg.directories, cfg.scheduling, cfg.plotting, cfg.logging)
#
# Analysis of completed jobs
#
elif args.cmd == 'analyze':
# TODO: report this via a channel that can be polled on demand, so we don't spam the console
if wait_reason:
print('...sleeping %d s: %s' % (cfg.scheduling.polling_time_s, wait_reason))
analyzer.analyze(args.logfile, args.clipterminals,
args.bytmp, args.bybitfield)
time.sleep(cfg.scheduling.polling_time_s)
else:
jobs = Job.get_running_jobs(cfg.directories.log)
#
# Analysis of completed jobs
#
elif args.cmd == 'analyze':
# Status report
if args.cmd == 'status':
print(reporting.status_report(jobs, get_term_width()))
analyzer.analyze(args.logfile, args.clipterminals,
args.bytmp, args.bybitfield)
# Directories report
elif args.cmd == 'dirs':
print(reporting.dirs_report(jobs, cfg.directories, cfg.scheduling, get_term_width()))
else:
jobs = Job.get_running_jobs(cfg.logging.plots)
elif args.cmd == 'interactive':
interactive.run_interactive()
# Status report
if args.cmd == 'status':
result = "{0}\n\n{1}\n\nUpdated at: {2}".format(
reporting.status_report(jobs, get_term_width()),
reporting.summary(jobs),
datetime.datetime.today().strftime("%c"),
)
print(result)
# Start running archival
elif args.cmd == 'archive':
print('...starting archive loop')
firstit = True
while True:
if not firstit:
print('Sleeping 60s until next iteration...')
time.sleep(60)
jobs = Job.get_running_jobs(cfg.directories.log)
firstit = False
# Directories report
elif args.cmd == 'dirs':
print(reporting.dirs_report(jobs, cfg.directories, cfg.scheduling, get_term_width()))
archiving_status, log_message = archive.spawn_archive_process(cfg.directories, jobs)
if log_message:
print(log_message)
elif args.cmd == 'interactive':
interactive.run_interactive(
cfg=cfg,
autostart_plotting=args.autostart_plotting,
autostart_archiving=args.autostart_archiving,
)
# Start running archival
elif args.cmd == 'archive':
print('...starting archive loop')
firstit = True
while True:
if not firstit:
print('Sleeping 60s until next iteration...')
time.sleep(60)
jobs = Job.get_running_jobs(cfg.logging.plots)
firstit = False
# Debugging: show the destination drive usage schedule
elif args.cmd == 'dsched':
for (d, ph) in manager.dstdirs_to_furthest_phase(jobs).items():
print(' %s : %s' % (d, str(ph)))
#
# Job control commands
#
elif args.cmd in [ 'details', 'files', 'kill', 'suspend', 'resume' ]:
print(args)
archiving_status, log_messages = archive.spawn_archive_process(cfg.directories, cfg.archiving, cfg.logging, jobs)
for log_message in log_messages:
print(log_message)
selected = []
# TODO: clean up treatment of wildcard
if args.idprefix[0] == 'all':
selected = jobs
else:
# TODO: allow multiple idprefixes, not just take the first
selected = manager.select_jobs_by_partial_id(jobs, args.idprefix[0])
if (len(selected) == 0):
print('Error: %s matched no jobs.' % args.idprefix[0])
elif len(selected) > 1:
print('Error: "%s" matched multiple jobs:' % args.idprefix[0])
for j in selected:
print(' %s' % j.plot_id)
selected = []
# Debugging: show the destination drive usage schedule
elif args.cmd == 'dsched':
for (d, ph) in manager.dstdirs_to_furthest_phase(jobs).items():
print(' %s : %s' % (d, str(ph)))
for job in selected:
if args.cmd == 'details':
print(job.status_str_long())
#
# Job control commands
#
elif args.cmd in [ 'details', 'files', 'kill', 'suspend', 'resume' ]:
print(args)
elif args.cmd == 'files':
temp_files = job.get_temp_files()
for f in temp_files:
print(' %s' % f)
selected = []
elif args.cmd == 'kill':
# First suspend so job doesn't create new files
print('Pausing PID %d, plot id %s' % (job.proc.pid, job.plot_id))
job.suspend()
# TODO: clean up treatment of wildcard
if args.idprefix[0] == 'all':
selected = jobs
else:
# TODO: allow multiple idprefixes, not just take the first
selected = manager.select_jobs_by_partial_id(jobs, args.idprefix[0])
if (len(selected) == 0):
print('Error: %s matched no jobs.' % args.idprefix[0])
elif len(selected) > 1:
print('Error: "%s" matched multiple jobs:' % args.idprefix[0])
for j in selected:
print(' %s' % j.plot_id)
selected = []
temp_files = job.get_temp_files()
print('Will kill pid %d, plot id %s' % (job.proc.pid, job.plot_id))
print('Will delete %d temp files' % len(temp_files))
conf = input('Are you sure? ("y" to confirm): ')
if (conf != 'y'):
print('canceled. If you wish to resume the job, do so manually.')
else:
print('killing...')
job.cancel()
print('cleaing up temp files...')
for job in selected:
if args.cmd == 'details':
print(job.status_str_long())
elif args.cmd == 'files':
temp_files = job.get_temp_files()
for f in temp_files:
os.remove(f)
print(' %s' % f)
elif args.cmd == 'suspend':
print('Suspending ' + job.plot_id)
job.suspend()
elif args.cmd == 'resume':
print('Resuming ' + job.plot_id)
job.resume()
elif args.cmd == 'kill':
# First suspend so job doesn't create new files
print('Pausing PID %d, plot id %s' % (job.proc.pid, job.plot_id))
job.suspend()
temp_files = job.get_temp_files()
print('Will kill pid %d, plot id %s' % (job.proc.pid, job.plot_id))
print('Will delete %d temp files' % len(temp_files))
conf = input('Are you sure? ("y" to confirm): ')
if (conf != 'y'):
print('canceled. If you wish to resume the job, do so manually.')
else:
print('killing...')
job.cancel()
print('cleaning up temp files...')
for f in temp_files:
os.remove(f)
elif args.cmd == 'suspend':
print('Suspending ' + job.plot_id)
job.suspend()
elif args.cmd == 'resume':
print('Resuming ' + job.plot_id)
job.resume()

@@ -6,2 +6,3 @@ import math

import texttable as tt # from somewhere?
from itertools import groupby

@@ -16,3 +17,3 @@ from plotman import archive, job, manager, plot_util

return path
def phase_str(phase):

@@ -42,3 +43,3 @@ if not phase.known:

n_to_char_map = dict(enumerate(" .:;!"))
if n < 0:

@@ -48,3 +49,3 @@ return 'X' # Should never be negative

n = len(n_to_char_map) - 1
return n_to_char_map[n]

@@ -69,2 +70,4 @@

# Command: plotman status
# Shows a general overview of all running jobs
def status_report(jobs, width, height=None, tmp_prefix='', dst_prefix=''):

@@ -93,6 +96,7 @@ '''height, if provided, will limit the number of rows in the table,

tab.set_header_align('r' * len(headings))
for i, j in enumerate(sorted(jobs, key=job.Job.get_time_wall)):
# Ellipsis row
if abbreviate_jobs_list and i == n_begin_rows:
row = ['...'] + ([''] * 13)
row = ['...'] + ([''] * (len(headings) - 1))
# Omitted row

@@ -106,15 +110,15 @@ elif abbreviate_jobs_list and i > n_begin_rows and i < (len(jobs) - n_end_rows):

with j.proc.oneshot():
row = [j.plot_id[:8],
j.k,
abbr_path(j.tmpdir, tmp_prefix),
abbr_path(j.dstdir, dst_prefix),
plot_util.time_format(j.get_time_wall()),
phase_str(j.progress()),
plot_util.human_format(j.get_tmp_usage(), 0),
j.proc.pid,
j.get_run_status(),
plot_util.human_format(j.get_mem_usage(), 1),
plot_util.time_format(j.get_time_user()),
plot_util.time_format(j.get_time_sys()),
plot_util.time_format(j.get_time_iowait())
row = [j.plot_id[:8], # Plot ID
j.k, # k size
abbr_path(j.tmpdir, tmp_prefix), # Temp directory
abbr_path(j.dstdir, dst_prefix), # Destination directory
plot_util.time_format(j.get_time_wall()), # Time wall
phase_str(j.progress()), # Overall progress (major:minor)
plot_util.human_format(j.get_tmp_usage(), 0), # Current temp file size
j.proc.pid, # System pid
j.get_run_status(), # OS status for the job process
plot_util.human_format(j.get_mem_usage(), 1, True), # Memory usage
plot_util.time_format(j.get_time_user()), # user time
plot_util.time_format(j.get_time_sys()), # system time
plot_util.time_format(j.get_time_iowait()) # io wait
]

@@ -132,5 +136,21 @@ except (psutil.NoSuchProcess, psutil.AccessDenied):

tab.set_deco(0) # No borders
# return ('tmp dir prefix: %s ; dst dir prefix: %s\n' % (tmp_prefix, dst_prefix)
return tab.draw()
def summary(jobs, tmp_prefix=''):
"""Creates a small summary of running jobs"""
summary = [
'Total jobs: {0}'.format(len(jobs))
]
# Number of jobs in each tmp disk
tmp_dir_paths = sorted([abbr_path(job.tmpdir, tmp_prefix) for job in jobs])
for key, group in groupby(tmp_dir_paths, lambda dir: dir):
summary.append(
'Jobs in {0}: {1}'.format(key, len(list(group)))
)
return '\n'.join(summary)
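The new `summary()` above counts jobs per tmp dir with `itertools.groupby`, which only merges runs of adjacent equal keys, hence the paths are sorted first. The counting step in isolation (the function name is illustrative):

```python
from itertools import groupby

def jobs_per_dir(tmp_dir_paths):
    # groupby collapses runs of equal keys, so sort before grouping.
    lines = ['Total jobs: {0}'.format(len(tmp_dir_paths))]
    for key, group in groupby(sorted(tmp_dir_paths)):
        lines.append('Jobs in {0}: {1}'.format(key, len(list(group))))
    return '\n'.join(lines)
```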
def tmp_dir_report(jobs, dir_cfg, sched_cfg, width, start_row=None, end_row=None, prefix=''):

@@ -155,3 +175,3 @@ '''start_row, end_row let you split the table up if you want'''

return tab.draw()
def dst_dir_report(jobs, dstdirs, width, prefix=''):

@@ -174,3 +194,3 @@ tab = tt.Texttable()

n_plots = len(dir_plots)
priority = archive.compute_priority(eldest_ph, gb_free, n_plots)
row = [abbr_path(d, prefix), n_plots, gb_free,

@@ -185,3 +205,3 @@ phases_str(phases, 5), priority]

def arch_dir_report(archdir_freebytes, width, prefix=''):
cells = ['%s:%5dGB' % (abbr_path(d, prefix), int(int(space) / plot_util.GB))
cells = ['%s:%5dG' % (abbr_path(d, prefix), int(int(space) / plot_util.GB))
for (d, space) in sorted(archdir_freebytes.items())]

@@ -201,13 +221,16 @@ if not cells:

# TODO: remove this
def dirs_report(jobs, dir_cfg, sched_cfg, width):
def dirs_report(jobs, dir_cfg, arch_cfg, sched_cfg, width):
dst_dir = dir_cfg.get_dst_directories()
reports = [
tmp_dir_report(jobs, dir_cfg, sched_cfg, width),
dst_dir_report(jobs, dir_cfg.dst, width),
dst_dir_report(jobs, dst_dir, width),
]
if dir_cfg.archive is not None:
if arch_cfg is not None:
freebytes, archive_log_messages = archive.get_archdir_freebytes(arch_cfg)
reports.extend([
'archive dirs free space:',
arch_dir_report(archive.get_archdir_freebytes(dir_cfg.archive), width),
arch_dir_report(freebytes, width),
*archive_log_messages,
])
return '\n'.join(reports) + '\n'
# Default/example plotman.yaml configuration file
# https://github.com/ericaltendorf/plotman/wiki/Configuration#versions
version: [1]
logging:
# One directory in which to store all plot job logs (the STDOUT/
# STDERR of all plot jobs). In order to monitor progress, plotman
# reads these logs on a regular basis, so using a fast drive is
# recommended.
plots: /home/chia/chia/logs
# transfers:
# application:
# Options for display and rendering

@@ -12,10 +24,12 @@ user_interface:

# Optional custom settings for the subcommands (status, interactive etc)
commands:
interactive:
# Set it to False if you don't want to auto start plotting when 'interactive' is run.
# You can override this value from the command line, type "plotman interactive -h" for details
autostart_plotting: True
autostart_archiving: True
# Where to plot and log.
directories:
# One directory in which to store all plot job logs (the STDOUT/
# STDERR of all plot jobs). In order to monitor progress, plotman
# reads these logs on a regular basis, so using a fast drive is
# recommended.
log: /home/chia/chia/logs
# One or more directories to use as tmp dirs for plotting. The

@@ -51,6 +65,9 @@ # scheduler will use all of them and distribute jobs among them.

# One or more directories; the scheduler will use all of them.
# These again are presumed to be on independent physical devices,
# so writes (plot jobs) and reads (archivals) can be scheduled
# to minimize IO contention.
# Optional: A list of one or more directories; the scheduler will
# use all of them. These again are presumed to be on independent
# physical devices so writes (plot jobs) and reads (archivals) can
# be scheduled to minimize IO contention.
#
# If dst is commented out, the tmp directories will be used as the
# buffer.
dst:

@@ -60,29 +77,33 @@ - /mnt/dst/00

# Archival configuration. Optional; if you do not wish to run the
# archiving operation, comment this section out.
#
# Currently archival depends on an rsync daemon running on the remote
# host.
# The archival also uses ssh to connect to the remote host and check
# for available directories. Set up ssh keys on the remote host to
# allow public key login from rsyncd_user.
# Complete example: https://github.com/ericaltendorf/plotman/wiki/Archiving
archive:
rsyncd_module: plots # Define this in remote rsyncd.conf.
rsyncd_path: /plots # This is used via ssh. Should match path
# defined in the module referenced above.
rsyncd_bwlimit: 80000 # Bandwidth limit in KB/s
rsyncd_host: myfarmer
rsyncd_user: chia
# Optional index. If omitted or set to 0, plotman will archive
# to the first archive dir with free space. If specified,
# plotman will skip forward up to 'index' drives (if they exist).
# This can be useful to reduce io contention on a drive on the
# archive host if you have multiple plotters (simultaneous io
# can still happen at the time a drive fills up.) E.g., if you
# have four plotters, you could set this to 0, 1, 2, and 3, on
# the 4 machines, or 0, 1, 0, 1.
# index: 0
# Archival configuration. Optional; if you do not wish to run the
# archiving operation, comment this section out. Almost everyone
# should be using the archival feature. It is meant to distribute
# plots among multiple disks filling them all. This can be done both
# to local and to remote disks.
#
# As of v0.4, archiving commands are highly configurable. The basic
# configuration consists of a script for checking available disk space
# and another for actually transferring plots. Each can be specified
# as either a path to an existing script or inline script contents.
# It is expected that most people will use existing recipes and will
# adjust them by specifying environment variables that will set their
# system specific values. These can be provided to the scripts via
# the `env` key. plotman will additionally provide `source` and
# `destination` environment variables to the transfer script so it
# knows the specifically selected items to process. plotman also needs
# to be able to generally detect if a transfer process is already
# running. To be able to identify externally launched transfers, the
# process name and an argument prefix to match must be provided. Note
# that variable substitution of environment variables including those
# specified in the env key can be used in both process name and process
# argument prefix elements but that they use the python substitution
# format.
#
# Complete example: https://github.com/ericaltendorf/plotman/wiki/Archiving
archiving:
target: local_rsync
env:
command: rsync
site_root: /farm/sites
# Plotting scheduling parameters

@@ -129,1 +150,5 @@ scheduling:

# pool_pk: ...
# If true, Skips adding [final dir] / dst to harvester for farming.
# Especially useful if you have harvesters that are running somewhere else
# and you are just plotting on the machine where plotman is running.
# x: True

@@ -1,1 +0,1 @@

0.3.1
0.4