
pytorch3dunet-smax
Link to forked project: https://github.com/wolny/pytorch-3dunet
SpotMAX: https://github.com/SchmollerLab/SpotMAX
PyTorch implementation of the 3D U-Net and its variants:
Standard 3D U-Net, based on "3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation" by Özgün Çiçek et al.
Residual 3D U-Net, based on "Superhuman Accuracy on the SNEMI3D Connectomics Challenge" by Kisuk Lee et al.
The code allows for training the U-Net for both semantic segmentation (binary and multi-class) and regression problems (e.g. denoising, learning deconvolutions).
Training the standard 2D U-Net is also possible; see 2DUnet_dsb2018 for an example configuration. Just make sure to keep the singleton z-dimension in your H5 dataset (i.e. (1, Y, X) instead of (Y, X)), because data loading / data augmentation always requires tensors of rank 3.
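As a minimal sketch (not part of the package), a 2D image and its mask could be written to an H5 file with the singleton z-dimension kept, using h5py; the raw/label dataset names follow the HDF5 layout described in the training section below, everything else is a placeholder:

import h5py
import numpy as np

# stand-in 2D image and label of shape (Y, X)
image = np.random.rand(512, 512).astype(np.float32)
mask = (image > 0.5).astype(np.uint8)

with h5py.File("dsb_sample.h5", "w") as f:
    # keep the singleton z-dimension: (1, Y, X) instead of (Y, X)
    f.create_dataset("raw", data=image[np.newaxis, ...])
    f.create_dataset("label", data=mask[np.newaxis, ...])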
The package has not been tested on Windows, however some users reported using it successfully on Windows.
Supported loss functions include:
DiceLoss: defined as 1 - DiceCoefficient, used for binary semantic segmentation; when more than 2 classes are present in the ground truth, it computes the DiceLoss per channel and averages the values.
BCEDiceLoss: a linear combination alpha * BCE + beta * Dice; alpha and beta can be specified in the loss section of the config.
CrossEntropyLoss: class weights can be specified via weight: [w_1, ..., w_k] in the loss section of the config.
For a detailed explanation of some of the supported loss functions see: Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations, Carole H. Sudre et al.
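To spell out the combination described above, here is a minimal, self-contained sketch of a Dice loss (1 - DiceCoefficient, computed per channel and averaged) and the alpha * BCE + beta * Dice combination; this is an illustration only, not the package's actual implementation:

import torch
import torch.nn.functional as F

def dice_loss(logits, targets, eps=1e-6):
    # logits/targets: (N, C, D, H, W), one binary mask per channel
    probs = torch.sigmoid(logits)
    probs = probs.flatten(start_dim=2)        # (N, C, D*H*W)
    targets = targets.flatten(start_dim=2)
    intersection = (probs * targets).sum(dim=2)
    denom = probs.sum(dim=2) + targets.sum(dim=2)
    dice_coeff = (2 * intersection + eps) / (denom + eps)
    # DiceLoss per channel, averaged over channels and batch
    return 1.0 - dice_coeff.mean()

def bce_dice_loss(logits, targets, alpha=1.0, beta=1.0):
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    return alpha * bce + beta * dice_loss(logits, targets)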
If no evaluation metric is specified, MeanIoU will be used by default.
The pytorch-3dunet package can be installed via conda:
conda create -n pytorch3dunet -c pytorch -c conda-forge -c awolny pytorch-3dunet
conda activate pytorch3dunet
After installation the following commands are accessible within the conda environment:
train3dunet for training the network and predict3dunet for prediction (see below).
Alternatively, install directly from source:
python setup.py install
Make sure that the installed PyTorch is compatible with your CUDA version, otherwise training/prediction will fail to run on the GPU. You can reinstall a PyTorch build compatible with your CUDA version in the pytorch3dunet environment by:
conda install -c pytorch cudatoolkit=<YOUR_CUDA_VERSION> pytorch
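A quick way to check that the installed build actually sees your GPU (plain PyTorch, nothing package-specific):

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version the build was compiled against
print(torch.cuda.is_available())  # must be True for GPU training/prediction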
Given that the pytorch-3dunet package was installed via conda as described above, one can train the network by simply invoking:
train3dunet --config <CONFIG>
where CONFIG is the path to a YAML configuration file, which specifies all aspects of the training procedure.
In order to train on your own data just provide the paths to your HDF5 training and validation datasets in the config.
The HDF5 files should contain the raw/label data sets in the following axis order: DHW (in case of 3D) or CDHW (in case of 4D).
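For example, a training volume could be written like this with h5py (shapes and values are placeholders; only the raw/label dataset names and the DHW/CDHW axis order come from the description above):

import h5py
import numpy as np

raw = np.random.rand(2, 80, 256, 256).astype(np.float32)               # CDHW (2 channels)
label = np.random.randint(0, 3, size=(80, 256, 256)).astype(np.uint8)  # DHW

with h5py.File("train_volume.h5", "w") as f:
    f.create_dataset("raw", data=raw, compression="gzip")
    f.create_dataset("label", data=label, compression="gzip")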
One can monitor the training progress with TensorBoard: tensorboard --logdir <checkpoint_dir>/logs/ (you need tensorflow installed in your conda env), where checkpoint_dir is the path to the checkpoint directory specified in the config.
When training with BCEWithLogitsLoss, DiceLoss, BCEDiceLoss or GeneralizedDiceLoss, the target dataset has to be 4D (one binary target mask per channel).
When training with WeightedCrossEntropyLoss, CrossEntropyLoss or PixelWiseCrossEntropyLoss, the target dataset has to be 3D; see also the PyTorch documentation for the CE loss: https://pytorch.org/docs/master/generated/torch.nn.CrossEntropyLoss.html
The final_sigmoid attribute in the model config section applies only at inference time (validation, test):
When training with cross-entropy based losses (WeightedCrossEntropyLoss, CrossEntropyLoss, PixelWiseCrossEntropyLoss), set final_sigmoid=False so that Softmax normalization is applied to the output.
When training with BCEWithLogitsLoss, DiceLoss, BCEDiceLoss or GeneralizedDiceLoss, set final_sigmoid=True.
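To make the expected target layouts concrete, an illustrative sketch (batch dimension omitted; the shapes are placeholders):

import torch

# Cross-entropy family (WeightedCrossEntropyLoss, CrossEntropyLoss,
# PixelWiseCrossEntropyLoss): 3D target with integer class indices
ce_target = torch.randint(0, 3, (80, 256, 256))               # (D, H, W)

# BCE/Dice family (BCEWithLogitsLoss, DiceLoss, BCEDiceLoss,
# GeneralizedDiceLoss): 4D target, one binary mask per channel
bce_target = torch.randint(0, 2, (3, 80, 256, 256)).float()   # (C, D, H, W)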
Given that the pytorch-3dunet package was installed via conda as described above, one can run prediction via:
predict3dunet --config <CONFIG>
In order to predict on your own data, just provide the path to your model as well as paths to HDF5 test files (see example test_config_segmentation.yaml).
In order to avoid patch boundary artifacts in the output prediction masks the patch predictions are averaged, so make sure that patch/stride params lead to overlapping blocks, e.g. patch: [64, 128, 128] stride: [32, 96, 96] will give you a 'halo' of 32 voxels in each direction.
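The overlap arithmetic behind the example above, as a quick check:

# per-direction overlap ('halo') between neighbouring patches is patch - stride
patch = [64, 128, 128]
stride = [32, 96, 96]
halo = [p - s for p, s in zip(patch, stride)]
print(halo)  # [32, 32, 32]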
By default, if multiple GPUs are available training/prediction will be run on all the GPUs using DataParallel.
If training/prediction on all available GPUs is not desirable, restrict the number of GPUs using CUDA_VISIBLE_DEVICES, e.g.
CUDA_VISIBLE_DEVICES=0,1 train3dunet --config <CONFIG>
or
CUDA_VISIBLE_DEVICES=0,1 predict3dunet --config <CONFIG>
Training/predictions configs can be found in 3DUnet_lightsheet_boundary. Pre-trained model weights available here. In order to use the pre-trained model on your own data:
download best_checkpoint.pytorch from the above link
run predict3dunet --config test_config.yml
set the pre_trained attribute in the YAML config to point to the best_checkpoint.pytorch path
The data used for training can be downloaded from the following OSF project:
Sample z-slice predictions on the test set (top: raw input, bottom: boundary predictions):
Training/predictions configs can be found in 3DUnet_confocal_boundary. Pre-trained model weights available here. In order to use the pre-trained model on your own data:
download best_checkpoint.pytorch from the above link
run predict3dunet --config test_config.yml
set the pre_trained attribute in the YAML config to point to the best_checkpoint.pytorch path
The data used for training can be downloaded from the following OSF project:
Sample z-slice predictions on the test set (top: raw input, bottom: boundary predictions):
Training/predictions configs can be found in 3DUnet_lightsheet_nuclei. Pre-trained model weights available here. In order to use the pre-trained model on your own data:
download best_checkpoint.pytorch from the above link
run predict3dunet --config test_config.yml
set the pre_trained attribute in the YAML config to point to the best_checkpoint.pytorch path (a sketch of doing this programmatically follows below)
The training and validation sets can be downloaded from the following OSF project: https://osf.io/thxzn/
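As a hypothetical convenience, the config could also be patched programmatically before running predict3dunet. Only the pre_trained attribute is mentioned in this README, so the section it is placed under here (model) is an assumption:

import yaml

# file name taken from the steps above
with open("test_config.yml") as f:
    config = yaml.safe_load(f)

# point the pre_trained attribute at the downloaded checkpoint
# (placing it under the 'model' section is an assumption, not documented here)
config.setdefault("model", {})["pre_trained"] = "/path/to/best_checkpoint.pytorch"

with open("test_config.yml", "w") as f:
    yaml.safe_dump(config, f)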
Sample z-slice predictions on the test set (top: raw input, bottom: nuclei predictions):
The data can be downloaded from: https://www.kaggle.com/c/data-science-bowl-2018/data
Training/predictions configs can be found in 2DUnet_dsb2018.
Sample predictions on the test image (top: raw input, bottom: nuclei predictions):
If you want to contribute back, please make a pull request.
If you use this code for your research, please cite as:
@article {10.7554/eLife.57613,
article_type = {journal},
title = {Accurate and versatile 3D segmentation of plant tissues at cellular resolution},
author = {Wolny, Adrian and Cerrone, Lorenzo and Vijayan, Athul and Tofanelli, Rachele and Barro, Amaya Vilches and Louveaux, Marion and Wenzl, Christian and Strauss, Sören and Wilson-Sánchez, David and Lymbouridou, Rena and Steigleder, Susanne S and Pape, Constantin and Bailoni, Alberto and Duran-Nebreda, Salva and Bassel, George W and Lohmann, Jan U and Tsiantis, Miltos and Hamprecht, Fred A and Schneitz, Kay and Maizel, Alexis and Kreshuk, Anna},
editor = {Hardtke, Christian S and Bergmann, Dominique C and Bergmann, Dominique C and Graeff, Moritz},
volume = 9,
year = 2020,
month = {jul},
pub_date = {2020-07-29},
pages = {e57613},
citation = {eLife 2020;9:e57613},
doi = {10.7554/eLife.57613},
url = {https://doi.org/10.7554/eLife.57613},
keywords = {instance segmentation, cell segmentation, deep learning, image analysis},
journal = {eLife},
issn = {2050-084X},
publisher = {eLife Sciences Publications, Ltd},
}