The file is a Torch/pickle-serialized model bundle (a segmentation model with an EfficientNet encoder and its weights) containing many binary pickles and debug metadata. I found no explicit plaintext backdoor indicators (hardcoded credentials, network endpoints, shell commands) in the visible text, but the pickle/Torch serialization format is inherently dangerous to load from untrusted sources because unpickling can execute arbitrary code. Recommendation: treat this file as data only; do NOT load it with torch.load or pickle.load in an untrusted environment. Verify provenance (checksums, signatures, trusted origin) and, where possible, load it in a sandboxed environment or convert it to a safer format (e.g., ONNX via verified tooling). If you must use torch.load, ensure the file comes from a trusted source, pass weights_only=True where your PyTorch version supports it, set map_location explicitly, use strict=True when calling load_state_dict, and run in an isolated runtime.
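The danger described above is easy to demonstrate with the standard library alone: a pickle payload is a reconstruction recipe, not inert data, and loading it immediately runs attacker-chosen code. A minimal sketch (the class name and the benign eval payload are illustrative; a real attacker would use something like os.system):

```python
import pickle

class Malicious:
    """Demonstration object: pickle stores a recipe, not just data.

    __reduce__ tells pickle how to rebuild the object on load. An
    attacker can make it return ANY callable plus arguments, which
    the unpickler invokes the moment the bytes are loaded.
    """
    def __reduce__(self):
        # (callable, args) -- executed by pickle.loads
        return (eval, ("6 * 7",))

payload = pickle.dumps(Malicious())  # looks like inert bytes
result = pickle.loads(payload)       # ...but actually runs eval("6 * 7")
print(result)                        # -> 42
```

Note that the unpickled "object" is not even an instance of the original class; it is whatever the attacker's callable returned. This is why tensor-only formats such as safetensors or ONNX are generally preferred when distributing weights from untrusted sources.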
The package was live on PyPI for 8 hours and 39 minutes before removal. Socket users were protected even while the package was live.