loadtime 1.0.6 (PyPI)

Package to display a progress bar for long processes with uncertain end times

LoadTime

English | 日本語

LoadTime is a Python package designed specifically to tackle the challenge of long waiting times associated with loading large-scale pretrained language models, such as HuggingFace models, into GPU or CPU memory.

With LoadTime, instead of waiting in uncertainty, you can visualize the progress of your loading process.

Of course, it can also be used for other long-running operations; see the second example under Basic Usage.

Installation

You can install LoadTime via pip:

pip install loadtime

Key Features

  • Real-time tracking: LoadTime provides real-time tracking of the loading process. No more staring at a static screen!

  • Progress Bar: The package displays a progress bar showing how much of the process has been completed and how much remains. It takes the guesswork out of waiting!

  • Past Loading Time Cache: One unique feature of LoadTime is its ability to remember the time it took to load a model in the past. The package automatically caches the total loading time of your operations. The next time you load the same model, LoadTime uses this cached information to provide an even more accurate progress bar.

  • Customizable Display: LoadTime allows you to customize the progress display with your own messages. You can tailor the tool to fit your personal needs.

  • Optimized for HuggingFace Models: LoadTime has been optimized for loading HuggingFace models, with special handling of the download progress display when the model is not cached locally.

Basic Usage

Here is a simple example of how to use the LoadTime package:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from loadtime import LoadTime

model_path = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"

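# Wrap the slow load in LoadTime; calling the returned object runs fn
# while showing a live progress bar (and caches the total loading time).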
model = LoadTime(name=model_path,
                 fn=lambda: AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16))()

tokenizer = AutoTokenizer.from_pretrained(model_path) # important: load tokenizer after loading model
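
LoadTime is not limited to HuggingFace models. The sketch below wraps an arbitrary slow function using the same calling convention; the build_search_index function and its name are placeholders, and passing hf=False is an assumption that the HuggingFace-specific download handling should be switched off for generic work.

import time
from loadtime import LoadTime

def build_search_index():
    # Stand-in for any long-running job (index build, data import, ...).
    time.sleep(60)
    return "index ready"

# Same pattern as above: construct LoadTime with a name and the function to run,
# then call the instance. hf=False (an assumption) disables the HuggingFace-specific handling.
result = LoadTime(name='build_search_index', fn=build_search_index, hf=False)()

On the first run there is no timing history, so the bar is only a rough estimate; later runs reuse the cached total time described under Key Features, so the displayed progress tracks reality more closely.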

Initial Parameters

  • name: Name of the long-term process. For loading HuggingFace models, specify the model name.
  • message: Message to display. If omitted, a default message is used.
  • pbar: Set to True to display the progress bar and percentage.
  • dirname: Directory name for cache storage.
  • hf: Set to True when loading HuggingFace models. If the model data has not yet been downloaded to disk, HuggingFace's own loader shows the download progress, so this library does not display it.
  • fn: Function that executes the long-term process.
  • fn_print: Function that performs the display. If omitted, output goes to the console.
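
To illustrate how these parameters fit together, here is a sketch that extends the Basic Usage example with an explicit message, progress bar, and cache directory. The parameter names come from the table above, but the message text and the './loadtime_cache' path are made up for illustration; treat this as an assumption-based sketch rather than canonical usage.

import torch
from transformers import AutoModelForCausalLM
from loadtime import LoadTime

model_path = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"

model = LoadTime(name=model_path,
                 message='Loading RedPajama 3B...',  # custom progress message (illustrative text)
                 pbar=True,                          # show the progress bar and percentage
                 dirname='./loadtime_cache',         # hypothetical cache directory for past loading times
                 hf=True,                            # HuggingFace mode: let HF's own loader show download progress
                 fn=lambda: AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16))()

If you provide fn_print, LoadTime routes the display text through your function instead of printing to the console; its exact signature is not documented here, so it is omitted from the sketch.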

Take control of your loading times with LoadTime!
