optype

Building blocks for precise & flexible type hints.



Installation

PyPI

Optype is available as optype on PyPI:

```shell
pip install optype
```

For optional NumPy support, it is recommended to use the numpy extra. This ensures that the installed numpy version is compatible with optype, following NEP 29 and SPEC 0.

```shell
pip install "optype[numpy]"
```

See the optype.numpy docs for more info.

Conda

Optype can also be installed with conda from the conda-forge channel:

```shell
conda install conda-forge::optype
```

Example

Let's say you're writing a twice(x) function that evaluates 2 * x. Implementing it is trivial, but what about the type annotations?

Because twice(2) == 4, twice(3.14) == 6.28, and twice('I') == 'II', it might seem like a good idea to type it as twice[T](x: T) -> T: .... However, that wouldn't cover cases such as twice(True) == 2 or twice((42, True)) == (42, True, 42, True), where the input and output types differ. Moreover, twice should accept any type with a custom __rmul__ method that accepts 2 as argument.

This is where optype comes in handy: it has single-method protocols for all the builtin special methods. For twice, we can use optype.CanRMul[T, R], which, as the name suggests, is a protocol with (only) the def __rmul__(self, lhs: T) -> R: ... method. With this, the twice function can be written as:

Python 3.10:

```python
from typing import Literal
from typing import TypeAlias, TypeVar
from optype import CanRMul

R = TypeVar("R")
Two: TypeAlias = Literal[2]
RMul2: TypeAlias = CanRMul[Two, R]


def twice(x: RMul2[R]) -> R:
    return 2 * x
```

Python 3.12+:

```python
from typing import Literal
from optype import CanRMul

type Two = Literal[2]
type RMul2[R] = CanRMul[Two, R]


def twice[R](x: RMul2[R]) -> R:
    return 2 * x
```

But what about types that implement __mul__ but not __rmul__? In this case, we could return x * 2 as fallback (assuming commutativity). Because the optype.Can* protocols are runtime-checkable, the revised twice2 function can be compactly written as:

Python 3.10:

```python
from optype import CanMul

Mul2: TypeAlias = CanMul[Two, R]
CMul2: TypeAlias = Mul2[R] | RMul2[R]


def twice2(x: CMul2[R]) -> R:
    if isinstance(x, CanRMul):
        return 2 * x
    else:
        return x * 2
```

Python 3.12+:

```python
from optype import CanMul

type Mul2[R] = CanMul[Two, R]
type CMul2[R] = Mul2[R] | RMul2[R]


def twice2[R](x: CMul2[R]) -> R:
    if isinstance(x, CanRMul):
        return 2 * x
    else:
        return x * 2
```

See examples/twice.py for the full example.

Reference

The API of optype is flat; a single import optype as opt is all you need (except for optype.numpy).

optype

There are four flavors of things that live within optype:

  • optype.Can{} types describe what can be done with an object. For instance, any CanAbs[T] type can be used as argument to the abs() builtin function, with return type T. Most Can{} protocols implement a single special method, whose name directly matches that of the type: CanAbs implements __abs__, CanAdd implements __add__, etc.
  • optype.Has{} is the analogue of Can{}, but for special attributes. HasName has a __name__ attribute, HasDict has a __dict__, etc.
  • optype.Does{} describes the type of operators. So DoesAbs is the type of the abs({}) builtin function, and DoesPos the type of the +{} prefix operator.
  • optype.do_{} are the correctly-typed implementations of Does{}. For each do_{} there is a Does{}, and vice versa. So do_abs: DoesAbs is the typed alias of abs({}), and do_pos: DoesPos is a typed version of operator.pos. The optype.do_ operators are more complete than the operator standard library, have runtime-accessible type annotations, and have names you don't need to know by heart.

The reference docs are structured as follows:

All typing protocols here live in the root optype namespace. They are runtime-checkable so that you can do e.g. isinstance('snail', optype.CanAdd), in case you want to check whether snail implements __add__.

Unlike collections.abc, optype's protocols aren't abstract base classes, i.e. they don't extend abc.ABC, only typing.Protocol. This allows the optype protocols to be used as building blocks for .pyi type stubs.

Builtin type conversion

The return type of these special methods is invariant. Python will raise an error if some other (sub)type is returned. This is why these optype interfaces don't accept generic type arguments.

| expression | function | type | method | type |
|---|---|---|---|---|
| `complex(_)` | `do_complex` | `DoesComplex` | `__complex__` | `CanComplex` |
| `float(_)` | `do_float` | `DoesFloat` | `__float__` | `CanFloat` |
| `int(_)` | `do_int` | `DoesInt` | `__int__` | `CanInt[R: int = int]` |
| `bool(_)` | `do_bool` | `DoesBool` | `__bool__` | `CanBool[R: bool = bool]` |
| `bytes(_)` | `do_bytes` | `DoesBytes` | `__bytes__` | `CanBytes[R: bytes = bytes]` |
| `str(_)` | `do_str` | `DoesStr` | `__str__` | `CanStr[R: str = str]` |

[!NOTE] The Can* interfaces of the types that can be used as typing.Literal accept an optional type parameter R. This can be used to indicate a literal return type, for surgically precise typing; e.g. None, True, and 42 are instances of CanBool[Literal[False]], CanInt[Literal[1]], and CanStr[Literal['42']], respectively.
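To make the invariant return type concrete, here's a stdlib-only sketch of a class that satisfies CanInt (the class name Celsius is purely illustrative):

```python
class Celsius:
    """A stdlib-only example of a type that satisfies optype.CanInt."""

    def __init__(self, degrees: float) -> None:
        self.degrees = degrees

    def __int__(self) -> int:
        # int(_) requires that __int__ returns an int (invariant return type)
        return round(self.degrees)


print(int(Celsius(21.7)))  # 22
```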

These formatting methods are allowed to return instances that are a subtype of the str builtin. The same holds for the __format__ argument. So if you're a 10x developer who wants to hack Python's f-strings, but only if your type hints are spot-on, optype is your friend.

| expression | function | type | method | type |
|---|---|---|---|---|
| `repr(_)` | `do_repr` | `DoesRepr` | `__repr__` | `CanRepr[R: str = str]` |
| `format(_, x)` | `do_format` | `DoesFormat` | `__format__` | `CanFormat[T: str = str, R: str = str]` |

Additionally, optype provides protocols for types with (custom) hash or index methods:

| expression | function | type | method | type |
|---|---|---|---|---|
| `hash(_)` | `do_hash` | `DoesHash` | `__hash__` | `CanHash` |
| `_.__index__()` (docs) | `do_index` | `DoesIndex` | `__index__` | `CanIndex[R: int = int]` |
Rich relations

The "rich" comparison special methods often return a bool. However, instances of any type can be returned (e.g. a numpy array). This is why the corresponding optype.Can* interfaces accept a second type argument for the return type, which defaults to bool when omitted. The first type parameter matches the passed method argument, i.e. the right-hand side operand, denoted here as x.

| expression | reflected | function | type | method | type |
|---|---|---|---|---|---|
| `_ == x` | `x == _` | `do_eq` | `DoesEq` | `__eq__` | `CanEq[T = object, R = bool]` |
| `_ != x` | `x != _` | `do_ne` | `DoesNe` | `__ne__` | `CanNe[T = object, R = bool]` |
| `_ < x` | `x > _` | `do_lt` | `DoesLt` | `__lt__` | `CanLt[T, R = bool]` |
| `_ <= x` | `x >= _` | `do_le` | `DoesLe` | `__le__` | `CanLe[T, R = bool]` |
| `_ > x` | `x < _` | `do_gt` | `DoesGt` | `__gt__` | `CanGt[T, R = bool]` |
| `_ >= x` | `x <= _` | `do_ge` | `DoesGe` | `__ge__` | `CanGe[T, R = bool]` |
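The numpy-style elementwise comparison mentioned above can be sketched with the stdlib alone (the class name ElementwiseEq is hypothetical; its __eq__ returns a list[bool], so it matches the shape of CanEq[object, list[bool]]):

```python
class ElementwiseEq:
    """__eq__ may return any type; here a list of bools, like a numpy array."""

    def __init__(self, values: list[float]) -> None:
        self.values = values

    def __eq__(self, other: object) -> list[bool]:  # type: ignore[override]
        # compare every element against the right-hand side operand
        return [v == other for v in self.values]


print(ElementwiseEq([1.0, 2.0, 2.0]) == 2.0)  # [False, True, True]
```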
Binary operations

In the Python docs, these are referred to as "arithmetic operations". But the operands aren't limited to numeric types, and the operations aren't required to be commutative, might be non-deterministic, and could have side effects. Classifying them as "arithmetic" is, at the very least, a bit of a stretch.

| expression | function | type | method | type |
|---|---|---|---|---|
| `_ + x` | `do_add` | `DoesAdd` | `__add__` | `CanAdd[T, R]` |
| `_ - x` | `do_sub` | `DoesSub` | `__sub__` | `CanSub[T, R]` |
| `_ * x` | `do_mul` | `DoesMul` | `__mul__` | `CanMul[T, R]` |
| `_ @ x` | `do_matmul` | `DoesMatmul` | `__matmul__` | `CanMatmul[T, R]` |
| `_ / x` | `do_truediv` | `DoesTruediv` | `__truediv__` | `CanTruediv[T, R]` |
| `_ // x` | `do_floordiv` | `DoesFloordiv` | `__floordiv__` | `CanFloordiv[T, R]` |
| `_ % x` | `do_mod` | `DoesMod` | `__mod__` | `CanMod[T, R]` |
| `divmod(_, x)` | `do_divmod` | `DoesDivmod` | `__divmod__` | `CanDivmod[T, R]` |
| `_ ** x` or `pow(_, x)` | `do_pow/2` | `DoesPow` | `__pow__` | `CanPow2[T, R]` or `CanPow[T, None, R, Never]` |
| `pow(_, x, m)` | `do_pow/3` | `DoesPow` | `__pow__` | `CanPow3[T, M, R]` or `CanPow[T, M, Never, R]` |
| `_ << x` | `do_lshift` | `DoesLshift` | `__lshift__` | `CanLshift[T, R]` |
| `_ >> x` | `do_rshift` | `DoesRshift` | `__rshift__` | `CanRshift[T, R]` |
| `_ & x` | `do_and` | `DoesAnd` | `__and__` | `CanAnd[T, R]` |
| `_ ^ x` | `do_xor` | `DoesXor` | `__xor__` | `CanXor[T, R]` |
| `_ \| x` | `do_or` | `DoesOr` | `__or__` | `CanOr[T, R]` |

[!NOTE] Because pow() can take an optional third argument, optype provides separate interfaces for pow() with two and three arguments. Additionally, there is the overloaded intersection type CanPow[T, M, R, RM] =: CanPow2[T, R] & CanPow3[T, M, RM], as interface for types that can take an optional third argument.

Reflected operations

For the binary infix operators above, optype additionally provides interfaces with reflected (swapped) operands, e.g. __radd__ is a reflected __add__. They are named like the original, but with a CanR prefix instead of Can, i.e. __name__.replace('Can', 'CanR').

| expression | function | type | method | type |
|---|---|---|---|---|
| `x + _` | `do_radd` | `DoesRAdd` | `__radd__` | `CanRAdd[T, R]` |
| `x - _` | `do_rsub` | `DoesRSub` | `__rsub__` | `CanRSub[T, R]` |
| `x * _` | `do_rmul` | `DoesRMul` | `__rmul__` | `CanRMul[T, R]` |
| `x @ _` | `do_rmatmul` | `DoesRMatmul` | `__rmatmul__` | `CanRMatmul[T, R]` |
| `x / _` | `do_rtruediv` | `DoesRTruediv` | `__rtruediv__` | `CanRTruediv[T, R]` |
| `x // _` | `do_rfloordiv` | `DoesRFloordiv` | `__rfloordiv__` | `CanRFloordiv[T, R]` |
| `x % _` | `do_rmod` | `DoesRMod` | `__rmod__` | `CanRMod[T, R]` |
| `divmod(x, _)` | `do_rdivmod` | `DoesRDivmod` | `__rdivmod__` | `CanRDivmod[T, R]` |
| `x ** _` or `pow(x, _)` | `do_rpow` | `DoesRPow` | `__rpow__` | `CanRPow[T, R]` |
| `x << _` | `do_rlshift` | `DoesRLshift` | `__rlshift__` | `CanRLshift[T, R]` |
| `x >> _` | `do_rrshift` | `DoesRRshift` | `__rrshift__` | `CanRRshift[T, R]` |
| `x & _` | `do_rand` | `DoesRAnd` | `__rand__` | `CanRAnd[T, R]` |
| `x ^ _` | `do_rxor` | `DoesRXor` | `__rxor__` | `CanRXor[T, R]` |
| `x \| _` | `do_ror` | `DoesROr` | `__ror__` | `CanROr[T, R]` |

[!NOTE] CanRPow corresponds to CanPow2; the 3-parameter "modulo" pow does not reflect in Python.

According to the relevant Python docs:

Note that ternary pow() will not try calling __rpow__() (the coercion rules would become too complicated).

Inplace operations

Similar to the reflected ops, the inplace/augmented ops are prefixed with CanI, namely:

| expression | function | type | method | types |
|---|---|---|---|---|
| `_ += x` | `do_iadd` | `DoesIAdd` | `__iadd__` | `CanIAdd[T, R]` or `CanIAddSelf[T]` |
| `_ -= x` | `do_isub` | `DoesISub` | `__isub__` | `CanISub[T, R]` or `CanISubSelf[T]` |
| `_ *= x` | `do_imul` | `DoesIMul` | `__imul__` | `CanIMul[T, R]` or `CanIMulSelf[T]` |
| `_ @= x` | `do_imatmul` | `DoesIMatmul` | `__imatmul__` | `CanIMatmul[T, R]` or `CanIMatmulSelf[T]` |
| `_ /= x` | `do_itruediv` | `DoesITruediv` | `__itruediv__` | `CanITruediv[T, R]` or `CanITruedivSelf[T]` |
| `_ //= x` | `do_ifloordiv` | `DoesIFloordiv` | `__ifloordiv__` | `CanIFloordiv[T, R]` or `CanIFloordivSelf[T]` |
| `_ %= x` | `do_imod` | `DoesIMod` | `__imod__` | `CanIMod[T, R]` or `CanIModSelf[T]` |
| `_ **= x` | `do_ipow` | `DoesIPow` | `__ipow__` | `CanIPow[T, R]` or `CanIPowSelf[T]` |
| `_ <<= x` | `do_ilshift` | `DoesILshift` | `__ilshift__` | `CanILshift[T, R]` or `CanILshiftSelf[T]` |
| `_ >>= x` | `do_irshift` | `DoesIRshift` | `__irshift__` | `CanIRshift[T, R]` or `CanIRshiftSelf[T]` |
| `_ &= x` | `do_iand` | `DoesIAnd` | `__iand__` | `CanIAnd[T, R]` or `CanIAndSelf[T]` |
| `_ ^= x` | `do_ixor` | `DoesIXor` | `__ixor__` | `CanIXor[T, R]` or `CanIXorSelf[T]` |
| `_ \|= x` | `do_ior` | `DoesIOr` | `__ior__` | `CanIOr[T, R]` or `CanIOrSelf[T]` |

These inplace operators usually return themselves (after some in-place mutation). But unfortunately, it currently isn't possible to use Self for this (i.e. something like type MyAlias[T] = optype.CanIAdd[T, Self] isn't allowed). So to help ease this unbearable pain, optype comes equipped with ready-made aliases for you to use. They bear the same name, with an additional *Self suffix, e.g. optype.CanIAddSelf[T].

Unary operations
| expression | function | type | method | types |
|---|---|---|---|---|
| `+_` | `do_pos` | `DoesPos` | `__pos__` | `CanPos[R]` or `CanPosSelf` |
| `-_` | `do_neg` | `DoesNeg` | `__neg__` | `CanNeg[R]` or `CanNegSelf` |
| `~_` | `do_invert` | `DoesInvert` | `__invert__` | `CanInvert[R]` or `CanInvertSelf` |
| `abs(_)` | `do_abs` | `DoesAbs` | `__abs__` | `CanAbs[R]` or `CanAbsSelf` |
Rounding

The round() built-in function takes an optional second argument. From a typing perspective, round() has two overloads, one with 1 parameter, and one with two. For both overloads, optype provides separate operand interfaces: CanRound1[R] and CanRound2[T, RT]. Additionally, optype also provides their (overloaded) intersection type: CanRound[T, R, RT] = CanRound1[R] & CanRound2[T, RT].

| expression | function | type | method | type |
|---|---|---|---|---|
| `round(_)` | `do_round/1` | `DoesRound` | `__round__/1` | `CanRound1[R = int]` |
| `round(_, n)` | `do_round/2` | `DoesRound` | `__round__/2` | `CanRound2[T = int, RT = float]` |
| `round(_, n=...)` | `do_round` | `DoesRound` | `__round__` | `CanRound[T = int, R = int, RT = float]` |

For example, type-checkers will mark the following code as valid (tested with pyright in strict mode):

```python
from optype import CanRound, CanRound1, CanRound2

x: float = 3.14
x1: CanRound1[int] = x
x2: CanRound2[int, float] = x
x3: CanRound[int, int, float] = x
```

Furthermore, there are the alternative rounding functions from the math standard library:

| expression | function | type | method | type |
|---|---|---|---|---|
| `math.trunc(_)` | `do_trunc` | `DoesTrunc` | `__trunc__` | `CanTrunc[R = int]` |
| `math.floor(_)` | `do_floor` | `DoesFloor` | `__floor__` | `CanFloor[R = int]` |
| `math.ceil(_)` | `do_ceil` | `DoesCeil` | `__ceil__` | `CanCeil[R = int]` |

Almost all implementations use int for R. In fact, if no type for R is specified, it will default to int. But technically speaking, these methods can be made to return anything.
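For instance, a stdlib-only sketch of a type implementing __floor__ and __ceil__ (the class name Fixed is hypothetical; it matches the shapes of CanFloor[int] and CanCeil[int]):

```python
import math


class Fixed:
    """Supports math.floor() and math.ceil() via the special methods."""

    def __init__(self, value: float) -> None:
        self.value = value

    def __floor__(self) -> int:
        return math.floor(self.value)

    def __ceil__(self) -> int:
        return math.ceil(self.value)


print(math.floor(Fixed(2.7)), math.ceil(Fixed(2.2)))  # 2 3
```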

Callables

Unlike the operator standard library, optype also provides an operator for callable objects: optype.do_call(f, *args, **kwargs).

CanCall is similar to collections.abc.Callable, but is runtime-checkable, and doesn't use esoteric hacks.

| expression | function | type | method | type |
|---|---|---|---|---|
| `_(*args, **kwargs)` | `do_call` | `DoesCall` | `__call__` | `CanCall[**Pss, R]` |

[!NOTE] Pyright (and probably other typecheckers) tend to accept collections.abc.Callable in more places than optype.CanCall. This could be related to the lack of co/contra-variance specification for typing.ParamSpec (they should almost always be contravariant, but currently they can only be invariant).

In case you encounter such a situation, please open an issue about it, so we can investigate further.

Iteration

The operand x of iter(_) is within Python known as an iterable, which is what collections.abc.Iterable[V] is often used for (e.g. as base class, or for instance checking).

The optype analogue is CanIter[R], which as the name suggests, also implements __iter__. But unlike Iterable[V], its type parameter R binds to the return type of iter(_) -> R. This makes it possible to annotate the specific type of the iterable that iter(_) returns. Iterable[V] is only able to annotate the type of the iterated value. To see why that isn't possible, see python/typing#548.

The collections.abc.Iterator[V] is even more awkward; it is a subtype of Iterable[V]. For those familiar with collections.abc this might come as a surprise, but an iterator only needs to implement __next__, __iter__ isn't needed. This means that the Iterator[V] is unnecessarily restrictive. Apart from that being theoretically "ugly", it has significant performance implications, because the time-complexity of isinstance on a typing.Protocol is $O(n)$, with the $n$ referring to the amount of members. So even if the overhead of the inheritance and the abc.ABC usage is ignored, collections.abc.Iterator is twice as slow as it needs to be.

That's one of the (many) reasons that optype.CanNext[V] and optype.CanIter[R] are the better alternatives to Iterable and Iterator from the abracadabra collections. This is how they are defined:

| expression | function | type | method | type |
|---|---|---|---|---|
| `next(_)` | `do_next` | `DoesNext` | `__next__` | `CanNext[V]` |
| `iter(_)` | `do_iter` | `DoesIter` | `__iter__` | `CanIter[R: CanNext[object]]` |

For the sake of compatibility with collections.abc, there is optype.CanIterSelf[V], which is a protocol whose __iter__ returns typing.Self, and whose __next__ returns V. I.e. it is equivalent to collections.abc.Iterator[V], but without the abc nonsense.
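The "an iterator only needs __next__" point can be demonstrated with the stdlib alone (the class name CountTo is hypothetical; it matches the shape of CanNext[int], and next() works on it even though it defines no __iter__):

```python
class CountTo:
    """Implements only __next__; no __iter__ is needed for next() to work."""

    def __init__(self, n: int) -> None:
        self.i, self.n = 0, n

    def __next__(self) -> int:
        if self.i >= self.n:
            raise StopIteration
        self.i += 1
        return self.i


c = CountTo(2)
print(next(c), next(c))  # 1 2
```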

Awaitables

optype.CanAwait[R] is almost the same as collections.abc.Awaitable[R], except that CanAwait[R] is a pure interface, whereas Awaitable is also an abstract base class (making it absolutely useless when writing stubs).

| expression | method | type |
|---|---|---|
| `await _` | `__await__` | `CanAwait[R]` |
Async Iteration

Yes, you guessed it right; the abracadabra collections made the exact same mistakes for the async iterablors (or was it "iteramblers"...?).

But fret not; the optype alternatives are right here:

| expression | function | type | method | type |
|---|---|---|---|---|
| `anext(_)` | `do_anext` | `DoesANext` | `__anext__` | `CanANext[V]` |
| `aiter(_)` | `do_aiter` | `DoesAIter` | `__aiter__` | `CanAIter[R: CanANext[object]]` |

But wait, shouldn't V be a CanAwait? Well, only if you don't want to get fired... Technically speaking, __anext__ can return any type, and anext will pass it along without nagging (instance checks are slow, now stop bothering that liberal). For details, see the discussion at python/typeshed#7491. Just because something is legal, doesn't mean it's a good idea (don't eat the yellow snow).

Additionally, there is optype.CanAIterSelf[R], with both the __aiter__() -> Self and the __anext__() -> V methods.
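A stdlib-only sketch of such an async iterator whose __aiter__ returns itself (the class name ACount is hypothetical; it matches the shape of CanAIterSelf):

```python
import asyncio


class ACount:
    """An async iterator: __aiter__ returns self, __anext__ yields 1..n."""

    def __init__(self, n: int) -> None:
        self.i, self.n = 0, n

    def __aiter__(self) -> "ACount":
        return self

    async def __anext__(self) -> int:
        if self.i >= self.n:
            raise StopAsyncIteration
        self.i += 1
        return self.i


async def main() -> list[int]:
    return [x async for x in ACount(3)]


print(asyncio.run(main()))  # [1, 2, 3]
```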

Containers
| expression | function | type | method | type |
|---|---|---|---|---|
| `len(_)` | `do_len` | `DoesLen` | `__len__` | `CanLen[R: int = int]` |
| `_.__length_hint__()` (docs) | `do_length_hint` | `DoesLengthHint` | `__length_hint__` | `CanLengthHint[R: int = int]` |
| `_[k]` | `do_getitem` | `DoesGetitem` | `__getitem__` | `CanGetitem[K, V]` |
| `_.__missing__()` (docs) | `do_missing` | `DoesMissing` | `__missing__` | `CanMissing[K, D]` |
| `_[k] = v` | `do_setitem` | `DoesSetitem` | `__setitem__` | `CanSetitem[K, V]` |
| `del _[k]` | `do_delitem` | `DoesDelitem` | `__delitem__` | `CanDelitem[K]` |
| `k in _` | `do_contains` | `DoesContains` | `__contains__` | `CanContains[K = object]` |
| `reversed(_)` | `do_reversed` | `DoesReversed` | `__reversed__` | `CanReversed[R]` or `CanSequence[I, V, N = int]` |

Because CanMissing[K, D] generally doesn't show itself without CanGetitem[K, V] there to hold its hand, optype conveniently stitched them together as optype.CanGetMissing[K, V, D=V].

Similarly, there is optype.CanSequence[I: CanIndex | slice, V], which is the combination of both CanLen and CanGetitem[I, V], and serves as a more specific and flexible collections.abc.Sequence[V].
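The __missing__ hook is easiest to see on a dict subclass, which is how the stdlib itself uses it (the class name Sparse is hypothetical; together with dict's own __getitem__ it matches the shape of CanGetMissing[str, int, int]):

```python
class Sparse(dict):
    """A dict whose __missing__ supplies a default for absent keys."""

    def __missing__(self, key: str) -> int:
        # dict.__getitem__ calls this instead of raising KeyError
        return 0


s = Sparse(a=1)
print(s["a"], s["b"])  # 1 0
```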

Attributes
| expression | function | type | method | type |
|---|---|---|---|---|
| `v = _.k` or `v = getattr(_, k)` | `do_getattr` | `DoesGetattr` | `__getattr__` | `CanGetattr[K: str = str, V = object]` |
| `_.k = v` or `setattr(_, k, v)` | `do_setattr` | `DoesSetattr` | `__setattr__` | `CanSetattr[K: str = str, V = object]` |
| `del _.k` or `delattr(_, k)` | `do_delattr` | `DoesDelattr` | `__delattr__` | `CanDelattr[K: str = str]` |
| `dir(_)` | `do_dir` | `DoesDir` | `__dir__` | `CanDir[R: CanIter[CanIterSelf[str]]]` |
Context managers

Support for the with statement.

| expression | method(s) | type(s) |
|---|---|---|
| | `__enter__` | `CanEnter[C]` or `CanEnterSelf` |
| | `__exit__` | `CanExit[R = None]` |
| `with _ as c:` | `__enter__` and `__exit__` | `CanWith[C, R = None]` or `CanWithSelf[R = None]` |

CanEnterSelf and CanWithSelf are (runtime-checkable) aliases for CanEnter[Self] and CanWith[Self, R], respectively.
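A stdlib-only sketch of the CanEnterSelf / CanExit[None] shape (the class name Resource is hypothetical):

```python
class Resource:
    """__enter__ returns self; __exit__ returns None."""

    def __init__(self) -> None:
        self.open = False

    def __enter__(self) -> "Resource":
        self.open = True
        return self

    def __exit__(self, *exc: object) -> None:
        # returning None (falsy) means exceptions are not suppressed
        self.open = False


with Resource() as r:
    print(r.open)  # True
print(r.open)  # False
```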

For the async with statement the interfaces look very similar:

| expression | method(s) | type(s) |
|---|---|---|
| | `__aenter__` | `CanAEnter[C]` or `CanAEnterSelf` |
| | `__aexit__` | `CanAExit[R = None]` |
| `async with _ as c:` | `__aenter__` and `__aexit__` | `CanAsyncWith[C, R = None]` or `CanAsyncWithSelf[R = None]` |
Descriptors

Interfaces for descriptors.

| expression | method | type |
|---|---|---|
| `v: V = T().d` or `vt: VT = T.d` | `__get__` | `CanGet[T: object, V, VT = V]` |
| `T().k = v` | `__set__` | `CanSet[T: object, V]` |
| `del T().k` | `__delete__` | `CanDelete[T: object]` |
| `class T: d = _` | `__set_name__` | `CanSetName[T: object, N: str = str]` |
Buffer types

Interfaces for emulating buffer types using the buffer protocol.

| expression | method | type |
|---|---|---|
| `v = memoryview(_)` | `__buffer__` | `CanBuffer[T: int = int]` |
| `del v` | `__release_buffer__` | `CanReleaseBuffer` |

optype.copy

For the copy standard library, optype.copy provides the following runtime-checkable interfaces:

| `copy` function | method | type |
|---|---|---|
| `copy.copy(_) -> R` | `__copy__() -> R` | `CanCopy[R]` |
| `copy.deepcopy(_, memo={}) -> R` | `__deepcopy__(memo, /) -> R` | `CanDeepcopy[R]` |
| `copy.replace(_, /, **changes: V) -> R` [1] | `__replace__(**changes: V) -> R` | `CanReplace[V, R]` |

[1] copy.replace requires python>=3.13 (but optype.copy.CanReplace doesn't)

In practice, it makes sense that a copy of an instance has the same type as the original. But because typing.Self cannot be used as a type argument, this is difficult to type properly. Instead, you can use the optype.copy.Can{}Self types, which are the runtime-checkable equivalents of the following (recursive) type aliases:

```python
type CanCopySelf = CanCopy[CanCopySelf]
type CanDeepcopySelf = CanDeepcopy[CanDeepcopySelf]
type CanReplaceSelf[V] = CanReplace[V, CanReplaceSelf[V]]
```
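A stdlib-only sketch of a type implementing __copy__ and __deepcopy__ (the class name Box is hypothetical; it matches the shapes of CanCopySelf and CanDeepcopySelf):

```python
import copy


class Box:
    """Customizes both copy.copy() and copy.deepcopy()."""

    def __init__(self, items: list[int]) -> None:
        self.items = items

    def __copy__(self) -> "Box":
        return Box(self.items)  # shallow: the new Box shares the list

    def __deepcopy__(self, memo: dict) -> "Box":
        return Box(copy.deepcopy(self.items, memo))


b = Box([1, 2])
shallow, deep = copy.copy(b), copy.deepcopy(b)
print(shallow.items is b.items, deep.items is b.items)  # True False
```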

optype.dataclasses

For the dataclasses standard library, optype.dataclasses provides the HasDataclassFields[V: Mapping[str, Field]] interface. It can conveniently be used to check whether a type or instance is a dataclass, i.e. isinstance(obj, HasDataclassFields).

optype.inspect

A collection of functions for runtime inspection of types, modules, and other objects.

FunctionDescription
get_args(_)

A better alternative to typing.get_args(), that

  • unpacks typing.Annotated and Python 3.12 type _ alias types (i.e. typing.TypeAliasType),
  • recursively flattens unions and nested typing.Literal types, and
  • raises TypeError if not a type expression.

Returns a tuple[type | object, ...] of type arguments or parameters.

To illustrate one of the (many) issues with typing.get_args:

```python
>>> from typing import Literal, TypeAlias, get_args
>>> Falsy: TypeAlias = Literal[None] | Literal[False, 0] | Literal["", b""]
>>> get_args(Falsy)
(typing.Literal[None], typing.Literal[False, 0], typing.Literal['', b''])
```

But this is in direct contradiction with the official typing documentation:

When a Literal is parameterized with more than one value, it’s treated as exactly equivalent to the union of those types. That is, Literal[v1, v2, v3] is equivalent to Literal[v1] | Literal[v2] | Literal[v3].

So this is why optype.inspect.get_args should be used instead:

```python
>>> import optype as opt
>>> opt.inspect.get_args(Falsy)
(None, False, 0, '', b'')
```

Another issue of typing.get_args is with Python 3.12 type _ = ... aliases, which are meant as a replacement for _: typing.TypeAlias = ..., and should therefore be treated equally:

```python
>>> import typing
>>> import optype as opt
>>> type StringLike = str | bytes
>>> typing.get_args(StringLike)
()
>>> opt.inspect.get_args(StringLike)
(<class 'str'>, <class 'bytes'>)
```

Clearly, typing.get_args fails miserably here; it would have been better if it had raised an error, but it instead returns an empty tuple, hiding the fact that it doesn't support the new type _ = ... aliases. Luckily, optype.inspect.get_args doesn't have this problem, and treats it just like it treats typing.TypeAlias (and so do the other optype.inspect functions).

get_protocol_members(_)

A better alternative to typing.get_protocol_members(), that

  • doesn't require Python 3.13 or above,
  • supports PEP 695 type _ alias types on Python 3.12 and above,
  • unpacks unions of typing.Literal ...
  • ... and flattens them if nested within another typing.Literal,
  • treats typing.Annotated[T] as T, and
  • raises a TypeError if the passed value isn't a type expression.

Returns a frozenset[str] with member names.

get_protocols(_)

Returns a frozenset[type] of the public protocols within the passed module. Pass private=True to also return the private protocols.

is_iterable(_)

Check whether the object can be iterated over, i.e. if it can be used in a for loop, without attempting to do so. If True is returned, then the object is an optype.typing.AnyIterable instance.

is_final(_)

Check if the type, method / classmethod / staticmethod / property, is decorated with @typing.final.

Note that a @property won't be recognized unless the @final decorator is placed below the @property decorator. See the function docstring for more information.

is_protocol(_)

A backport of typing.is_protocol, which was added in Python 3.13; a re-export of typing_extensions.is_protocol.

is_runtime_protocol(_)

Check if the type expression is a runtime-protocol, i.e. a typing.Protocol type, decorated with @typing.runtime_checkable (also supports typing_extensions).

is_union_type(_)

Check if the type is a typing.Union type, e.g. str | int.

Unlike isinstance(_, types.UnionType), this function also returns True for unions of user-defined Generic or Protocol types (because those are different union types for some reason).

is_generic_alias(_)

Check if the type is a subscripted type, e.g. list[str] or optype.CanNext[int], but not list, CanNext.

Unlike isinstance(_, types.GenericAlias), this function also returns True for user-defined Generic or Protocol types (because those use a different generic alias for some reason).

Even though technically T1 | T2 is represented as typing.Union[T1, T2] (which is a (special) generic alias), is_generic_alias returns False for such union types, because calling T1 | T2 a subscripted type just doesn't make much sense.

[!NOTE] All functions in optype.inspect also work for Python 3.12 type _ aliases (i.e. types.TypeAliasType) and with typing.Annotated.

optype.json

Type aliases for the json standard library:

| `Value` | `AnyValue` |
|---|---|
| `json.load(s)` return type | `json.dump(s)` input type |
| `Array[V: Value = Value]` | `AnyArray[V: AnyValue = AnyValue]` |
| `Object[V: Value = Value]` | `AnyObject[V: AnyValue = AnyValue]` |

The (Any)Value can be any json input, i.e. Value | Array | Object is equivalent to Value. It's also worth noting that Value is a subtype of AnyValue, which means that AnyValue | Value is equivalent to AnyValue.

optype.pickle

For the pickle standard library, optype.pickle provides the following interfaces:

| method(s) | signature (bound) | type |
|---|---|---|
| `__reduce__` | `() -> R` | `CanReduce[R: str \| tuple = ...]` |
| `__reduce_ex__` | `(CanIndex) -> R` | `CanReduceEx[R: str \| tuple = ...]` |
| `__getstate__` | `() -> S` | `CanGetstate[S]` |
| `__setstate__` | `(S) -> None` | `CanSetstate[S]` |
| `__getnewargs__`<br>`__new__` | `() -> tuple[V, ...]`<br>`(V) -> Self` | `CanGetnewargs[V]` |
| `__getnewargs_ex__`<br>`__new__` | `() -> tuple[tuple[V, ...], dict[str, KV]]`<br>`(*tuple[V, ...], **dict[str, KV]) -> Self` | `CanGetnewargsEx[V, KV]` |
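A stdlib-only sketch of the __getnewargs__ / __new__ pair (the class name Frozen is hypothetical; it matches the shape of CanGetnewargs[int], and pickle uses the returned tuple as the arguments to __new__ when unpickling):

```python
import pickle


class Frozen:
    """__getnewargs__ tells pickle how to call __new__ on unpickling."""

    def __new__(cls, value: int) -> "Frozen":
        self = super().__new__(cls)
        self.value = value
        return self

    def __getnewargs__(self) -> tuple[int]:
        return (self.value,)


f = pickle.loads(pickle.dumps(Frozen(42)))
print(f.value)  # 42
```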

optype.string

The string standard library contains practical constants, but it has two issues:

  • The constants contain a collection of characters, but are represented as a single string. This makes it practically impossible to type-hint the individual characters, so typeshed currently types these constants as a LiteralString.
  • The names of the constants are inconsistent, and don't follow PEP 8.

So instead, optype.string provides an alternative interface, that is compatible with string, but with slight differences:

  • For each constant, there is a corresponding Literal type alias for the individual characters. Its name matches the name of the constant, but is singular instead of plural.
  • Instead of a single string, optype.string uses a tuple of characters, so that each character has its own typing.Literal annotation. Note that this is only tested with (based)pyright / pylance, so it might not work with mypy (it has more bugs than it has lines of code).
  • The names of the constant are consistent with PEP 8, and use a postfix notation for variants, e.g. DIGITS_HEX instead of hexdigits.
  • Unlike string, optype.string has a constant (and type alias) for the binary digits '0' and '1': DIGITS_BIN (and DigitBin). After all, besides the oct and hex functions in builtins, there's also builtins.bin.
| `string._` constant | char type | `optype.string._` constant | char type |
|---|---|---|---|
| *missing* | | `DIGITS_BIN` | `DigitBin` |
| `octdigits` | `LiteralString` | `DIGITS_OCT` | `DigitOct` |
| `digits` | `LiteralString` | `DIGITS` | `Digit` |
| `hexdigits` | `LiteralString` | `DIGITS_HEX` | `DigitHex` |
| `ascii_letters` | `LiteralString` | `LETTERS` | `Letter` |
| `ascii_lowercase` | `LiteralString` | `LETTERS_LOWER` | `LetterLower` |
| `ascii_uppercase` | `LiteralString` | `LETTERS_UPPER` | `LetterUpper` |
| `punctuation` | `LiteralString` | `PUNCTUATION` | `Punctuation` |
| `whitespace` | `LiteralString` | `WHITESPACE` | `Whitespace` |
| `printable` | `LiteralString` | `PRINTABLE` | `Printable` |

Each of the optype.string constants is exactly the same as the corresponding string constant (after concatenation / splitting), e.g.

```python
>>> import string
>>> import optype as opt
>>> "".join(opt.string.PRINTABLE) == string.printable
True
>>> tuple(string.printable) == opt.string.PRINTABLE
True
```

Similarly, the values within a constant's Literal type exactly match the values of its constant:

```python
>>> import optype as opt
>>> from optype.inspect import get_args
>>> get_args(opt.string.Printable) == opt.string.PRINTABLE
True
```

The optype.inspect.get_args is a non-broken variant of typing.get_args that correctly flattens nested literals, type-unions, and PEP 695 type aliases, so that it matches the official typing specs. In other words; typing.get_args is yet another fundamentally broken python-typing feature that's useless in the situations where you need it most.

optype.typing

Any* type aliases

Type aliases for anything that can always be passed to int, float, complex, iter, or typing.Literal:

| Python constructor | `optype.typing` alias |
|---|---|
| `int(_)` | `AnyInt` |
| `float(_)` | `AnyFloat` |
| `complex(_)` | `AnyComplex` |
| `iter(_)` | `AnyIterable` |
| `typing.Literal[_]` | `AnyLiteral` |

[!NOTE] Even though some str and bytes can be converted to int, float, complex, most of them can't, and are therefore not included in these type aliases.

Empty* type aliases

These are builtin types or collections that are empty, i.e. have length 0 or yield no elements.

| instance | `optype.typing` type |
|---|---|
| `''` | `EmptyString` |
| `b''` | `EmptyBytes` |
| `()` | `EmptyTuple` |
| `[]` | `EmptyList` |
| `{}` | `EmptyDict` |
| `set()` | `EmptySet` |
| `(i for i in range(0))` | `EmptyIterable` |
Literal types
| Literal values | `optype.typing` type | Notes |
|---|---|---|
| `{False, True}` | `LiteralBool` | Similar to `typing.LiteralString`, but for `bool`. |
| `{0, 1, ..., 255}` | `LiteralByte` | Integers in the range 0-255, which make up `bytes` and `bytearray` objects. |
Just types

[!WARNING] This is experimental, and is likely to change in the future.

The JustInt type can be used to accept only instances of type int. Subtypes like bool will be rejected. This works with recent versions of mypy and pyright.

```python
import optype.typing as opt


def only_int_pls(x: opt.JustInt, /) -> None: ...


only_int_pls(42)    # accepted
only_int_pls(True)  # rejected
```

The Just type is a generic variant of JustInt. At the moment of writing, pyright doesn't support this yet, but it will soon (after the bundled typeshed is updated).

```python
import optype.typing as opt


class A: ...
class B(A): ...


def must_have_type_a(a: opt.Just[A]) -> None: ...


must_have_type_a(A())  # accepted
must_have_type_a(B())  # rejected (at least with mypy)
```

optype.dlpack

A collection of low-level types for working with DLPack.

Protocols

`CanDLPack[+T = int, +D: int = int]`, whose bound method is:

```python
def __dlpack__(
    *,
    stream: int | None = ...,
    max_version: tuple[int, int] | None = ...,
    dl_device: tuple[T, D] | None = ...,
    copy: bool | None = ...,
) -> types.CapsuleType: ...
```

`CanDLPackDevice[+T = int, +D: int = int]`, whose bound method is:

```python
def __dlpack_device__() -> tuple[T, D]: ...
```

The + prefix indicates that the type parameter is covariant.

Enums

There are also two convenient IntEnums in optype.dlpack: DLDeviceType for the device types, and DLDataTypeCode for the internal type-codes of the DLPack data types.

optype.numpy

Optype supports both NumPy 1 and 2. The current minimum supported version is 1.24, following NEP 29 and SPEC 0.

When using optype.numpy, it is recommended to install optype with the numpy extra, ensuring version compatibility:

```shell
pip install "optype[numpy]"
```

[!NOTE] For the remainder of the optype.numpy docs, assume that the following import aliases are available.

```python
from typing import Any, Literal

import numpy as np
import numpy.typing as npt
import optype.numpy as onp
```

For the sake of brevity and readability, the PEP 695 and PEP 696 type parameter syntax will be used, which is supported since Python 3.13.

Shape-typing with Array

Optype provides the generic onp.Array type alias for np.ndarray. It is similar to npt.NDArray, but includes two (optional) type parameters: one that matches the shape type (ND: tuple[int, ...]), and one that matches the scalar type (ST: np.generic).

When we put the definitions of npt.NDArray and onp.Array side by side, their differences become clear:

`numpy.typing.NDArray`:

```python
type NDArray[
    # no shape type
    ST: generic,  # no default
] = ndarray[Any, dtype[ST]]
```

`optype.numpy.Array`:

```python
type Array[
    ND: (int, ...) = (int, ...),
    ST: generic = generic,
] = ndarray[ND, dtype[ST]]
```

`optype.numpy.ArrayND`:

```python
type ArrayND[
    ST: generic = generic,
    ND: (int, ...) = (int, ...),
] = ndarray[ND, dtype[ST]]
```

Additionally, there are the four Array{0,1,2,3}D[ST: generic] aliases, which are equivalent to Array with tuple[()], tuple[int], tuple[int, int], and tuple[int, int, int] as shape-type, respectively.

[!TIP] Before NumPy 2.1, the shape type parameter of ndarray (i.e. the type of ndarray.shape) was invariant. It is therefore recommended to not use Literal within shape types on numpy<2.1. So with numpy>=2.1 you can use tuple[Literal[3], Literal[3]] without problem, but with numpy<2.1 you should use tuple[int, int] instead.

See numpy/numpy#25729 and numpy/numpy#26081 for details.

With onp.Array, it becomes possible to type the shape of arrays.

A shape is nothing more than a tuple of (non-negative) integers, i.e. an instance of tuple[int, ...], such as (42,), (480, 720, 3), or (). The length of a shape is often referred to as the number of dimensions, or the dimensionality, of the array or scalar. For arrays it is accessible through the np.ndarray.ndim attribute, which is an alias for len(np.ndarray.shape).
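As a plain-Python illustration of that relationship (no numpy required; the ndim helper name is made up for this example):

```python
def ndim(shape: tuple[int, ...]) -> int:
    """What np.ndarray.ndim reports: the length of .shape."""
    return len(shape)

print(ndim(()), ndim((42,)), ndim((480, 720, 3)))  # 0 1 3
```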

[!NOTE] Before NumPy 2, the maximum number of dimensions was 32; it has since been increased to 64 (i.e. ndim <= 64).

To make typing the shape of an array easier, optype provides two families of shape type aliases: AtLeast{N}D and AtMost{N}D. The {N} should be replaced by the number of dimensions, which currently is limited to 0, 1, 2, and 3.

Both of these families are generic, and their (optional) type parameters must be either int (the default) or a literal non-negative integer, i.e. typing.Literal[N] for some non-negative int N.

The names AtLeast{N}D and AtMost{N}D are pretty much self-explanatory:

  • AtLeast{N}D is a tuple[int, ...] with ndim >= N
  • AtMost{N}D is a tuple[int, ...] with ndim <= N

The shape aliases are roughly defined as:

type AtLeast0D[
    Ds: int = int,
] = tuple[Ds, ...]

type AtMost0D = tuple[()]

type AtLeast1D[
    D0: int = int,
    Ds: int = int,
] = tuple[D0, *tuple[Ds, ...]]

type AtMost1D[
    D0: int = int,
] = tuple[D0] | AtMost0D

type AtLeast2D[
    D0: int = int,
    D1: int = int,
    Ds: int = int,
] = tuple[D0, D1, *tuple[Ds, ...]]

type AtMost2D[
    D0: int = int,
    D1: int = int,
] = tuple[D0, D1] | AtMost1D[D0]

type AtLeast3D[
    D0: int = int,
    D1: int = int,
    D2: int = int,
    Ds: int = int,
] = tuple[D0, D1, D2, *tuple[Ds, ...]]

type AtMost3D[
    D0: int = int,
    D1: int = int,
    D2: int = int,
] = tuple[D0, D1, D2] | AtMost2D[D0, D1]
Array-likes

Similar to the numpy._typing._ArrayLike{}_co coercible array-like types, optype.numpy provides the optype.numpy.To{}ND aliases. Unlike the ones in numpy, these don't accept "bare" scalar types (the __len__ method is required). Additionally, there are the To{}1D, To{}2D, and To{}3D aliases for vector-likes, matrix-likes, and cuboid-likes, as well as the To{} aliases for "bare" scalar types.

| builtins / optype.typing | numpy | scalar-like (optype.numpy) | {1,2,3}-d array-like (optype.numpy) | *-d array-like (optype.numpy) |
|---|---|---|---|---|
| bool | bool_ | ToBool | ToBool[Strict]{1,2,3}D | ToBoolND |
| JustInt | integer | ToJustInt | ToJustInt[Strict]{1,2,3}D | ToJustIntND |
| int | integer \| bool_ | ToInt | ToInt[Strict]{1,2,3}D | ToIntND |
| float \| int | floating \| integer \| bool_ | ToFloat | ToFloat[Strict]{1,2,3}D | ToFloatND |
| complex \| float \| int | number \| bool_ | ToComplex | ToComplex[Strict]{1,2,3}D | ToComplexND |
| bytes \| str \| complex \| float \| int | generic | ToScalar | ToArray[Strict]{1,2,3}D | ToArrayND |

[!NOTE] The To*Strict{1,2,3}D aliases were added in optype 0.7.3.

These Strict variants require the input itself to be shape-typed. As a result, e.g. ToFloatStrict1D and ToFloatStrict2D are disjoint (non-overlapping), which makes them suitable for overloading array-likes of a particular dtype for different numbers of dimensions.

Source code: optype/numpy/_to.py

DType

In NumPy, a dtype (data type) object is an instance of the numpy.dtype[ST: np.generic] type. It's commonly used to convey the metadata of a scalar type, e.g. within arrays.

Because the type parameter of np.dtype isn't optional, it can be more convenient to use the alias optype.numpy.DType, which is defined as:

type DType[ST: np.generic = np.generic] = np.dtype[ST]

Apart from the "CamelCase" name, the only difference with np.dtype is that the type parameter can be omitted, in which case it's equivalent to np.dtype[np.generic], but shorter.

Scalar

The optype.numpy.Scalar interface is a generic runtime-checkable protocol that can be seen as a "more specific" np.generic, both in name and from a typing perspective.

Its type signature looks roughly like this:

type Scalar[
    # The "Python type", so that `Scalar.item() -> PT`.
    PT: object,
    # The "N-bits" type (without having to deal with `npt.NBitBase`).
    # It matches the `itemsize: NB` property.
    NB: int = int,
] = ...

It can be used as e.g.

are_birds_real: Scalar[bool, Literal[1]] = np.bool_(True)
the_answer: Scalar[int, Literal[2]] = np.uint16(42)
alpha: Scalar[float, Literal[8]] = np.float64(1 / 137)

[!NOTE] The second type argument for itemsize can be omitted, which is equivalent to setting it to int, so Scalar[PT] and Scalar[PT, int] are equivalent.
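Because Scalar is structural and runtime-checkable, its gist can be reproduced with the stdlib alone. The SupportsItem protocol and Fixed8 duck type below are hypothetical stand-ins, not part of optype:

```python
from typing import Protocol, TypeVar, runtime_checkable

PT_co = TypeVar("PT_co", covariant=True)

@runtime_checkable
class SupportsItem(Protocol[PT_co]):
    """Simplified stand-in for optype.numpy.Scalar: `.item()` plus `.itemsize`."""
    @property
    def itemsize(self) -> int: ...
    def item(self) -> PT_co: ...

class Fixed8:
    """A duck-typed 8-byte "scalar" that satisfies the protocol structurally."""
    itemsize = 8
    def __init__(self, value: float) -> None:
        self._value = value
    def item(self) -> float:
        return self._value

alpha = Fixed8(1 / 137)
print(isinstance(alpha, SupportsItem))  # True: structural check, no inheritance
```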

UFunc

A large portion of numpy's public API consists of universal functions, often denoted as ufuncs, which are (callable) instances of np.ufunc.

[!TIP] Custom ufuncs can be created using np.frompyfunc, but also through a user-defined class that implements the required attributes and methods (i.e., duck typing).

But np.ufunc has a big issue: it accepts no type parameters. This makes it very difficult to properly annotate its callable signature and its literal attributes (e.g. .nin and .identity).

This is where optype.numpy.UFunc comes into play: it's a runtime-checkable generic typing protocol that has been thoroughly type- and unit-tested to ensure compatibility with all of numpy's ufunc definitions. Its generic type signature looks roughly like:

type UFunc[
    # The type of the (bound) `__call__` method.
    Fn: CanCall = CanCall,
    # The types of the `nin` and `nout` (readonly) attributes.
    # Within numpy these match either `Literal[1]` or `Literal[2]`.
    Nin: int = int,
    Nout: int = int,
    # The type of the `signature` (readonly) attribute;
    # Must be `None` unless this is a generalized ufunc (gufunc), e.g.
    # `np.matmul`.
    Sig: str | None = str | None,
    # The type of the `identity` (readonly) attribute (used in `.reduce`).
    # Unless `Nin: Literal[2]`, `Nout: Literal[1]`, and `Sig: None`,
    # this should always be `None`.
    # Note that `complex` also includes `bool | int | float`.
    Id: complex | bytes | str | None = float | None,
] = ...

[!NOTE] Unfortunately, the extra callable methods of np.ufunc (at, reduce, reduceat, accumulate, and outer), are incorrectly annotated (as None attributes, even though at runtime they're methods that raise a ValueError when called). This currently makes it impossible to properly type these in optype.numpy.UFunc; doing so would make it incompatible with numpy's ufuncs.

Any*Array and Any*DType

The Any{Scalar}Array type aliases describe array-likes that are coercible to a numpy.ndarray with a specific dtype.

Unlike numpy.typing.ArrayLike, these optype.numpy aliases don't accept "bare" scalar types such as float and np.float64. However, zero-dimensional arrays like onp.Array[tuple[()], np.float64] are accepted. This is in line with the behavior of numpy.isscalar on numpy >= 2.

import numpy.typing as npt
import optype.numpy as onp

v_np: npt.ArrayLike = 3.14  # accepted
v_op: onp.AnyArray = 3.14  # rejected

sigma1_np: npt.ArrayLike = [[0, 1], [1, 0]]  # accepted
sigma1_op: onp.AnyArray = [[0, 1], [1, 0]]  # accepted

[!NOTE] The numpy.dtypes module has existed since NumPy 1.25, but its type annotations were incorrect before NumPy 2.1 (see numpy/numpy#27008).

See the docs for more info on the NumPy scalar type hierarchy.

Abstract types

| scalar (numpy) | scalar base (numpy) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|
| generic | | AnyArray | AnyDType |
| number | generic | AnyNumberArray | AnyNumberDType |
| integer | number | AnyIntegerArray | AnyIntegerDType |
| inexact | number | AnyInexactArray | AnyInexactDType |
| unsignedinteger | integer | AnyUnsignedIntegerArray | AnyUnsignedIntegerDType |
| signedinteger | integer | AnySignedIntegerArray | AnySignedIntegerDType |
| floating | inexact | AnyFloatingArray | AnyFloatingDType |
| complexfloating | inexact | AnyComplexFloatingArray | AnyComplexFloatingDType |
Unsigned integers

| scalar (numpy) | scalar base (numpy) | dtype (numpy.dtypes) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|---|
| uint8, ubyte | unsignedinteger | UInt8DType | AnyUInt8Array | AnyUInt8DType |
| uint16, ushort | unsignedinteger | UInt16DType | AnyUInt16Array | AnyUInt16DType |
| uint32² | unsignedinteger | UInt32DType | AnyUInt32Array | AnyUInt32DType |
| uint64 | unsignedinteger | UInt64DType | AnyUInt64Array | AnyUInt64DType |
| uintc² | unsignedinteger | UIntDType | AnyUIntCArray | AnyUIntCDType |
| uintp, uint_³ | unsignedinteger | | AnyUIntPArray | AnyUIntPDType |
| ulong⁴ | unsignedinteger | ULongDType | AnyULongArray | AnyULongDType |
| ulonglong | unsignedinteger | ULongLongDType | AnyULongLongArray | AnyULongLongDType |
Signed integers

| scalar (numpy) | scalar base (numpy) | dtype (numpy.dtypes) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|---|
| int8 | signedinteger | Int8DType | AnyInt8Array | AnyInt8DType |
| int16 | signedinteger | Int16DType | AnyInt16Array | AnyInt16DType |
| int32² | signedinteger | Int32DType | AnyInt32Array | AnyInt32DType |
| int64 | signedinteger | Int64DType | AnyInt64Array | AnyInt64DType |
| intc² | signedinteger | IntDType | AnyIntCArray | AnyIntCDType |
| intp, int_³ | signedinteger | | AnyIntPArray | AnyIntPDType |
| long⁴ | signedinteger | LongDType | AnyLongArray | AnyLongDType |
| longlong | signedinteger | LongLongDType | AnyLongLongArray | AnyLongLongDType |
Floats

| scalar (numpy) | scalar base (numpy) | dtype (numpy.dtypes) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|---|
| float16, half | floating | Float16DType | AnyFloat16Array | AnyFloat16DType |
| float32, single | floating | Float32DType | AnyFloat32Array | AnyFloat32DType |
| float64, double | floating & builtins.float | Float64DType | AnyFloat64Array | AnyFloat64DType |
| longdouble⁵ | floating | LongDoubleDType | AnyLongDoubleArray | AnyLongDoubleDType |
Complex numbers

| scalar (numpy) | scalar base (numpy) | dtype (numpy.dtypes) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|---|
| complex64, csingle | complexfloating | Complex64DType | AnyComplex64Array | AnyComplex64DType |
| complex128, cdouble | complexfloating & builtins.complex | Complex128DType | AnyComplex128Array | AnyComplex128DType |
| clongdouble⁶ | complexfloating | CLongDoubleDType | AnyCLongDoubleArray | AnyCLongDoubleDType |
"Flexible"

Scalar types with "flexible" length, whose values have a (constant) length that depends on the specific np.dtype instantiation.

| scalar (numpy) | scalar base (numpy) | dtype (numpy.dtypes) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|---|
| bytes_ | character | BytesDType | AnyBytesArray | AnyBytesDType |
| str_ | character | StrDType | AnyStrArray | AnyStrDType |
| void | flexible | VoidDType | AnyVoidArray | AnyVoidDType |
Other types

| scalar (numpy) | scalar base (numpy) | dtype (numpy.dtypes) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|---|
| bool_⁷ | generic | BoolDType | AnyBoolArray | AnyBoolDType |
| object_ | generic | ObjectDType | AnyObjectArray | AnyObjectDType |
| datetime64 | generic | DateTime64DType | AnyDateTime64Array | AnyDateTime64DType |
| timedelta64 | generic⁸ | TimeDelta64DType | AnyTimeDelta64Array | AnyTimeDelta64DType |
| ⁹ | | StringDType | AnyStringArray | AnyStringDType |
Low-level interfaces

Within optype.numpy there are several Can* (single-method) and Has* (single-attribute) protocols, related to the __array_*__ dunders of the NumPy Python API. These typing protocols are, just like the optype.Can* and optype.Has* ones, runtime-checkable and extensible (i.e. not @final).

[!TIP] All type parameters of these protocols can be omitted, which is equivalent to passing its upper type bound.
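For instance, passing a runtime check against one of the single-attribute Has* protocols only requires the attribute to exist. This stdlib-only sketch mimics the shape of optype.numpy.HasDType; the HasDTypeLike protocol and Tagged class are hypothetical stand-ins:

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class HasDTypeLike(Protocol):
    """Structural stand-in for optype.numpy.HasDType: has a `.dtype` attribute."""
    @property
    def dtype(self) -> Any: ...

class Tagged:
    # Any object carrying a `dtype` attribute passes the structural check.
    dtype = "float64"

print(isinstance(Tagged(), HasDTypeLike), isinstance(object(), HasDTypeLike))  # True False
```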

Each entry below lists the protocol's type signature, the __array_*__ method or attribute it implements, and the relevant NumPy documentation.
class CanArray[
    ND: tuple[int, ...] = ...,
    ST: np.generic = ...,
]: ...
def __array__[RT = ST](
    _,
    dtype: DType[RT] | None = ...,
) -> Array[ND, RT]

User Guide: Interoperability with NumPy

class CanArrayUFunc[
    U: UFunc = ...,
    R: object = ...,
]: ...
def __array_ufunc__(
    _,
    ufunc: U,
    method: LiteralString,
    *args: object,
    **kwargs: object,
) -> R: ...

NEP 13

class CanArrayFunction[
    F: CanCall[..., object] = ...,
    R = object,
]: ...
def __array_function__(
    _,
    func: F,
    types: CanIterSelf[type[CanArrayFunction]],
    args: tuple[object, ...],
    kwargs: Mapping[str, object],
) -> R: ...

NEP 18

class CanArrayFinalize[
    T: object = ...,
]: ...
def __array_finalize__(_, obj: T): ...

User Guide: Subclassing ndarray

class CanArrayWrap: ...
def __array_wrap__[ND, ST](
    _,
    array: Array[ND, ST],
    context: (...) | None = ...,
    return_scalar: bool = ...,
) -> Self | Array[ND, ST]

API: Standard array subclasses

class HasArrayInterface[
    V: Mapping[str, object] = ...,
]: ...
__array_interface__: V

API: The array interface protocol

class HasArrayPriority: ...
__array_priority__: float

API: Standard array subclasses

class HasDType[
    DT: DType = ...,
]: ...
dtype: DT

API: Specifying and constructing data types

Footnotes

  1. Since numpy>=2.2 the NDArray alias uses tuple[int, ...] as shape-type instead of Any.

  2. On unix-based platforms np.[u]intc are aliases for np.[u]int32.

  3. Since NumPy 2, np.uint and np.int_ are aliases for np.uintp and np.intp, respectively.

  4. On NumPy 1, np.uint and np.int_ correspond to what NumPy 2 now calls np.ulong and np.long, respectively.

  5. Depending on the platform, np.longdouble is (almost always) an alias for either float128, float96, or (sometimes) float64.

  6. Depending on the platform, np.clongdouble is (almost always) an alias for either complex256, complex192, or (sometimes) complex128.

  7. Since NumPy 2, np.bool is preferred over np.bool_, which only exists for backwards compatibility.

  8. At runtime np.timedelta64 is a subclass of np.signedinteger, but this is currently not reflected in the type annotations.

  9. The np.dtypes.StringDType has no associated numpy scalar type, and its .type attribute returns the builtins.str type instead. But from a typing perspective, such a np.dtype[builtins.str] isn't a valid type.
