Initial commit

Untriex Programming
2021-08-31 22:06:02 +02:00
commit 9b6723e11e
5142 changed files with 1455625 additions and 0 deletions


@@ -0,0 +1,384 @@
"""
============================
Typing (:mod:`numpy.typing`)
============================
.. warning::
Some of the types in this module rely on features only present in
the standard library in Python 3.8 and greater. If you want to use
these types in earlier versions of Python, you should install the
typing-extensions_ package.
Large parts of the NumPy API have PEP-484-style type annotations. In
addition, a number of type aliases are available to users, most prominently
the two below:
- `ArrayLike`: objects that can be converted to arrays
- `DTypeLike`: objects that can be converted to dtypes
.. _typing-extensions: https://pypi.org/project/typing-extensions/
Mypy plugin
-----------
A mypy_ plugin is distributed in `numpy.typing` for managing a number of
platform-specific annotations. Its function can be split into two parts:
* Assigning the (platform-dependent) precisions of certain `~numpy.number` subclasses,
including the likes of `~numpy.int_`, `~numpy.intp` and `~numpy.longlong`.
See the documentation on :ref:`scalar types <arrays.scalars.built-in>` for a
comprehensive overview of the affected classes. Without the plugin, the precision
of all relevant classes will be inferred as `~typing.Any`.
* Removing all extended-precision `~numpy.number` subclasses that are unavailable
for the platform in question. Most notably, this includes the likes of
`~numpy.float128` and `~numpy.complex256`. Without the plugin *all*
extended-precision types will, as far as mypy is concerned, be available
to all platforms.
To enable the plugin, one must add it to their mypy `configuration file`_:
.. code-block:: ini
[mypy]
plugins = numpy.typing.mypy_plugin
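As a rough illustration of its effect (the revealed precision below is merely
a sketch and assumes a platform where `~numpy.int_` is 64 bits wide):
.. code-block:: python
    >>> from typing import TYPE_CHECKING
    >>> import numpy as np
    >>> i = np.int_(0)
    >>> if TYPE_CHECKING:
    ...     reveal_type(i)
    ...     # without the plugin: numpy.signedinteger[Any]
    ...     # with the plugin:    numpy.signedinteger[numpy.typing._64Bit]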
.. _mypy: http://mypy-lang.org/
.. _configuration file: https://mypy.readthedocs.io/en/stable/config_file.html
Differences from the runtime NumPy API
--------------------------------------
NumPy is very flexible. Trying to describe the full range of
possibilities statically would result in types that are not very
helpful. For that reason, the typed NumPy API is often stricter than
the runtime NumPy API. This section describes some notable
differences.
ArrayLike
~~~~~~~~~
The `ArrayLike` type tries to avoid creating object arrays. For
example,
.. code-block:: python
>>> np.array(x**2 for x in range(10))
array(<generator object <genexpr> at ...>, dtype=object)
is valid NumPy code which will create a 0-dimensional object
array. Type checkers will, however, complain about the above example when
using the NumPy types. If you really intended to do the above, then
you can either use a ``# type: ignore`` comment:
.. code-block:: python
>>> np.array(x**2 for x in range(10)) # type: ignore
or explicitly type the array like object as `~typing.Any`:
.. code-block:: python
>>> from typing import Any
>>> array_like: Any = (x**2 for x in range(10))
>>> np.array(array_like)
array(<generator object <genexpr> at ...>, dtype=object)
ndarray
~~~~~~~
It's possible to mutate the dtype of an array at runtime. For example,
the following code is valid:
.. code-block:: python
>>> x = np.array([1, 2])
>>> x.dtype = np.bool_
This sort of mutation is not allowed by the types. Users who want to
write statically typed code should instead use the `numpy.ndarray.view`
method to create a view of the array with a different dtype.
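For example (a minimal sketch; the target dtype below is an arbitrary choice):
.. code-block:: python
    >>> x = np.array([1, 2])
    >>> y = x.view(np.uint8)  # a new, differently-typed view of the same data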
DTypeLike
~~~~~~~~~
The `DTypeLike` type tries to avoid creation of dtype objects using
a dictionary of fields, as shown below:
.. code-block:: python
>>> x = np.dtype({"field1": (float, 1), "field2": (int, 3)})
Although this is valid NumPy code, the type checker will complain about it,
since its usage is discouraged.
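A fields-based dtype can instead be constructed from, for example, a list of
``(name, dtype)`` tuples, a form that the annotations do accept:
.. code-block:: python
    >>> x = np.dtype([("field1", np.float64), ("field2", np.int64)])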
Please see :ref:`Data type objects <arrays.dtypes>` for more details.
Number precision
~~~~~~~~~~~~~~~~
The precision of `numpy.number` subclasses is treated as a covariant generic
parameter (see :class:`~NBitBase`), simplifying the annotating of processes
involving precision-based casting.
.. code-block:: python
>>> from typing import TypeVar
>>> import numpy as np
>>> import numpy.typing as npt
>>> T = TypeVar("T", bound=npt.NBitBase)
>>> def func(a: "np.floating[T]", b: "np.floating[T]") -> "np.floating[T]":
... ...
Consequently, the likes of `~numpy.float16`, `~numpy.float32` and
`~numpy.float64` are still sub-types of `~numpy.floating`, but, contrary to
runtime, they are not necessarily considered subclasses.
Timedelta64
~~~~~~~~~~~
The `~numpy.timedelta64` class is not considered a subclass of `~numpy.signedinteger`,
the former only inheriting from `~numpy.generic` during static type checking.
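A rough illustration of the difference:
.. code-block:: python
    >>> import numpy as np
    >>> # Runs fine at runtime, but is rejected during static type checking:
    >>> x: np.signedinteger = np.timedelta64(1, "D")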
0D arrays
~~~~~~~~~
At runtime NumPy aggressively casts any passed 0D arrays into their
corresponding `~numpy.generic` instance. Until the introduction of shape
typing (see :pep:`646`) it is unfortunately not possible to make the
necessary distinction between 0D and >0D arrays. While thus not strictly
correct, all operations that can potentially perform a 0D-array -> scalar
cast are currently annotated as exclusively returning an `ndarray`.
If it is known in advance that an operation *will* perform a
0D-array -> scalar cast, then one can consider manually remedying the
situation with either `typing.cast` or a ``# type: ignore`` comment.
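For example, a rough sketch of the `typing.cast` approach (assuming the cast
is known to be safe):
.. code-block:: python
    >>> from typing import cast
    >>> import numpy as np
    >>> out = cast(np.float64, np.array([1.0, 2.0]).sum())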
API
---
"""
# NOTE: The API section will be appended with additional entries
# further down in this file
from typing import TYPE_CHECKING, List, Any
if TYPE_CHECKING:
import sys
if sys.version_info >= (3, 8):
from typing import final
else:
from typing_extensions import final
else:
def final(f): return f
if not TYPE_CHECKING:
__all__ = ["ArrayLike", "DTypeLike", "NBitBase", "NDArray"]
else:
# Ensure that all objects within this module are accessible while
# static type checking. This includes private ones, as we need them
# for internal use.
#
# Declare to mypy that `__all__` is a list of strings without assigning
# an explicit value
__all__: List[str]
@final  # Disallow the creation of arbitrary `NBitBase` subclasses
class NBitBase:
"""
An object representing `numpy.number` precision during static type checking.
Used exclusively for the purpose of static type checking, `NBitBase`
represents the base of a hierarchical set of subclasses.
Each subsequent subclass is herein used for representing a lower level
of precision, *e.g.* ``64Bit > 32Bit > 16Bit``.
Examples
--------
Below is a typical usage example: `NBitBase` is herein used for annotating a
function that takes a float and integer of arbitrary precision as arguments
and returns a new float of whichever precision is largest
(*e.g.* ``np.float16 + np.int64 -> np.float64``).
.. code-block:: python
>>> from __future__ import annotations
>>> from typing import TypeVar, Union, TYPE_CHECKING
>>> import numpy as np
>>> import numpy.typing as npt
>>> T1 = TypeVar("T1", bound=npt.NBitBase)
>>> T2 = TypeVar("T2", bound=npt.NBitBase)
>>> def add(a: np.floating[T1], b: np.integer[T2]) -> np.floating[Union[T1, T2]]:
... return a + b
>>> a = np.float16()
>>> b = np.int64()
>>> out = add(a, b)
>>> if TYPE_CHECKING:
... reveal_locals()
... # note: Revealed local types are:
... # note: a: numpy.floating[numpy.typing._16Bit*]
... # note: b: numpy.signedinteger[numpy.typing._64Bit*]
... # note: out: numpy.floating[numpy.typing._64Bit*]
"""
def __init_subclass__(cls) -> None:
allowed_names = {
"NBitBase", "_256Bit", "_128Bit", "_96Bit", "_80Bit",
"_64Bit", "_32Bit", "_16Bit", "_8Bit",
}
if cls.__name__ not in allowed_names:
raise TypeError('cannot inherit from final class "NBitBase"')
super().__init_subclass__()
# Silence errors about subclassing a `@final`-decorated class
class _256Bit(NBitBase): ... # type: ignore[misc]
class _128Bit(_256Bit): ... # type: ignore[misc]
class _96Bit(_128Bit): ... # type: ignore[misc]
class _80Bit(_96Bit): ... # type: ignore[misc]
class _64Bit(_80Bit): ... # type: ignore[misc]
class _32Bit(_64Bit): ... # type: ignore[misc]
class _16Bit(_32Bit): ... # type: ignore[misc]
class _8Bit(_16Bit): ... # type: ignore[misc]
from ._nbit import (
_NBitByte,
_NBitShort,
_NBitIntC,
_NBitIntP,
_NBitInt,
_NBitLongLong,
_NBitHalf,
_NBitSingle,
_NBitDouble,
_NBitLongDouble,
)
from ._char_codes import (
_BoolCodes,
_UInt8Codes,
_UInt16Codes,
_UInt32Codes,
_UInt64Codes,
_Int8Codes,
_Int16Codes,
_Int32Codes,
_Int64Codes,
_Float16Codes,
_Float32Codes,
_Float64Codes,
_Complex64Codes,
_Complex128Codes,
_ByteCodes,
_ShortCodes,
_IntCCodes,
_IntPCodes,
_IntCodes,
_LongLongCodes,
_UByteCodes,
_UShortCodes,
_UIntCCodes,
_UIntPCodes,
_UIntCodes,
_ULongLongCodes,
_HalfCodes,
_SingleCodes,
_DoubleCodes,
_LongDoubleCodes,
_CSingleCodes,
_CDoubleCodes,
_CLongDoubleCodes,
_DT64Codes,
_TD64Codes,
_StrCodes,
_BytesCodes,
_VoidCodes,
_ObjectCodes,
)
from ._scalars import (
_CharLike_co,
_BoolLike_co,
_UIntLike_co,
_IntLike_co,
_FloatLike_co,
_ComplexLike_co,
_TD64Like_co,
_NumberLike_co,
_ScalarLike_co,
_VoidLike_co,
)
from ._shape import _Shape, _ShapeLike
from ._dtype_like import (
DTypeLike as DTypeLike,
_SupportsDType,
_VoidDTypeLike,
_DTypeLikeBool,
_DTypeLikeUInt,
_DTypeLikeInt,
_DTypeLikeFloat,
_DTypeLikeComplex,
_DTypeLikeTD64,
_DTypeLikeDT64,
_DTypeLikeObject,
_DTypeLikeVoid,
_DTypeLikeStr,
_DTypeLikeBytes,
_DTypeLikeComplex_co,
)
from ._array_like import (
ArrayLike as ArrayLike,
_ArrayLike,
_NestedSequence,
_RecursiveSequence,
_SupportsArray,
_ArrayLikeInt,
_ArrayLikeBool_co,
_ArrayLikeUInt_co,
_ArrayLikeInt_co,
_ArrayLikeFloat_co,
_ArrayLikeComplex_co,
_ArrayLikeNumber_co,
_ArrayLikeTD64_co,
_ArrayLikeDT64_co,
_ArrayLikeObject_co,
_ArrayLikeVoid_co,
_ArrayLikeStr_co,
_ArrayLikeBytes_co,
)
from ._generic_alias import (
NDArray as NDArray,
_GenericAlias,
)
if TYPE_CHECKING:
from ._ufunc import (
_UFunc_Nin1_Nout1,
_UFunc_Nin2_Nout1,
_UFunc_Nin1_Nout2,
_UFunc_Nin2_Nout2,
_GUFunc_Nin2_Nout1,
)
else:
_UFunc_Nin1_Nout1 = Any
_UFunc_Nin2_Nout1 = Any
_UFunc_Nin1_Nout2 = Any
_UFunc_Nin2_Nout2 = Any
_GUFunc_Nin2_Nout1 = Any
# Clean up the namespace
del TYPE_CHECKING, final, List, Any
if __doc__ is not None:
from ._add_docstring import _docstrings
__doc__ += _docstrings
__doc__ += '\n.. autoclass:: numpy.typing.NBitBase\n'
del _docstrings
from numpy._pytesttester import PytestTester
test = PytestTester(__name__)
del PytestTester


@@ -0,0 +1,143 @@
"""A module for creating docstrings for sphinx ``data`` domains."""
import re
import textwrap
from ._generic_alias import NDArray
_docstrings_list = []
def add_newdoc(name: str, value: str, doc: str) -> None:
"""Append ``_docstrings_list`` with a docstring for `name`.
Parameters
----------
name : str
The name of the object.
value : str
A string-representation of the object.
doc : str
The docstring of the object.
"""
_docstrings_list.append((name, value, doc))
def _parse_docstrings() -> str:
"""Convert all docstrings in ``_docstrings_list`` into a single
sphinx-legible text block.
"""
type_list_ret = []
for name, value, doc in _docstrings_list:
s = textwrap.dedent(doc).replace("\n", "\n ")
# Replace sections by rubrics
lines = s.split("\n")
new_lines = []
indent = ""
for line in lines:
m = re.match(r'^(\s+)[-=]+\s*$', line)
if m and new_lines:
prev = textwrap.dedent(new_lines.pop())
if prev == "Examples":
indent = ""
new_lines.append(f'{m.group(1)}.. rubric:: {prev}')
else:
indent = 4 * " "
new_lines.append(f'{m.group(1)}.. admonition:: {prev}')
new_lines.append("")
else:
new_lines.append(f"{indent}{line}")
s = "\n".join(new_lines)
# Done.
type_list_ret.append(f""".. data:: {name}\n :value: {value}\n {s}""")
return "\n".join(type_list_ret)
add_newdoc('ArrayLike', 'typing.Union[...]',
"""
A `~typing.Union` representing objects that can be coerced into an `~numpy.ndarray`.
Among others this includes the likes of:
* Scalars.
* (Nested) sequences.
* Objects implementing the `~class.__array__` protocol.
See Also
--------
:term:`array_like`:
Any scalar or sequence that can be interpreted as an ndarray.
Examples
--------
.. code-block:: python
>>> import numpy as np
>>> import numpy.typing as npt
>>> def as_array(a: npt.ArrayLike) -> np.ndarray:
... return np.array(a)
""")
add_newdoc('DTypeLike', 'typing.Union[...]',
"""
A `~typing.Union` representing objects that can be coerced into a `~numpy.dtype`.
Among others this includes the likes of:
* :class:`type` objects.
* Character codes or the names of :class:`type` objects.
* Objects with the ``.dtype`` attribute.
See Also
--------
:ref:`Specifying and constructing data types <arrays.dtypes.constructing>`
A comprehensive overview of all objects that can be coerced into data types.
Examples
--------
.. code-block:: python
>>> import numpy as np
>>> import numpy.typing as npt
>>> def as_dtype(d: npt.DTypeLike) -> np.dtype:
... return np.dtype(d)
""")
add_newdoc('NDArray', repr(NDArray),
"""
A :term:`generic <generic type>` version of
`np.ndarray[Any, np.dtype[+ScalarType]] <numpy.ndarray>`.
Can be used during runtime for typing arrays with a given dtype
and unspecified shape.
Examples
--------
.. code-block:: python
>>> import numpy as np
>>> import numpy.typing as npt
>>> print(npt.NDArray)
numpy.ndarray[typing.Any, numpy.dtype[+ScalarType]]
>>> print(npt.NDArray[np.float64])
numpy.ndarray[typing.Any, numpy.dtype[numpy.float64]]
>>> NDArrayInt = npt.NDArray[np.int_]
>>> a: NDArrayInt = np.arange(10)
>>> from typing import Any
>>> def func(a: npt.ArrayLike) -> npt.NDArray[Any]:
... return np.array(a)
""")
_docstrings = _parse_docstrings()


@@ -0,0 +1,140 @@
from __future__ import annotations
import sys
from typing import (
Any,
overload,
Sequence,
TYPE_CHECKING,
Union,
TypeVar,
Generic,
)
from numpy import (
ndarray,
dtype,
generic,
bool_,
unsignedinteger,
integer,
floating,
complexfloating,
number,
timedelta64,
datetime64,
object_,
void,
str_,
bytes_,
)
from ._dtype_like import DTypeLike
if sys.version_info >= (3, 8):
from typing import Protocol
HAVE_PROTOCOL = True
else:
try:
from typing_extensions import Protocol
except ImportError:
HAVE_PROTOCOL = False
else:
HAVE_PROTOCOL = True
_T = TypeVar("_T")
_ScalarType = TypeVar("_ScalarType", bound=generic)
_DType = TypeVar("_DType", bound="dtype[Any]")
_DType_co = TypeVar("_DType_co", covariant=True, bound="dtype[Any]")
if TYPE_CHECKING or HAVE_PROTOCOL:
# The `_SupportsArray` protocol only cares about the default dtype
# (i.e. `dtype=None` or no `dtype` parameter at all) of the to-be returned
# array.
# Concrete implementations of the protocol are responsible for adding
# any and all remaining overloads
class _SupportsArray(Protocol[_DType_co]):
def __array__(self) -> ndarray[Any, _DType_co]: ...
else:
class _SupportsArray(Generic[_DType_co]):
pass
# TODO: Wait for support for recursive types
_NestedSequence = Union[
_T,
Sequence[_T],
Sequence[Sequence[_T]],
Sequence[Sequence[Sequence[_T]]],
Sequence[Sequence[Sequence[Sequence[_T]]]],
]
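# NOTE (illustrative): `_NestedSequence[int]` thus matches, e.g., `1`, `[1]`
# and `[[1], [2]]`; sequences nested more than four levels deep are instead
# caught by the broader `_RecursiveSequence` below.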
_RecursiveSequence = Sequence[Sequence[Sequence[Sequence[Sequence[Any]]]]]
# A union representing array-like objects; consists of two typevars:
# One representing types that can be parametrized w.r.t. `np.dtype`
# and another one for the rest
_ArrayLike = Union[
_NestedSequence[_SupportsArray[_DType]],
_NestedSequence[_T],
]
# TODO: support buffer protocols once
#
# https://bugs.python.org/issue27501
#
# is resolved. See also the mypy issue:
#
# https://github.com/python/typing/issues/593
ArrayLike = Union[
_RecursiveSequence,
_ArrayLike[
dtype,
Union[bool, int, float, complex, str, bytes]
],
]
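# For example (illustrative), each of the following is accepted as `ArrayLike`:
#   1.0                 # a scalar
#   [[1, 2], [3, 4]]    # a (nested) sequence
#   np.arange(10)       # an object implementing `__array__`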
# `ArrayLike<X>_co`: array-like objects that can be coerced into `X`
# given the casting rules `same_kind`
_ArrayLikeBool_co = _ArrayLike[
"dtype[bool_]",
bool,
]
_ArrayLikeUInt_co = _ArrayLike[
"dtype[Union[bool_, unsignedinteger[Any]]]",
bool,
]
_ArrayLikeInt_co = _ArrayLike[
"dtype[Union[bool_, integer[Any]]]",
Union[bool, int],
]
_ArrayLikeFloat_co = _ArrayLike[
"dtype[Union[bool_, integer[Any], floating[Any]]]",
Union[bool, int, float],
]
_ArrayLikeComplex_co = _ArrayLike[
"dtype[Union[bool_, integer[Any], floating[Any], complexfloating[Any, Any]]]",
Union[bool, int, float, complex],
]
_ArrayLikeNumber_co = _ArrayLike[
"dtype[Union[bool_, number[Any]]]",
Union[bool, int, float, complex],
]
_ArrayLikeTD64_co = _ArrayLike[
"dtype[Union[bool_, integer[Any], timedelta64]]",
Union[bool, int],
]
_ArrayLikeDT64_co = _NestedSequence[_SupportsArray["dtype[datetime64]"]]
_ArrayLikeObject_co = _NestedSequence[_SupportsArray["dtype[object_]"]]
_ArrayLikeVoid_co = _NestedSequence[_SupportsArray["dtype[void]"]]
_ArrayLikeStr_co = _ArrayLike[
"dtype[str_]",
str,
]
_ArrayLikeBytes_co = _ArrayLike[
"dtype[bytes_]",
bytes,
]
_ArrayLikeInt = _ArrayLike[
"dtype[integer[Any]]",
int,
]


@@ -0,0 +1,364 @@
"""
A module with various ``typing.Protocol`` subclasses that implement
the ``__call__`` magic method.
See the `Mypy documentation`_ on protocols for more details.
.. _`Mypy documentation`: https://mypy.readthedocs.io/en/stable/protocols.html#callback-protocols
"""
from __future__ import annotations
import sys
from typing import (
Union,
TypeVar,
overload,
Any,
Tuple,
NoReturn,
TYPE_CHECKING,
)
from numpy import (
ndarray,
dtype,
generic,
bool_,
timedelta64,
number,
integer,
unsignedinteger,
signedinteger,
int8,
int_,
floating,
float64,
complexfloating,
complex128,
)
from ._nbit import _NBitInt, _NBitDouble
from ._scalars import (
_BoolLike_co,
_IntLike_co,
_FloatLike_co,
_ComplexLike_co,
_NumberLike_co,
)
from . import NBitBase
from ._array_like import ArrayLike
from ._generic_alias import NDArray
if sys.version_info >= (3, 8):
from typing import Protocol
HAVE_PROTOCOL = True
else:
try:
from typing_extensions import Protocol
except ImportError:
HAVE_PROTOCOL = False
else:
HAVE_PROTOCOL = True
if TYPE_CHECKING or HAVE_PROTOCOL:
_T1 = TypeVar("_T1")
_T2 = TypeVar("_T2")
_2Tuple = Tuple[_T1, _T1]
_NBit1 = TypeVar("_NBit1", bound=NBitBase)
_NBit2 = TypeVar("_NBit2", bound=NBitBase)
_IntType = TypeVar("_IntType", bound=integer)
_FloatType = TypeVar("_FloatType", bound=floating)
_NumberType = TypeVar("_NumberType", bound=number)
_NumberType_co = TypeVar("_NumberType_co", covariant=True, bound=number)
_GenericType_co = TypeVar("_GenericType_co", covariant=True, bound=generic)
class _BoolOp(Protocol[_GenericType_co]):
@overload
def __call__(self, __other: _BoolLike_co) -> _GenericType_co: ...
@overload # platform dependent
def __call__(self, __other: int) -> int_: ...
@overload
def __call__(self, __other: float) -> float64: ...
@overload
def __call__(self, __other: complex) -> complex128: ...
@overload
def __call__(self, __other: _NumberType) -> _NumberType: ...
class _BoolBitOp(Protocol[_GenericType_co]):
@overload
def __call__(self, __other: _BoolLike_co) -> _GenericType_co: ...
@overload # platform dependent
def __call__(self, __other: int) -> int_: ...
@overload
def __call__(self, __other: _IntType) -> _IntType: ...
class _BoolSub(Protocol):
# Note that `__other: bool_` is absent here
@overload
def __call__(self, __other: bool) -> NoReturn: ...
@overload # platform dependent
def __call__(self, __other: int) -> int_: ...
@overload
def __call__(self, __other: float) -> float64: ...
@overload
def __call__(self, __other: complex) -> complex128: ...
@overload
def __call__(self, __other: _NumberType) -> _NumberType: ...
class _BoolTrueDiv(Protocol):
@overload
def __call__(self, __other: Union[float, _IntLike_co]) -> float64: ...
@overload
def __call__(self, __other: complex) -> complex128: ...
@overload
def __call__(self, __other: _NumberType) -> _NumberType: ...
class _BoolMod(Protocol):
@overload
def __call__(self, __other: _BoolLike_co) -> int8: ...
@overload # platform dependent
def __call__(self, __other: int) -> int_: ...
@overload
def __call__(self, __other: float) -> float64: ...
@overload
def __call__(self, __other: _IntType) -> _IntType: ...
@overload
def __call__(self, __other: _FloatType) -> _FloatType: ...
class _BoolDivMod(Protocol):
@overload
def __call__(self, __other: _BoolLike_co) -> _2Tuple[int8]: ...
@overload # platform dependent
def __call__(self, __other: int) -> _2Tuple[int_]: ...
@overload
def __call__(self, __other: float) -> _2Tuple[float64]: ...
@overload
def __call__(self, __other: _IntType) -> _2Tuple[_IntType]: ...
@overload
def __call__(self, __other: _FloatType) -> _2Tuple[_FloatType]: ...
class _TD64Div(Protocol[_NumberType_co]):
@overload
def __call__(self, __other: timedelta64) -> _NumberType_co: ...
@overload
def __call__(self, __other: _BoolLike_co) -> NoReturn: ...
@overload
def __call__(self, __other: _FloatLike_co) -> timedelta64: ...
class _IntTrueDiv(Protocol[_NBit1]):
@overload
def __call__(self, __other: bool) -> floating[_NBit1]: ...
@overload
def __call__(self, __other: int) -> floating[Union[_NBit1, _NBitInt]]: ...
@overload
def __call__(self, __other: float) -> floating[Union[_NBit1, _NBitDouble]]: ...
@overload
def __call__(
self, __other: complex
) -> complexfloating[Union[_NBit1, _NBitDouble], Union[_NBit1, _NBitDouble]]: ...
@overload
def __call__(self, __other: integer[_NBit2]) -> floating[Union[_NBit1, _NBit2]]: ...
class _UnsignedIntOp(Protocol[_NBit1]):
# NOTE: `uint64 + signedinteger -> float64`
@overload
def __call__(self, __other: bool) -> unsignedinteger[_NBit1]: ...
@overload
def __call__(
self, __other: Union[int, signedinteger[Any]]
) -> Any: ...
@overload
def __call__(self, __other: float) -> floating[Union[_NBit1, _NBitDouble]]: ...
@overload
def __call__(
self, __other: complex
) -> complexfloating[Union[_NBit1, _NBitDouble], Union[_NBit1, _NBitDouble]]: ...
@overload
def __call__(
self, __other: unsignedinteger[_NBit2]
) -> unsignedinteger[Union[_NBit1, _NBit2]]: ...
class _UnsignedIntBitOp(Protocol[_NBit1]):
@overload
def __call__(self, __other: bool) -> unsignedinteger[_NBit1]: ...
@overload
def __call__(self, __other: int) -> signedinteger[Any]: ...
@overload
def __call__(self, __other: signedinteger[Any]) -> signedinteger[Any]: ...
@overload
def __call__(
self, __other: unsignedinteger[_NBit2]
) -> unsignedinteger[Union[_NBit1, _NBit2]]: ...
class _UnsignedIntMod(Protocol[_NBit1]):
@overload
def __call__(self, __other: bool) -> unsignedinteger[_NBit1]: ...
@overload
def __call__(
self, __other: Union[int, signedinteger[Any]]
) -> Any: ...
@overload
def __call__(self, __other: float) -> floating[Union[_NBit1, _NBitDouble]]: ...
@overload
def __call__(
self, __other: unsignedinteger[_NBit2]
) -> unsignedinteger[Union[_NBit1, _NBit2]]: ...
class _UnsignedIntDivMod(Protocol[_NBit1]):
@overload
def __call__(self, __other: bool) -> _2Tuple[signedinteger[_NBit1]]: ...
@overload
def __call__(
self, __other: Union[int, signedinteger[Any]]
) -> _2Tuple[Any]: ...
@overload
def __call__(self, __other: float) -> _2Tuple[floating[Union[_NBit1, _NBitDouble]]]: ...
@overload
def __call__(
self, __other: unsignedinteger[_NBit2]
) -> _2Tuple[unsignedinteger[Union[_NBit1, _NBit2]]]: ...
class _SignedIntOp(Protocol[_NBit1]):
@overload
def __call__(self, __other: bool) -> signedinteger[_NBit1]: ...
@overload
def __call__(self, __other: int) -> signedinteger[Union[_NBit1, _NBitInt]]: ...
@overload
def __call__(self, __other: float) -> floating[Union[_NBit1, _NBitDouble]]: ...
@overload
def __call__(
self, __other: complex
) -> complexfloating[Union[_NBit1, _NBitDouble], Union[_NBit1, _NBitDouble]]: ...
@overload
def __call__(
self, __other: signedinteger[_NBit2]
) -> signedinteger[Union[_NBit1, _NBit2]]: ...
class _SignedIntBitOp(Protocol[_NBit1]):
@overload
def __call__(self, __other: bool) -> signedinteger[_NBit1]: ...
@overload
def __call__(self, __other: int) -> signedinteger[Union[_NBit1, _NBitInt]]: ...
@overload
def __call__(
self, __other: signedinteger[_NBit2]
) -> signedinteger[Union[_NBit1, _NBit2]]: ...
class _SignedIntMod(Protocol[_NBit1]):
@overload
def __call__(self, __other: bool) -> signedinteger[_NBit1]: ...
@overload
def __call__(self, __other: int) -> signedinteger[Union[_NBit1, _NBitInt]]: ...
@overload
def __call__(self, __other: float) -> floating[Union[_NBit1, _NBitDouble]]: ...
@overload
def __call__(
self, __other: signedinteger[_NBit2]
) -> signedinteger[Union[_NBit1, _NBit2]]: ...
class _SignedIntDivMod(Protocol[_NBit1]):
@overload
def __call__(self, __other: bool) -> _2Tuple[signedinteger[_NBit1]]: ...
@overload
def __call__(self, __other: int) -> _2Tuple[signedinteger[Union[_NBit1, _NBitInt]]]: ...
@overload
def __call__(self, __other: float) -> _2Tuple[floating[Union[_NBit1, _NBitDouble]]]: ...
@overload
def __call__(
self, __other: signedinteger[_NBit2]
) -> _2Tuple[signedinteger[Union[_NBit1, _NBit2]]]: ...
class _FloatOp(Protocol[_NBit1]):
@overload
def __call__(self, __other: bool) -> floating[_NBit1]: ...
@overload
def __call__(self, __other: int) -> floating[Union[_NBit1, _NBitInt]]: ...
@overload
def __call__(self, __other: float) -> floating[Union[_NBit1, _NBitDouble]]: ...
@overload
def __call__(
self, __other: complex
) -> complexfloating[Union[_NBit1, _NBitDouble], Union[_NBit1, _NBitDouble]]: ...
@overload
def __call__(
self, __other: Union[integer[_NBit2], floating[_NBit2]]
) -> floating[Union[_NBit1, _NBit2]]: ...
class _FloatMod(Protocol[_NBit1]):
@overload
def __call__(self, __other: bool) -> floating[_NBit1]: ...
@overload
def __call__(self, __other: int) -> floating[Union[_NBit1, _NBitInt]]: ...
@overload
def __call__(self, __other: float) -> floating[Union[_NBit1, _NBitDouble]]: ...
@overload
def __call__(
self, __other: Union[integer[_NBit2], floating[_NBit2]]
) -> floating[Union[_NBit1, _NBit2]]: ...
class _FloatDivMod(Protocol[_NBit1]):
@overload
def __call__(self, __other: bool) -> _2Tuple[floating[_NBit1]]: ...
@overload
def __call__(self, __other: int) -> _2Tuple[floating[Union[_NBit1, _NBitInt]]]: ...
@overload
def __call__(self, __other: float) -> _2Tuple[floating[Union[_NBit1, _NBitDouble]]]: ...
@overload
def __call__(
self, __other: Union[integer[_NBit2], floating[_NBit2]]
) -> _2Tuple[floating[Union[_NBit1, _NBit2]]]: ...
class _ComplexOp(Protocol[_NBit1]):
@overload
def __call__(self, __other: bool) -> complexfloating[_NBit1, _NBit1]: ...
@overload
def __call__(self, __other: int) -> complexfloating[Union[_NBit1, _NBitInt], Union[_NBit1, _NBitInt]]: ...
@overload
def __call__(
self, __other: Union[float, complex]
) -> complexfloating[Union[_NBit1, _NBitDouble], Union[_NBit1, _NBitDouble]]: ...
@overload
def __call__(
self,
__other: Union[
integer[_NBit2],
floating[_NBit2],
complexfloating[_NBit2, _NBit2],
]
) -> complexfloating[Union[_NBit1, _NBit2], Union[_NBit1, _NBit2]]: ...
class _NumberOp(Protocol):
def __call__(self, __other: _NumberLike_co) -> Any: ...
class _ComparisonOp(Protocol[_T1, _T2]):
@overload
def __call__(self, __other: _T1) -> bool_: ...
@overload
def __call__(self, __other: _T2) -> NDArray[bool_]: ...
else:
_BoolOp = Any
_BoolBitOp = Any
_BoolSub = Any
_BoolTrueDiv = Any
_BoolMod = Any
_BoolDivMod = Any
_TD64Div = Any
_IntTrueDiv = Any
_UnsignedIntOp = Any
_UnsignedIntBitOp = Any
_UnsignedIntMod = Any
_UnsignedIntDivMod = Any
_SignedIntOp = Any
_SignedIntBitOp = Any
_SignedIntMod = Any
_SignedIntDivMod = Any
_FloatOp = Any
_FloatMod = Any
_FloatDivMod = Any
_ComplexOp = Any
_NumberOp = Any
_ComparisonOp = Any


@@ -0,0 +1,175 @@
import sys
from typing import Any, TYPE_CHECKING
if sys.version_info >= (3, 8):
from typing import Literal
HAVE_LITERAL = True
else:
try:
from typing_extensions import Literal
except ImportError:
HAVE_LITERAL = False
else:
HAVE_LITERAL = True
if TYPE_CHECKING or HAVE_LITERAL:
_BoolCodes = Literal["?", "=?", "<?", ">?", "bool", "bool_", "bool8"]
_UInt8Codes = Literal["uint8", "u1", "=u1", "<u1", ">u1"]
_UInt16Codes = Literal["uint16", "u2", "=u2", "<u2", ">u2"]
_UInt32Codes = Literal["uint32", "u4", "=u4", "<u4", ">u4"]
_UInt64Codes = Literal["uint64", "u8", "=u8", "<u8", ">u8"]
_Int8Codes = Literal["int8", "i1", "=i1", "<i1", ">i1"]
_Int16Codes = Literal["int16", "i2", "=i2", "<i2", ">i2"]
_Int32Codes = Literal["int32", "i4", "=i4", "<i4", ">i4"]
_Int64Codes = Literal["int64", "i8", "=i8", "<i8", ">i8"]
_Float16Codes = Literal["float16", "f2", "=f2", "<f2", ">f2"]
_Float32Codes = Literal["float32", "f4", "=f4", "<f4", ">f4"]
_Float64Codes = Literal["float64", "f8", "=f8", "<f8", ">f8"]
_Complex64Codes = Literal["complex64", "c8", "=c8", "<c8", ">c8"]
_Complex128Codes = Literal["complex128", "c16", "=c16", "<c16", ">c16"]
_ByteCodes = Literal["byte", "b", "=b", "<b", ">b"]
_ShortCodes = Literal["short", "h", "=h", "<h", ">h"]
_IntCCodes = Literal["intc", "i", "=i", "<i", ">i"]
_IntPCodes = Literal["intp", "int0", "p", "=p", "<p", ">p"]
_IntCodes = Literal["long", "int", "int_", "l", "=l", "<l", ">l"]
_LongLongCodes = Literal["longlong", "q", "=q", "<q", ">q"]
_UByteCodes = Literal["ubyte", "B", "=B", "<B", ">B"]
_UShortCodes = Literal["ushort", "H", "=H", "<H", ">H"]
_UIntCCodes = Literal["uintc", "I", "=I", "<I", ">I"]
_UIntPCodes = Literal["uintp", "uint0", "P", "=P", "<P", ">P"]
_UIntCodes = Literal["uint", "L", "=L", "<L", ">L"]
_ULongLongCodes = Literal["ulonglong", "Q", "=Q", "<Q", ">Q"]
_HalfCodes = Literal["half", "e", "=e", "<e", ">e"]
_SingleCodes = Literal["single", "f", "=f", "<f", ">f"]
_DoubleCodes = Literal["double", "float", "float_", "d", "=d", "<d", ">d"]
_LongDoubleCodes = Literal["longdouble", "longfloat", "g", "=g", "<g", ">g"]
_CSingleCodes = Literal["csingle", "singlecomplex", "F", "=F", "<F", ">F"]
_CDoubleCodes = Literal["cdouble", "complex", "complex_", "cfloat", "D", "=D", "<D", ">D"]
_CLongDoubleCodes = Literal["clongdouble", "clongfloat", "longcomplex", "G", "=G", "<G", ">G"]
_StrCodes = Literal["str", "str_", "str0", "unicode", "unicode_", "U", "=U", "<U", ">U"]
_BytesCodes = Literal["bytes", "bytes_", "bytes0", "S", "=S", "<S", ">S"]
_VoidCodes = Literal["void", "void0", "V", "=V", "<V", ">V"]
_ObjectCodes = Literal["object", "object_", "O", "=O", "<O", ">O"]
_DT64Codes = Literal[
"datetime64", "=datetime64", "<datetime64", ">datetime64",
"datetime64[Y]", "=datetime64[Y]", "<datetime64[Y]", ">datetime64[Y]",
"datetime64[M]", "=datetime64[M]", "<datetime64[M]", ">datetime64[M]",
"datetime64[W]", "=datetime64[W]", "<datetime64[W]", ">datetime64[W]",
"datetime64[D]", "=datetime64[D]", "<datetime64[D]", ">datetime64[D]",
"datetime64[h]", "=datetime64[h]", "<datetime64[h]", ">datetime64[h]",
"datetime64[m]", "=datetime64[m]", "<datetime64[m]", ">datetime64[m]",
"datetime64[s]", "=datetime64[s]", "<datetime64[s]", ">datetime64[s]",
"datetime64[ms]", "=datetime64[ms]", "<datetime64[ms]", ">datetime64[ms]",
"datetime64[us]", "=datetime64[us]", "<datetime64[us]", ">datetime64[us]",
"datetime64[ns]", "=datetime64[ns]", "<datetime64[ns]", ">datetime64[ns]",
"datetime64[ps]", "=datetime64[ps]", "<datetime64[ps]", ">datetime64[ps]",
"datetime64[fs]", "=datetime64[fs]", "<datetime64[fs]", ">datetime64[fs]",
"datetime64[as]", "=datetime64[as]", "<datetime64[as]", ">datetime64[as]",
"M", "=M", "<M", ">M",
"M8", "=M8", "<M8", ">M8",
"M8[Y]", "=M8[Y]", "<M8[Y]", ">M8[Y]",
"M8[M]", "=M8[M]", "<M8[M]", ">M8[M]",
"M8[W]", "=M8[W]", "<M8[W]", ">M8[W]",
"M8[D]", "=M8[D]", "<M8[D]", ">M8[D]",
"M8[h]", "=M8[h]", "<M8[h]", ">M8[h]",
"M8[m]", "=M8[m]", "<M8[m]", ">M8[m]",
"M8[s]", "=M8[s]", "<M8[s]", ">M8[s]",
"M8[ms]", "=M8[ms]", "<M8[ms]", ">M8[ms]",
"M8[us]", "=M8[us]", "<M8[us]", ">M8[us]",
"M8[ns]", "=M8[ns]", "<M8[ns]", ">M8[ns]",
"M8[ps]", "=M8[ps]", "<M8[ps]", ">M8[ps]",
"M8[fs]", "=M8[fs]", "<M8[fs]", ">M8[fs]",
"M8[as]", "=M8[as]", "<M8[as]", ">M8[as]",
]
_TD64Codes = Literal[
"timedelta64", "=timedelta64", "<timedelta64", ">timedelta64",
"timedelta64[Y]", "=timedelta64[Y]", "<timedelta64[Y]", ">timedelta64[Y]",
"timedelta64[M]", "=timedelta64[M]", "<timedelta64[M]", ">timedelta64[M]",
"timedelta64[W]", "=timedelta64[W]", "<timedelta64[W]", ">timedelta64[W]",
"timedelta64[D]", "=timedelta64[D]", "<timedelta64[D]", ">timedelta64[D]",
"timedelta64[h]", "=timedelta64[h]", "<timedelta64[h]", ">timedelta64[h]",
"timedelta64[m]", "=timedelta64[m]", "<timedelta64[m]", ">timedelta64[m]",
"timedelta64[s]", "=timedelta64[s]", "<timedelta64[s]", ">timedelta64[s]",
"timedelta64[ms]", "=timedelta64[ms]", "<timedelta64[ms]", ">timedelta64[ms]",
"timedelta64[us]", "=timedelta64[us]", "<timedelta64[us]", ">timedelta64[us]",
"timedelta64[ns]", "=timedelta64[ns]", "<timedelta64[ns]", ">timedelta64[ns]",
"timedelta64[ps]", "=timedelta64[ps]", "<timedelta64[ps]", ">timedelta64[ps]",
"timedelta64[fs]", "=timedelta64[fs]", "<timedelta64[fs]", ">timedelta64[fs]",
"timedelta64[as]", "=timedelta64[as]", "<timedelta64[as]", ">timedelta64[as]",
"m", "=m", "<m", ">m",
"m8", "=m8", "<m8", ">m8",
"m8[Y]", "=m8[Y]", "<m8[Y]", ">m8[Y]",
"m8[M]", "=m8[M]", "<m8[M]", ">m8[M]",
"m8[W]", "=m8[W]", "<m8[W]", ">m8[W]",
"m8[D]", "=m8[D]", "<m8[D]", ">m8[D]",
"m8[h]", "=m8[h]", "<m8[h]", ">m8[h]",
"m8[m]", "=m8[m]", "<m8[m]", ">m8[m]",
"m8[s]", "=m8[s]", "<m8[s]", ">m8[s]",
"m8[ms]", "=m8[ms]", "<m8[ms]", ">m8[ms]",
"m8[us]", "=m8[us]", "<m8[us]", ">m8[us]",
"m8[ns]", "=m8[ns]", "<m8[ns]", ">m8[ns]",
"m8[ps]", "=m8[ps]", "<m8[ps]", ">m8[ps]",
"m8[fs]", "=m8[fs]", "<m8[fs]", ">m8[fs]",
"m8[as]", "=m8[as]", "<m8[as]", ">m8[as]",
]
else:
_BoolCodes = Any
_UInt8Codes = Any
_UInt16Codes = Any
_UInt32Codes = Any
_UInt64Codes = Any
_Int8Codes = Any
_Int16Codes = Any
_Int32Codes = Any
_Int64Codes = Any
_Float16Codes = Any
_Float32Codes = Any
_Float64Codes = Any
_Complex64Codes = Any
_Complex128Codes = Any
_ByteCodes = Any
_ShortCodes = Any
_IntCCodes = Any
_IntPCodes = Any
_IntCodes = Any
_LongLongCodes = Any
_UByteCodes = Any
_UShortCodes = Any
_UIntCCodes = Any
_UIntPCodes = Any
_UIntCodes = Any
_ULongLongCodes = Any
_HalfCodes = Any
_SingleCodes = Any
_DoubleCodes = Any
_LongDoubleCodes = Any
_CSingleCodes = Any
_CDoubleCodes = Any
_CLongDoubleCodes = Any
_StrCodes = Any
_BytesCodes = Any
_VoidCodes = Any
_ObjectCodes = Any
_DT64Codes = Any
_TD64Codes = Any


@@ -0,0 +1,251 @@
import sys
from typing import (
Any,
List,
Sequence,
Tuple,
Union,
Type,
TypeVar,
Generic,
TYPE_CHECKING,
)
import numpy as np
from ._shape import _ShapeLike
if sys.version_info >= (3, 8):
from typing import Protocol, TypedDict
HAVE_PROTOCOL = True
else:
try:
from typing_extensions import Protocol, TypedDict
except ImportError:
HAVE_PROTOCOL = False
else:
HAVE_PROTOCOL = True
from ._char_codes import (
_BoolCodes,
_UInt8Codes,
_UInt16Codes,
_UInt32Codes,
_UInt64Codes,
_Int8Codes,
_Int16Codes,
_Int32Codes,
_Int64Codes,
_Float16Codes,
_Float32Codes,
_Float64Codes,
_Complex64Codes,
_Complex128Codes,
_ByteCodes,
_ShortCodes,
_IntCCodes,
_IntPCodes,
_IntCodes,
_LongLongCodes,
_UByteCodes,
_UShortCodes,
_UIntCCodes,
_UIntPCodes,
_UIntCodes,
_ULongLongCodes,
_HalfCodes,
_SingleCodes,
_DoubleCodes,
_LongDoubleCodes,
_CSingleCodes,
_CDoubleCodes,
_CLongDoubleCodes,
_DT64Codes,
_TD64Codes,
_StrCodes,
_BytesCodes,
_VoidCodes,
_ObjectCodes,
)
_DType_co = TypeVar("_DType_co", covariant=True, bound=np.dtype)
_DTypeLikeNested = Any # TODO: wait for support for recursive types
if TYPE_CHECKING or HAVE_PROTOCOL:
# Mandatory keys
class _DTypeDictBase(TypedDict):
names: Sequence[str]
formats: Sequence[_DTypeLikeNested]
# Mandatory + optional keys
class _DTypeDict(_DTypeDictBase, total=False):
offsets: Sequence[int]
titles: Sequence[Any] # Only `str` elements are usable as indexing aliases, but all objects are legal
itemsize: int
aligned: bool
# A protocol for anything with the dtype attribute
class _SupportsDType(Protocol[_DType_co]):
@property
def dtype(self) -> _DType_co: ...
else:
_DTypeDict = Any
class _SupportsDType(Generic[_DType_co]):
pass
# Would create a dtype[np.void]
_VoidDTypeLike = Union[
# (flexible_dtype, itemsize)
Tuple[_DTypeLikeNested, int],
# (fixed_dtype, shape)
Tuple[_DTypeLikeNested, _ShapeLike],
# [(field_name, field_dtype, field_shape), ...]
#
# The type here is quite broad because NumPy accepts quite a wide
# range of inputs inside the list; see the tests for some
# examples.
List[Any],
# {'names': ..., 'formats': ..., 'offsets': ..., 'titles': ...,
# 'itemsize': ...}
_DTypeDict,
# (base_dtype, new_dtype)
Tuple[_DTypeLikeNested, _DTypeLikeNested],
]
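# For example (illustrative), each of the following matches `_VoidDTypeLike`:
#   (np.void, 10)                                 # (flexible_dtype, itemsize)
#   (np.int32, (2, 2))                            # (fixed_dtype, shape)
#   [("x", np.float64), ("y", np.float64, (3,))]  # list of fields
#   {"names": ["x", "y"], "formats": [np.float64, np.int64]}  # field dict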
# Anything that can be coerced into numpy.dtype.
# Reference: https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html
DTypeLike = Union[
np.dtype,
# default data type (float64)
None,
# array-scalar types and generic types
type, # TODO: enumerate these when we add type hints for numpy scalars
# anything with a dtype attribute
_SupportsDType[np.dtype],
# character codes, type strings or comma-separated fields, e.g., 'float64'
str,
_VoidDTypeLike,
]
# NOTE: while it is possible to provide the dtype as a dict of
# dtype-like objects (e.g. `{'field1': ..., 'field2': ..., ...}`),
# this syntax is officially discouraged and
# therefore not included in the Union defining `DTypeLike`.
#
# See https://github.com/numpy/numpy/issues/16891 for more details.
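# For example (illustrative), each of the following is a valid `DTypeLike`:
#   None            # the default data type (`np.float64`)
#   np.int32        # an array-scalar type
#   "float64"       # a character code / type string
#   np.dtype("i8")  # an already-constructed dtype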
# Aliases for commonly used dtype-like objects.
# Note that the precision of `np.number` subclasses is ignored herein.
_DTypeLikeBool = Union[
Type[bool],
Type[np.bool_],
"np.dtype[np.bool_]",
"_SupportsDType[np.dtype[np.bool_]]",
_BoolCodes,
]
_DTypeLikeUInt = Union[
Type[np.unsignedinteger],
"np.dtype[np.unsignedinteger]",
"_SupportsDType[np.dtype[np.unsignedinteger]]",
_UInt8Codes,
_UInt16Codes,
_UInt32Codes,
_UInt64Codes,
_UByteCodes,
_UShortCodes,
_UIntCCodes,
_UIntPCodes,
_UIntCodes,
_ULongLongCodes,
]
_DTypeLikeInt = Union[
Type[int],
Type[np.signedinteger],
"np.dtype[np.signedinteger]",
"_SupportsDType[np.dtype[np.signedinteger]]",
_Int8Codes,
_Int16Codes,
_Int32Codes,
_Int64Codes,
_ByteCodes,
_ShortCodes,
_IntCCodes,
_IntPCodes,
_IntCodes,
_LongLongCodes,
]
_DTypeLikeFloat = Union[
Type[float],
Type[np.floating],
"np.dtype[np.floating]",
"_SupportsDType[np.dtype[np.floating]]",
_Float16Codes,
_Float32Codes,
_Float64Codes,
_HalfCodes,
_SingleCodes,
_DoubleCodes,
_LongDoubleCodes,
]
_DTypeLikeComplex = Union[
Type[complex],
Type[np.complexfloating],
"np.dtype[np.complexfloating]",
"_SupportsDType[np.dtype[np.complexfloating]]",
_Complex64Codes,
_Complex128Codes,
_CSingleCodes,
_CDoubleCodes,
_CLongDoubleCodes,
]
_DTypeLikeTD64 = Union[
Type[np.timedelta64],
"np.dtype[np.timedelta64]",
"_SupportsDType[np.dtype[np.timedelta64]]",
_TD64Codes,
]
_DTypeLikeDT64 = Union[
Type[np.datetime64],
"np.dtype[np.datetime64]",
"_SupportsDType[np.dtype[np.datetime64]]",
_DT64Codes,
]
_DTypeLikeStr = Union[
Type[str],
Type[np.str_],
"np.dtype[np.str_]",
"_SupportsDType[np.dtype[np.str_]]",
_StrCodes,
]
_DTypeLikeBytes = Union[
Type[bytes],
Type[np.bytes_],
"np.dtype[np.bytes_]",
"_SupportsDType[np.dtype[np.bytes_]]",
_BytesCodes,
]
_DTypeLikeVoid = Union[
Type[np.void],
"np.dtype[np.void]",
"_SupportsDType[np.dtype[np.void]]",
_VoidCodes,
_VoidDTypeLike,
]
_DTypeLikeObject = Union[
type,
"np.dtype[np.object_]",
"_SupportsDType[np.dtype[np.object_]]",
_ObjectCodes,
]
_DTypeLikeComplex_co = Union[
_DTypeLikeBool,
_DTypeLikeUInt,
_DTypeLikeInt,
_DTypeLikeFloat,
_DTypeLikeComplex,
]


@@ -0,0 +1,42 @@
"""A module with platform-specific extended precision `numpy.number` subclasses.
The subclasses are defined here (instead of ``__init__.pyi``) such
that they can be imported conditionally via numpy's mypy plugin.
"""
from typing import TYPE_CHECKING, Any
import numpy as np
from . import (
_80Bit,
_96Bit,
_128Bit,
_256Bit,
)
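# NOTE (illustrative): which of these aliases correspond to actually available
# types is platform-dependent; a typical x86-64 Linux build, for example,
# provides `np.float128` and `np.complex256` but not `np.float96` or
# `np.float256`.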
if TYPE_CHECKING:
uint128 = np.unsignedinteger[_128Bit]
uint256 = np.unsignedinteger[_256Bit]
int128 = np.signedinteger[_128Bit]
int256 = np.signedinteger[_256Bit]
float80 = np.floating[_80Bit]
float96 = np.floating[_96Bit]
float128 = np.floating[_128Bit]
float256 = np.floating[_256Bit]
complex160 = np.complexfloating[_80Bit, _80Bit]
complex192 = np.complexfloating[_96Bit, _96Bit]
complex256 = np.complexfloating[_128Bit, _128Bit]
complex512 = np.complexfloating[_256Bit, _256Bit]
else:
uint128 = Any
uint256 = Any
int128 = Any
int256 = Any
float80 = Any
float96 = Any
float128 = Any
float256 = Any
complex160 = Any
complex192 = Any
complex256 = Any
complex512 = Any


@@ -0,0 +1,208 @@
from __future__ import annotations
import sys
import types
from typing import (
Any,
ClassVar,
FrozenSet,
Generator,
Iterable,
Iterator,
List,
NoReturn,
Tuple,
Type,
TypeVar,
TYPE_CHECKING,
)
import numpy as np
__all__ = ["_GenericAlias", "NDArray"]
_T = TypeVar("_T", bound="_GenericAlias")
def _to_str(obj: object) -> str:
"""Helper function for `_GenericAlias.__repr__`."""
if obj is Ellipsis:
return '...'
elif isinstance(obj, type) and not isinstance(obj, _GENERIC_ALIAS_TYPE):
if obj.__module__ == 'builtins':
return obj.__qualname__
else:
return f'{obj.__module__}.{obj.__qualname__}'
else:
return repr(obj)
def _parse_parameters(args: Iterable[Any]) -> Generator[TypeVar, None, None]:
"""Search for all typevars and typevar-containing objects in `args`.
Helper function for `_GenericAlias.__init__`.
"""
for i in args:
if hasattr(i, "__parameters__"):
yield from i.__parameters__
elif isinstance(i, TypeVar):
yield i
def _reconstruct_alias(alias: _T, parameters: Iterator[TypeVar]) -> _T:
"""Recursivelly replace all typevars with those from `parameters`.
Helper function for `_GenericAlias.__getitem__`.
"""
args = []
for i in alias.__args__:
if isinstance(i, TypeVar):
value: Any = next(parameters)
elif isinstance(i, _GenericAlias):
value = _reconstruct_alias(i, parameters)
elif hasattr(i, "__parameters__"):
prm_tup = tuple(next(parameters) for _ in i.__parameters__)
value = i[prm_tup]
else:
value = i
args.append(value)
cls = type(alias)
return cls(alias.__origin__, tuple(args))
class _GenericAlias:
"""A python-based backport of the `types.GenericAlias` class.
E.g. for ``t = list[int]``, ``t.__origin__`` is ``list`` and
``t.__args__`` is ``(int,)``.
See Also
--------
:pep:`585`
The PEP responsible for introducing `types.GenericAlias`.
"""
__slots__ = ("__weakref__", "_origin", "_args", "_parameters", "_hash")
@property
def __origin__(self) -> type:
return super().__getattribute__("_origin")
@property
def __args__(self) -> Tuple[Any, ...]:
return super().__getattribute__("_args")
@property
def __parameters__(self) -> Tuple[TypeVar, ...]:
"""Type variables in the ``GenericAlias``."""
return super().__getattribute__("_parameters")
def __init__(self, origin: type, args: Any) -> None:
self._origin = origin
self._args = args if isinstance(args, tuple) else (args,)
self._parameters = tuple(_parse_parameters(args))
@property
def __call__(self) -> type:
return self.__origin__
def __reduce__(self: _T) -> Tuple[Type[_T], Tuple[type, Tuple[Any, ...]]]:
cls = type(self)
return cls, (self.__origin__, self.__args__)
def __mro_entries__(self, bases: Iterable[object]) -> Tuple[type]:
return (self.__origin__,)
def __dir__(self) -> List[str]:
"""Implement ``dir(self)``."""
cls = type(self)
dir_origin = set(dir(self.__origin__))
return sorted(cls._ATTR_EXCEPTIONS | dir_origin)
def __hash__(self) -> int:
"""Return ``hash(self)``."""
# Attempt to use the cached hash
try:
return super().__getattribute__("_hash")
except AttributeError:
self._hash: int = hash(self.__origin__) ^ hash(self.__args__)
return super().__getattribute__("_hash")
def __instancecheck__(self, obj: object) -> NoReturn:
"""Check if an `obj` is an instance."""
raise TypeError("isinstance() argument 2 cannot be a "
"parameterized generic")
def __subclasscheck__(self, cls: type) -> NoReturn:
"""Check if a `cls` is a subclass."""
raise TypeError("issubclass() argument 2 cannot be a "
"parameterized generic")
def __repr__(self) -> str:
"""Return ``repr(self)``."""
args = ", ".join(_to_str(i) for i in self.__args__)
origin = _to_str(self.__origin__)
return f"{origin}[{args}]"
def __getitem__(self: _T, key: Any) -> _T:
"""Return ``self[key]``."""
key_tup = key if isinstance(key, tuple) else (key,)
if len(self.__parameters__) == 0:
raise TypeError(f"There are no type variables left in {self}")
elif len(key_tup) > len(self.__parameters__):
raise TypeError(f"Too many arguments for {self}")
elif len(key_tup) < len(self.__parameters__):
raise TypeError(f"Too few arguments for {self}")
key_iter = iter(key_tup)
return _reconstruct_alias(self, key_iter)
def __eq__(self, value: object) -> bool:
"""Return ``self == value``."""
if not isinstance(value, _GENERIC_ALIAS_TYPE):
return NotImplemented
return (
self.__origin__ == value.__origin__ and
self.__args__ == value.__args__
)
_ATTR_EXCEPTIONS: ClassVar[FrozenSet[str]] = frozenset({
"__origin__",
"__args__",
"__parameters__",
"__mro_entries__",
"__reduce__",
"__reduce_ex__",
})
def __getattribute__(self, name: str) -> Any:
"""Return ``getattr(self, name)``."""
# Pull the attribute from `__origin__` unless its
# name is in `_ATTR_EXCEPTIONS`
cls = type(self)
if name in cls._ATTR_EXCEPTIONS:
return super().__getattribute__(name)
return getattr(self.__origin__, name)
# See `_GenericAlias.__eq__`
if sys.version_info >= (3, 9):
_GENERIC_ALIAS_TYPE = (_GenericAlias, types.GenericAlias)
else:
_GENERIC_ALIAS_TYPE = (_GenericAlias,)
ScalarType = TypeVar("ScalarType", bound=np.generic, covariant=True)
if TYPE_CHECKING:
NDArray = np.ndarray[Any, np.dtype[ScalarType]]
elif sys.version_info >= (3, 9):
_DType = types.GenericAlias(np.dtype, (ScalarType,))
NDArray = types.GenericAlias(np.ndarray, (Any, _DType))
else:
_DType = _GenericAlias(np.dtype, (ScalarType,))
NDArray = _GenericAlias(np.ndarray, (Any, _DType))
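# NOTE (illustrative): regardless of which branch above is taken,
# `NDArray[np.float64]` evaluates at runtime to
# `numpy.ndarray[typing.Any, numpy.dtype[numpy.float64]]`.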


@@ -0,0 +1,16 @@
"""A module with the precisions of platform-specific `~numpy.number`s."""
from typing import Any
# To-be replaced with a `npt.NBitBase` subclass by numpy's mypy plugin
_NBitByte = Any
_NBitShort = Any
_NBitIntC = Any
_NBitIntP = Any
_NBitInt = Any
_NBitLongLong = Any
_NBitHalf = Any
_NBitSingle = Any
_NBitDouble = Any
_NBitLongDouble = Any


@@ -0,0 +1,30 @@
from typing import Union, Tuple, Any
import numpy as np
# NOTE: `_StrLike_co` and `_BytesLike_co` are pointless, as `np.str_` and
# `np.bytes_` are already subclasses of their builtin counterpart
_CharLike_co = Union[str, bytes]
# The 6 `<X>Like_co` type-aliases below represent all scalars that can be
# coerced into `<X>` (with the casting rule `same_kind`)
_BoolLike_co = Union[bool, np.bool_]
_UIntLike_co = Union[_BoolLike_co, np.unsignedinteger]
_IntLike_co = Union[_BoolLike_co, int, np.integer]
_FloatLike_co = Union[_IntLike_co, float, np.floating]
_ComplexLike_co = Union[_FloatLike_co, complex, np.complexfloating]
_TD64Like_co = Union[_IntLike_co, np.timedelta64]
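# For example (illustrative): `_FloatLike_co` accepts `True`, `1`, `1.0` and
# `np.float32(1.0)`, but not `1j`, which instead requires `_ComplexLike_co`.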
_NumberLike_co = Union[int, float, complex, np.number, np.bool_]
_ScalarLike_co = Union[
int,
float,
complex,
str,
bytes,
np.generic,
]
# `_VoidLike_co` is technically not a scalar, but it's close enough
_VoidLike_co = Union[Tuple[Any, ...], np.void]


@@ -0,0 +1,15 @@
import sys
from typing import Sequence, Tuple, Union, Any
if sys.version_info >= (3, 8):
from typing import SupportsIndex
else:
try:
from typing_extensions import SupportsIndex
except ImportError:
SupportsIndex = Any
_Shape = Tuple[int, ...]
# Anything that can be coerced to a shape tuple
_ShapeLike = Union[SupportsIndex, Sequence[SupportsIndex]]
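# For example (illustrative): `2`, `(2, 3)` and `[2, 3]` are all valid
# `_ShapeLike` objects, as `int` implements `__index__`.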


@@ -0,0 +1,405 @@
"""A module with private type-check-only `numpy.ufunc` subclasses.
The signatures of the ufuncs are too varied to reasonably type
with a single class. So instead, `ufunc` has been expanded into
four private subclasses, one for each combination of
`~ufunc.nin` and `~ufunc.nout`.
"""
from typing import (
Any,
Generic,
List,
Optional,
overload,
Tuple,
TypeVar,
Union,
)
from numpy import ufunc, _Casting, _OrderKACF
from numpy.typing import NDArray
from ._shape import _ShapeLike
from ._scalars import _ScalarLike_co
from ._array_like import ArrayLike, _ArrayLikeBool_co, _ArrayLikeInt_co
from ._dtype_like import DTypeLike
from typing_extensions import Literal, SupportsIndex
_T = TypeVar("_T")
_2Tuple = Tuple[_T, _T]
_3Tuple = Tuple[_T, _T, _T]
_4Tuple = Tuple[_T, _T, _T, _T]
_NTypes = TypeVar("_NTypes", bound=int)
_IDType = TypeVar("_IDType", bound=Any)
_NameType = TypeVar("_NameType", bound=str)
# NOTE: In reality `extobj` should be a list of length 3 containing an
# int, an int, and a callable, but there's no way to properly express
# non-homogeneous lists.
# Use `Any` over `Union` to avoid issues related to lists invariance.
# NOTE: `reduce`, `accumulate`, `reduceat` and `outer` raise a ValueError for
# ufuncs that don't accept two input arguments and return one output argument.
# In such cases the respective methods are simply typed as `None`.
# NOTE: Similarly, `at` won't be defined for ufuncs that return
# multiple outputs; in such cases `at` is typed as `None`
# NOTE: If 2 output types are returned then `out` must be a
# 2-tuple of arrays. Otherwise `None` or a plain array are also acceptable
class _UFunc_Nin1_Nout1(ufunc, Generic[_NameType, _NTypes, _IDType]):
@property
def __name__(self) -> _NameType: ...
@property
def ntypes(self) -> _NTypes: ...
@property
def identity(self) -> _IDType: ...
@property
def nin(self) -> Literal[1]: ...
@property
def nout(self) -> Literal[1]: ...
@property
def nargs(self) -> Literal[2]: ...
@property
def signature(self) -> None: ...
@property
def reduce(self) -> None: ...
@property
def accumulate(self) -> None: ...
@property
def reduceat(self) -> None: ...
@property
def outer(self) -> None: ...
@overload
def __call__(
self,
__x1: _ScalarLike_co,
out: None = ...,
*,
where: Optional[_ArrayLikeBool_co] = ...,
casting: _Casting = ...,
order: _OrderKACF = ...,
dtype: DTypeLike = ...,
subok: bool = ...,
signature: Union[str, _2Tuple[Optional[str]]] = ...,
extobj: List[Any] = ...,
) -> Any: ...
@overload
def __call__(
self,
__x1: ArrayLike,
out: Union[None, NDArray[Any], Tuple[NDArray[Any]]] = ...,
*,
where: Optional[_ArrayLikeBool_co] = ...,
casting: _Casting = ...,
order: _OrderKACF = ...,
dtype: DTypeLike = ...,
subok: bool = ...,
signature: Union[str, _2Tuple[Optional[str]]] = ...,
extobj: List[Any] = ...,
) -> NDArray[Any]: ...
def at(
self,
__a: NDArray[Any],
__indices: _ArrayLikeInt_co,
) -> None: ...
class _UFunc_Nin2_Nout1(ufunc, Generic[_NameType, _NTypes, _IDType]):
@property
def __name__(self) -> _NameType: ...
@property
def ntypes(self) -> _NTypes: ...
@property
def identity(self) -> _IDType: ...
@property
def nin(self) -> Literal[2]: ...
@property
def nout(self) -> Literal[1]: ...
@property
def nargs(self) -> Literal[3]: ...
@property
def signature(self) -> None: ...
@overload
def __call__(
self,
__x1: _ScalarLike_co,
__x2: _ScalarLike_co,
out: None = ...,
*,
where: Optional[_ArrayLikeBool_co] = ...,
casting: _Casting = ...,
order: _OrderKACF = ...,
dtype: DTypeLike = ...,
subok: bool = ...,
signature: Union[str, _3Tuple[Optional[str]]] = ...,
extobj: List[Any] = ...,
) -> Any: ...
@overload
def __call__(
self,
__x1: ArrayLike,
__x2: ArrayLike,
out: Union[None, NDArray[Any], Tuple[NDArray[Any]]] = ...,
*,
where: Optional[_ArrayLikeBool_co] = ...,
casting: _Casting = ...,
order: _OrderKACF = ...,
dtype: DTypeLike = ...,
subok: bool = ...,
signature: Union[str, _3Tuple[Optional[str]]] = ...,
extobj: List[Any] = ...,
) -> NDArray[Any]: ...
def at(
self,
__a: NDArray[Any],
__indices: _ArrayLikeInt_co,
__b: ArrayLike,
) -> None: ...
def reduce(
self,
array: ArrayLike,
axis: Optional[_ShapeLike] = ...,
dtype: DTypeLike = ...,
out: Optional[NDArray[Any]] = ...,
keepdims: bool = ...,
initial: Any = ...,
where: _ArrayLikeBool_co = ...,
) -> Any: ...
def accumulate(
self,
array: ArrayLike,
axis: SupportsIndex = ...,
dtype: DTypeLike = ...,
out: Optional[NDArray[Any]] = ...,
) -> NDArray[Any]: ...
def reduceat(
self,
array: ArrayLike,
indices: _ArrayLikeInt_co,
axis: SupportsIndex = ...,
dtype: DTypeLike = ...,
out: Optional[NDArray[Any]] = ...,
) -> NDArray[Any]: ...
# Expand `**kwargs` into explicit keyword-only arguments
@overload
def outer(
self,
__A: _ScalarLike_co,
__B: _ScalarLike_co,
*,
out: None = ...,
where: Optional[_ArrayLikeBool_co] = ...,
casting: _Casting = ...,
order: _OrderKACF = ...,
dtype: DTypeLike = ...,
subok: bool = ...,
signature: Union[str, _3Tuple[Optional[str]]] = ...,
extobj: List[Any] = ...,
) -> Any: ...
@overload
def outer( # type: ignore[misc]
self,
__A: ArrayLike,
__B: ArrayLike,
*,
out: Union[None, NDArray[Any], Tuple[NDArray[Any]]] = ...,
where: Optional[_ArrayLikeBool_co] = ...,
casting: _Casting = ...,
order: _OrderKACF = ...,
dtype: DTypeLike = ...,
subok: bool = ...,
signature: Union[str, _3Tuple[Optional[str]]] = ...,
extobj: List[Any] = ...,
) -> NDArray[Any]: ...
class _UFunc_Nin1_Nout2(ufunc, Generic[_NameType, _NTypes, _IDType]):
@property
def __name__(self) -> _NameType: ...
@property
def ntypes(self) -> _NTypes: ...
@property
def identity(self) -> _IDType: ...
@property
def nin(self) -> Literal[1]: ...
@property
def nout(self) -> Literal[2]: ...
@property
def nargs(self) -> Literal[3]: ...
@property
def signature(self) -> None: ...
@property
def at(self) -> None: ...
@property
def reduce(self) -> None: ...
@property
def accumulate(self) -> None: ...
@property
def reduceat(self) -> None: ...
@property
def outer(self) -> None: ...
@overload
def __call__(
self,
__x1: _ScalarLike_co,
__out1: None = ...,
__out2: None = ...,
*,
where: Optional[_ArrayLikeBool_co] = ...,
casting: _Casting = ...,
order: _OrderKACF = ...,
dtype: DTypeLike = ...,
subok: bool = ...,
signature: Union[str, _3Tuple[Optional[str]]] = ...,
extobj: List[Any] = ...,
) -> _2Tuple[Any]: ...
@overload
def __call__(
self,
__x1: ArrayLike,
__out1: Optional[NDArray[Any]] = ...,
__out2: Optional[NDArray[Any]] = ...,
*,
out: _2Tuple[NDArray[Any]] = ...,
where: Optional[_ArrayLikeBool_co] = ...,
casting: _Casting = ...,
order: _OrderKACF = ...,
dtype: DTypeLike = ...,
subok: bool = ...,
signature: Union[str, _3Tuple[Optional[str]]] = ...,
extobj: List[Any] = ...,
) -> _2Tuple[NDArray[Any]]: ...
class _UFunc_Nin2_Nout2(ufunc, Generic[_NameType, _NTypes, _IDType]):
@property
def __name__(self) -> _NameType: ...
@property
def ntypes(self) -> _NTypes: ...
@property
def identity(self) -> _IDType: ...
@property
def nin(self) -> Literal[2]: ...
@property
def nout(self) -> Literal[2]: ...
@property
def nargs(self) -> Literal[4]: ...
@property
def signature(self) -> None: ...
@property
def at(self) -> None: ...
@property
def reduce(self) -> None: ...
@property
def accumulate(self) -> None: ...
@property
def reduceat(self) -> None: ...
@property
def outer(self) -> None: ...
@overload
def __call__(
self,
__x1: _ScalarLike_co,
__x2: _ScalarLike_co,
__out1: None = ...,
__out2: None = ...,
*,
where: Optional[_ArrayLikeBool_co] = ...,
casting: _Casting = ...,
order: _OrderKACF = ...,
dtype: DTypeLike = ...,
subok: bool = ...,
signature: Union[str, _4Tuple[Optional[str]]] = ...,
extobj: List[Any] = ...,
) -> _2Tuple[Any]: ...
@overload
def __call__(
self,
__x1: ArrayLike,
__x2: ArrayLike,
__out1: Optional[NDArray[Any]] = ...,
__out2: Optional[NDArray[Any]] = ...,
*,
out: _2Tuple[NDArray[Any]] = ...,
where: Optional[_ArrayLikeBool_co] = ...,
casting: _Casting = ...,
order: _OrderKACF = ...,
dtype: DTypeLike = ...,
subok: bool = ...,
signature: Union[str, _4Tuple[Optional[str]]] = ...,
extobj: List[Any] = ...,
) -> _2Tuple[NDArray[Any]]: ...
class _GUFunc_Nin2_Nout1(ufunc, Generic[_NameType, _NTypes, _IDType]):
@property
def __name__(self) -> _NameType: ...
@property
def ntypes(self) -> _NTypes: ...
@property
def identity(self) -> _IDType: ...
@property
def nin(self) -> Literal[2]: ...
@property
def nout(self) -> Literal[1]: ...
@property
def nargs(self) -> Literal[3]: ...
# NOTE: In practice the only gufunc in the main namespace is `matmul`,
# so we can use its signature here
@property
def signature(self) -> Literal["(n?,k),(k,m?)->(n?,m?)"]: ...
@property
def reduce(self) -> None: ...
@property
def accumulate(self) -> None: ...
@property
def reduceat(self) -> None: ...
@property
def outer(self) -> None: ...
@property
def at(self) -> None: ...
# Scalar for 1D array-likes; ndarray otherwise
@overload
def __call__(
self,
__x1: ArrayLike,
__x2: ArrayLike,
out: None = ...,
*,
casting: _Casting = ...,
order: _OrderKACF = ...,
dtype: DTypeLike = ...,
subok: bool = ...,
signature: Union[str, _3Tuple[Optional[str]]] = ...,
extobj: List[Any] = ...,
axes: List[_2Tuple[SupportsIndex]] = ...,
) -> Any: ...
@overload
def __call__(
self,
__x1: ArrayLike,
__x2: ArrayLike,
out: Union[NDArray[Any], Tuple[NDArray[Any]]],
*,
casting: _Casting = ...,
order: _OrderKACF = ...,
dtype: DTypeLike = ...,
subok: bool = ...,
signature: Union[str, _3Tuple[Optional[str]]] = ...,
extobj: List[Any] = ...,
axes: List[_2Tuple[SupportsIndex]] = ...,
) -> NDArray[Any]: ...


@@ -0,0 +1,131 @@
"""A module containing `numpy`-specific plugins for mypy."""
from __future__ import annotations
import typing as t
import numpy as np
try:
import mypy.types
from mypy.types import Type
from mypy.plugin import Plugin, AnalyzeTypeContext
from mypy.nodes import MypyFile, ImportFrom, Statement
from mypy.build import PRI_MED
_HookFunc = t.Callable[[AnalyzeTypeContext], Type]
MYPY_EX: t.Optional[ModuleNotFoundError] = None
except ModuleNotFoundError as ex:
MYPY_EX = ex
__all__: t.List[str] = []
def _get_precision_dict() -> t.Dict[str, str]:
names = [
("_NBitByte", np.byte),
("_NBitShort", np.short),
("_NBitIntC", np.intc),
("_NBitIntP", np.intp),
("_NBitInt", np.int_),
("_NBitLongLong", np.longlong),
("_NBitHalf", np.half),
("_NBitSingle", np.single),
("_NBitDouble", np.double),
("_NBitLongDouble", np.longdouble),
]
ret = {}
for name, typ in names:
n: int = 8 * typ().dtype.itemsize
ret[f'numpy.typing._nbit.{name}'] = f"numpy._{n}Bit"
return ret
def _get_extended_precision_list() -> t.List[str]:
extended_types = [np.ulonglong, np.longlong, np.longdouble, np.clongdouble]
extended_names = {
"uint128",
"uint256",
"int128",
"int256",
"float80",
"float96",
"float128",
"float256",
"complex160",
"complex192",
"complex256",
"complex512",
}
return [i.__name__ for i in extended_types if i.__name__ in extended_names]
#: A dictionary mapping type-aliases in `numpy.typing._nbit` to
#: concrete `numpy.typing.NBitBase` subclasses.
_PRECISION_DICT: t.Final = _get_precision_dict()
#: A list with the names of all extended precision `np.number` subclasses.
_EXTENDED_PRECISION_LIST: t.Final = _get_extended_precision_list()
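# Illustrative values only (a hedged example; both constants are
# platform-dependent): on a typical 64-bit Linux build `_PRECISION_DICT`
# maps e.g. "numpy.typing._nbit._NBitIntP" to "numpy._64Bit", while
# `_EXTENDED_PRECISION_LIST` would contain ["float128", "complex256"].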
def _hook(ctx: AnalyzeTypeContext) -> Type:
"""Replace a type-alias with a concrete ``NBitBase`` subclass."""
typ, _, api = ctx
name = typ.name.split(".")[-1]
name_new = _PRECISION_DICT[f"numpy.typing._nbit.{name}"]
return api.named_type(name_new)
if t.TYPE_CHECKING or MYPY_EX is None:
def _index(iterable: t.Iterable[Statement], id: str) -> int:
"""Identify the first ``ImportFrom`` instance the specified `id`."""
for i, value in enumerate(iterable):
if getattr(value, "id", None) == id:
return i
else:
raise ValueError("Failed to identify a `ImportFrom` instance "
f"with the following id: {id!r}")
class _NumpyPlugin(Plugin):
"""A plugin for assigning platform-specific `numpy.number` precisions."""
def get_type_analyze_hook(self, fullname: str) -> t.Optional[_HookFunc]:
"""Set the precision of platform-specific `numpy.number` subclasses.
For example: `numpy.int_`, `numpy.longlong` and `numpy.longdouble`.
"""
if fullname in _PRECISION_DICT:
return _hook
return None
def get_additional_deps(self, file: MypyFile) -> t.List[t.Tuple[int, str, int]]:
"""Import platform-specific extended-precision `numpy.number` subclasses.
For example: `numpy.float96`, `numpy.float128` and `numpy.complex256`.
"""
ret = [(PRI_MED, file.fullname, -1)]
if file.fullname == "numpy":
# Import ONLY the extended precision types available to the
# platform in question
imports = ImportFrom(
"numpy.typing._extended_precision", 0,
names=[(v, v) for v in _EXTENDED_PRECISION_LIST],
)
imports.is_top_level = True
# Replace the much broader extended-precision import
# (defined in `numpy/__init__.pyi`) with a more specific one
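# For example (a hedged illustration): on a typical x86-64 Linux build
# the replacement import is equivalent to
# `from numpy.typing._extended_precision import float128, complex256`.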
for lst in [file.defs, file.imports]: # type: t.List[Statement]
i = _index(lst, "numpy.typing._extended_precision")
lst[i] = imports
return ret
def plugin(version: str) -> t.Type[_NumpyPlugin]:
"""An entry-point for mypy."""
return _NumpyPlugin
else:
def plugin(version: str) -> t.Type[_NumpyPlugin]:
"""An entry-point for mypy."""
raise MYPY_EX

View File

@@ -0,0 +1,12 @@
def configuration(parent_package='', top_path=None):
from numpy.distutils.misc_util import Configuration
config = Configuration('typing', parent_package, top_path)
config.add_subpackage('tests')
config.add_data_dir('tests/data')
config.add_data_files('*.pyi')
return config
if __name__ == '__main__':
from numpy.distutils.core import setup
setup(configuration=configuration)

View File

@@ -0,0 +1,120 @@
from typing import List, Any
import numpy as np
b_ = np.bool_()
dt = np.datetime64(0, "D")
td = np.timedelta64(0, "D")
AR_b: np.ndarray[Any, np.dtype[np.bool_]]
AR_u: np.ndarray[Any, np.dtype[np.uint32]]
AR_i: np.ndarray[Any, np.dtype[np.int64]]
AR_f: np.ndarray[Any, np.dtype[np.float64]]
AR_c: np.ndarray[Any, np.dtype[np.complex128]]
AR_m: np.ndarray[Any, np.dtype[np.timedelta64]]
AR_M: np.ndarray[Any, np.dtype[np.datetime64]]
ANY: Any
AR_LIKE_b: List[bool]
AR_LIKE_u: List[np.uint32]
AR_LIKE_i: List[int]
AR_LIKE_f: List[float]
AR_LIKE_c: List[complex]
AR_LIKE_m: List[np.timedelta64]
AR_LIKE_M: List[np.datetime64]
# Array subtraction
# NOTE: mypy's `NoReturn` errors are, unfortunately, not that great
_1 = AR_b - AR_LIKE_b # E: Need type annotation
_2 = AR_LIKE_b - AR_b # E: Need type annotation
AR_f - AR_LIKE_m # E: Unsupported operand types
AR_f - AR_LIKE_M # E: Unsupported operand types
AR_c - AR_LIKE_m # E: Unsupported operand types
AR_c - AR_LIKE_M # E: Unsupported operand types
AR_m - AR_LIKE_f # E: Unsupported operand types
AR_M - AR_LIKE_f # E: Unsupported operand types
AR_m - AR_LIKE_c # E: Unsupported operand types
AR_M - AR_LIKE_c # E: Unsupported operand types
AR_m - AR_LIKE_M # E: Unsupported operand types
AR_LIKE_m - AR_M # E: Unsupported operand types
# Array floor division
AR_M // AR_LIKE_b # E: Unsupported operand types
AR_M // AR_LIKE_u # E: Unsupported operand types
AR_M // AR_LIKE_i # E: Unsupported operand types
AR_M // AR_LIKE_f # E: Unsupported operand types
AR_M // AR_LIKE_c # E: Unsupported operand types
AR_M // AR_LIKE_m # E: Unsupported operand types
AR_M // AR_LIKE_M # E: Unsupported operand types
AR_b // AR_LIKE_M # E: Unsupported operand types
AR_u // AR_LIKE_M # E: Unsupported operand types
AR_i // AR_LIKE_M # E: Unsupported operand types
AR_f // AR_LIKE_M # E: Unsupported operand types
AR_c // AR_LIKE_M # E: Unsupported operand types
AR_m // AR_LIKE_M # E: Unsupported operand types
AR_M // AR_LIKE_M # E: Unsupported operand types
_3 = AR_m // AR_LIKE_b # E: Need type annotation
AR_m // AR_LIKE_c # E: Unsupported operand types
AR_b // AR_LIKE_m # E: Unsupported operand types
AR_u // AR_LIKE_m # E: Unsupported operand types
AR_i // AR_LIKE_m # E: Unsupported operand types
AR_f // AR_LIKE_m # E: Unsupported operand types
AR_c // AR_LIKE_m # E: Unsupported operand types
# Array in-place multiplication
AR_b *= AR_LIKE_u # E: incompatible type
AR_b *= AR_LIKE_i # E: incompatible type
AR_b *= AR_LIKE_f # E: incompatible type
AR_b *= AR_LIKE_c # E: incompatible type
AR_b *= AR_LIKE_m # E: incompatible type
AR_u *= AR_LIKE_i # E: incompatible type
AR_u *= AR_LIKE_f # E: incompatible type
AR_u *= AR_LIKE_c # E: incompatible type
AR_u *= AR_LIKE_m # E: incompatible type
AR_i *= AR_LIKE_f # E: incompatible type
AR_i *= AR_LIKE_c # E: incompatible type
AR_i *= AR_LIKE_m # E: incompatible type
AR_f *= AR_LIKE_c # E: incompatible type
AR_f *= AR_LIKE_m # E: incompatible type
# Array in-place power
AR_b **= AR_LIKE_b # E: incompatible type
AR_b **= AR_LIKE_u # E: incompatible type
AR_b **= AR_LIKE_i # E: incompatible type
AR_b **= AR_LIKE_f # E: incompatible type
AR_b **= AR_LIKE_c # E: incompatible type
AR_u **= AR_LIKE_i # E: incompatible type
AR_u **= AR_LIKE_f # E: incompatible type
AR_u **= AR_LIKE_c # E: incompatible type
AR_i **= AR_LIKE_f # E: incompatible type
AR_i **= AR_LIKE_c # E: incompatible type
AR_f **= AR_LIKE_c # E: incompatible type
# Scalars
b_ - b_ # E: No overload variant
dt + dt # E: Unsupported operand types
td - dt # E: Unsupported operand types
td % 1 # E: Unsupported operand types
td / dt # E: No overload
td % dt # E: Unsupported operand types
-b_ # E: Unsupported operand type
+b_ # E: Unsupported operand type

View File

@@ -0,0 +1,31 @@
import numpy as np
a: np.ndarray
generator = (i for i in range(10))
np.require(a, requirements=1) # E: No overload variant
np.require(a, requirements="TEST") # E: incompatible type
np.zeros("test") # E: incompatible type
np.zeros() # E: Missing positional argument
np.ones("test") # E: incompatible type
np.ones() # E: Missing positional argument
np.array(0, float, True) # E: Too many positional
np.linspace(None, 'bob') # E: No overload variant
np.linspace(0, 2, num=10.0) # E: No overload variant
np.linspace(0, 2, endpoint='True') # E: No overload variant
np.linspace(0, 2, retstep=b'False') # E: No overload variant
np.linspace(0, 2, dtype=0) # E: No overload variant
np.linspace(0, 2, axis=None) # E: No overload variant
np.logspace(None, 'bob') # E: Argument 1
np.logspace(0, 2, base=None) # E: Argument "base"
np.geomspace(None, 'bob') # E: Argument 1
np.stack(generator) # E: No overload variant
np.hstack({1, 2}) # E: incompatible type
np.vstack(1) # E: incompatible type

View File

@@ -0,0 +1,16 @@
import numpy as np
from numpy.typing import ArrayLike
class A:
pass
x1: ArrayLike = (i for i in range(10)) # E: Incompatible types in assignment
x2: ArrayLike = A() # E: Incompatible types in assignment
x3: ArrayLike = {1: "foo", 2: "bar"} # E: Incompatible types in assignment
scalar = np.int64(1)
scalar.__array__(dtype=np.float64) # E: No overload variant
array = np.array([1])
array.__array__(dtype=np.float64) # E: No overload variant

View File

@@ -0,0 +1,13 @@
from typing import Callable, Any
import numpy as np
AR: np.ndarray
func1: Callable[[Any], str]
func2: Callable[[np.integer[Any]], str]
np.array2string(AR, style=None) # E: Unexpected keyword argument
np.array2string(AR, legacy="1.14") # E: incompatible type
np.array2string(AR, sign="*") # E: incompatible type
np.array2string(AR, floatmode="default") # E: incompatible type
np.array2string(AR, formatter={"A": func1}) # E: incompatible type
np.array2string(AR, formatter={"float": func2}) # E: Incompatible types

View File

@@ -0,0 +1,14 @@
from typing import Any
import numpy as np
AR_i8: np.ndarray[Any, np.dtype[np.int64]]
ar_iter = np.lib.Arrayterator(AR_i8)
np.lib.Arrayterator(np.int64()) # E: incompatible type
ar_iter.shape = (10, 5) # E: is read-only
ar_iter[None] # E: Invalid index type
ar_iter[None, 1] # E: Invalid index type
ar_iter[np.intp()] # E: Invalid index type
ar_iter[np.intp(), ...] # E: Invalid index type
ar_iter[AR_i8] # E: Invalid index type
ar_iter[AR_i8, :] # E: Invalid index type

View File

@@ -0,0 +1,20 @@
import numpy as np
i8 = np.int64()
i4 = np.int32()
u8 = np.uint64()
b_ = np.bool_()
i = int()
f8 = np.float64()
b_ >> f8 # E: No overload variant
i8 << f8 # E: No overload variant
i | f8 # E: Unsupported operand types
i8 ^ f8 # E: No overload variant
u8 & f8 # E: No overload variant
~f8 # E: Unsupported operand type
# mypy's error message for `NoReturn` is unfortunately pretty bad
# TODO: Re-enable this once we add support for numerical precision for `number`s
# a = u8 | 0 # E: Need type annotation

View File

@@ -0,0 +1,28 @@
from typing import Any
import numpy as np
AR_i: np.ndarray[Any, np.dtype[np.int64]]
AR_f: np.ndarray[Any, np.dtype[np.float64]]
AR_c: np.ndarray[Any, np.dtype[np.complex128]]
AR_m: np.ndarray[Any, np.dtype[np.timedelta64]]
AR_M: np.ndarray[Any, np.dtype[np.datetime64]]
AR_f > AR_m # E: Unsupported operand types
AR_c > AR_m # E: Unsupported operand types
AR_m > AR_f # E: Unsupported operand types
AR_m > AR_c # E: Unsupported operand types
AR_i > AR_M # E: Unsupported operand types
AR_f > AR_M # E: Unsupported operand types
AR_m > AR_M # E: Unsupported operand types
AR_M > AR_i # E: Unsupported operand types
AR_M > AR_f # E: Unsupported operand types
AR_M > AR_m # E: Unsupported operand types
# Unfortunately `NoReturn` errors are not the most descriptive
_1 = AR_i > str() # E: Need type annotation
_2 = AR_i > bytes() # E: Need type annotation
_3 = str() > AR_M # E: Need type annotation
_4 = bytes() > AR_M # E: Need type annotation

View File

@@ -0,0 +1,6 @@
import numpy as np
np.Inf = np.Inf # E: Cannot assign to final
np.ALLOW_THREADS = np.ALLOW_THREADS # E: Cannot assign to final
np.little_endian = np.little_endian # E: Cannot assign to final
np.UFUNC_PYVALS_NAME = np.UFUNC_PYVALS_NAME # E: Cannot assign to final

View File

@@ -0,0 +1,15 @@
from pathlib import Path
import numpy as np
path: Path
d1: np.DataSource
d1.abspath(path) # E: incompatible type
d1.abspath(b"...") # E: incompatible type
d1.exists(path) # E: incompatible type
d1.exists(b"...") # E: incompatible type
d1.open(path, "r") # E: incompatible type
d1.open(b"...", encoding="utf8") # E: incompatible type
d1.open(None, newline="/n") # E: incompatible type

View File

@@ -0,0 +1,20 @@
import numpy as np
class Test1:
not_dtype = np.dtype(float)
class Test2:
dtype = float
np.dtype(Test1()) # E: No overload variant of "dtype" matches
np.dtype(Test2()) # E: incompatible type
np.dtype( # E: No overload variant of "dtype" matches
{
"field1": (float, 1),
"field2": (int, 3),
}
)

View File

@@ -0,0 +1,15 @@
from typing import List, Any
import numpy as np
AR_i: np.ndarray[Any, np.dtype[np.int64]]
AR_f: np.ndarray[Any, np.dtype[np.float64]]
AR_m: np.ndarray[Any, np.dtype[np.timedelta64]]
AR_O: np.ndarray[Any, np.dtype[np.object_]]
AR_U: np.ndarray[Any, np.dtype[np.str_]]
np.einsum("i,i->i", AR_i, AR_m) # E: incompatible type
np.einsum("i,i->i", AR_O, AR_O) # E: incompatible type
np.einsum("i,i->i", AR_f, AR_f, dtype=np.int32) # E: incompatible type
np.einsum("i,i->i", AR_i, AR_i, dtype=np.timedelta64, casting="unsafe") # E: No overload variant
np.einsum("i,i->i", AR_i, AR_i, out=AR_U) # E: Value of type variable "_ArrayType" of "einsum" cannot be
np.einsum("i,i->i", AR_i, AR_i, out=AR_U, casting="unsafe") # E: No overload variant

View File

@@ -0,0 +1,25 @@
from typing import Any
import numpy as np
from numpy.typing import _SupportsArray
class Index:
def __index__(self) -> int:
...
a: "np.flatiter[np.ndarray]"
supports_array: _SupportsArray
a.base = Any # E: Property "base" defined in "flatiter" is read-only
a.coords = Any # E: Property "coords" defined in "flatiter" is read-only
a.index = Any # E: Property "index" defined in "flatiter" is read-only
a.copy(order='C') # E: Unexpected keyword argument
# NOTE: Unlike `ndarray.__getitem__`, its counterpart in `flatiter`
# does not accept objects with the `__array__` or `__index__` protocols;
# boolean indexing is just plain broken (gh-17175)
a[np.bool_()] # E: No overload variant of "__getitem__"
a[Index()] # E: No overload variant of "__getitem__"
a[supports_array] # E: No overload variant of "__getitem__"
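For contrast with the rejected index types above, a minimal hedged sketch (an editorial illustration, not part of the test file) of forms the `flatiter` stubs do accept, namely integers and slices:

import numpy as np

a = np.arange(6).reshape(2, 3).flat
first = a[0]      # integer index: accepted, yields a scalar
as_array = a[:]   # slice: accepted, yields an ndarray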

View File

@@ -0,0 +1,154 @@
"""Tests for :mod:`numpy.core.fromnumeric`."""
import numpy as np
A = np.array(True, ndmin=2, dtype=bool)
A.setflags(write=False)
a = np.bool_(True)
np.take(a, None) # E: incompatible type
np.take(a, axis=1.0) # E: incompatible type
np.take(A, out=1) # E: incompatible type
np.take(A, mode="bob") # E: incompatible type
np.reshape(a, None) # E: Argument 2 to "reshape" has incompatible type
np.reshape(A, 1, order="bob") # E: Argument "order" to "reshape" has incompatible type
np.choose(a, None) # E: incompatible type
np.choose(a, out=1.0) # E: incompatible type
np.choose(A, mode="bob") # E: incompatible type
np.repeat(a, None) # E: Argument 2 to "repeat" has incompatible type
np.repeat(A, 1, axis=1.0) # E: Argument "axis" to "repeat" has incompatible type
np.swapaxes(A, None, 1) # E: Argument 2 to "swapaxes" has incompatible type
np.swapaxes(A, 1, [0]) # E: Argument 3 to "swapaxes" has incompatible type
np.transpose(A, axes=1.0) # E: Argument "axes" to "transpose" has incompatible type
np.partition(a, None) # E: Argument 2 to "partition" has incompatible type
np.partition(
a, 0, axis="bob" # E: Argument "axis" to "partition" has incompatible type
)
np.partition(
A, 0, kind="bob" # E: Argument "kind" to "partition" has incompatible type
)
np.partition(
A, 0, order=range(5) # E: Argument "order" to "partition" has incompatible type
)
np.argpartition(
a, None # E: incompatible type
)
np.argpartition(
a, 0, axis="bob" # E: incompatible type
)
np.argpartition(
A, 0, kind="bob" # E: incompatible type
)
np.argpartition(
A, 0, order=range(5) # E: Argument "order" to "argpartition" has incompatible type
)
np.sort(A, axis="bob") # E: Argument "axis" to "sort" has incompatible type
np.sort(A, kind="bob") # E: Argument "kind" to "sort" has incompatible type
np.sort(A, order=range(5)) # E: Argument "order" to "sort" has incompatible type
np.argsort(A, axis="bob") # E: Argument "axis" to "argsort" has incompatible type
np.argsort(A, kind="bob") # E: Argument "kind" to "argsort" has incompatible type
np.argsort(A, order=range(5)) # E: Argument "order" to "argsort" has incompatible type
np.argmax(A, axis="bob") # E: No overload variant of "argmax" matches argument type
np.argmax(A, kind="bob") # E: No overload variant of "argmax" matches argument type
np.argmin(A, axis="bob") # E: No overload variant of "argmin" matches argument type
np.argmin(A, kind="bob") # E: No overload variant of "argmin" matches argument type
np.searchsorted( # E: No overload variant of "searchsorted" matches argument type
A[0], 0, side="bob"
)
np.searchsorted( # E: No overload variant of "searchsorted" matches argument type
A[0], 0, sorter=1.0
)
np.resize(A, 1.0) # E: Argument 2 to "resize" has incompatible type
np.squeeze(A, 1.0) # E: No overload variant of "squeeze" matches argument type
np.diagonal(A, offset=None) # E: Argument "offset" to "diagonal" has incompatible type
np.diagonal(A, axis1="bob") # E: Argument "axis1" to "diagonal" has incompatible type
np.diagonal(A, axis2=[]) # E: Argument "axis2" to "diagonal" has incompatible type
np.trace(A, offset=None) # E: Argument "offset" to "trace" has incompatible type
np.trace(A, axis1="bob") # E: Argument "axis1" to "trace" has incompatible type
np.trace(A, axis2=[]) # E: Argument "axis2" to "trace" has incompatible type
np.ravel(a, order="bob") # E: Argument "order" to "ravel" has incompatible type
np.compress(
[True], A, axis=1.0 # E: Argument "axis" to "compress" has incompatible type
)
np.clip(a, 1, 2, out=1) # E: No overload variant of "clip" matches argument type
np.clip(1, None, None) # E: No overload variant of "clip" matches argument type
np.sum(a, axis=1.0) # E: incompatible type
np.sum(a, keepdims=1.0) # E: incompatible type
np.sum(a, initial=[1]) # E: incompatible type
np.all(a, axis=1.0) # E: No overload variant
np.all(a, keepdims=1.0) # E: No overload variant
np.all(a, out=1.0) # E: No overload variant
np.any(a, axis=1.0) # E: No overload variant
np.any(a, keepdims=1.0) # E: No overload variant
np.any(a, out=1.0) # E: No overload variant
np.cumsum(a, axis=1.0) # E: incompatible type
np.cumsum(a, dtype=1.0) # E: incompatible type
np.cumsum(a, out=1.0) # E: incompatible type
np.ptp(a, axis=1.0) # E: incompatible type
np.ptp(a, keepdims=1.0) # E: incompatible type
np.ptp(a, out=1.0) # E: incompatible type
np.amax(a, axis=1.0) # E: incompatible type
np.amax(a, keepdims=1.0) # E: incompatible type
np.amax(a, out=1.0) # E: incompatible type
np.amax(a, initial=[1.0]) # E: incompatible type
np.amax(a, where=[1.0]) # E: incompatible type
np.amin(a, axis=1.0) # E: incompatible type
np.amin(a, keepdims=1.0) # E: incompatible type
np.amin(a, out=1.0) # E: incompatible type
np.amin(a, initial=[1.0]) # E: incompatible type
np.amin(a, where=[1.0]) # E: incompatible type
np.prod(a, axis=1.0) # E: incompatible type
np.prod(a, out=False) # E: incompatible type
np.prod(a, keepdims=1.0) # E: incompatible type
np.prod(a, initial=int) # E: incompatible type
np.prod(a, where=1.0) # E: incompatible type
np.cumprod(a, axis=1.0) # E: Argument "axis" to "cumprod" has incompatible type
np.cumprod(a, out=False) # E: Argument "out" to "cumprod" has incompatible type
np.size(a, axis=1.0) # E: Argument "axis" to "size" has incompatible type
np.around(a, decimals=1.0) # E: incompatible type
np.around(a, out=type) # E: incompatible type
np.mean(a, axis=1.0) # E: incompatible type
np.mean(a, out=False) # E: incompatible type
np.mean(a, keepdims=1.0) # E: incompatible type
np.std(a, axis=1.0) # E: incompatible type
np.std(a, out=False) # E: incompatible type
np.std(a, ddof='test') # E: incompatible type
np.std(a, keepdims=1.0) # E: incompatible type
np.var(a, axis=1.0) # E: incompatible type
np.var(a, out=False) # E: incompatible type
np.var(a, ddof='test') # E: incompatible type
np.var(a, keepdims=1.0) # E: incompatible type

View File

@@ -0,0 +1,14 @@
from typing import List
import numpy as np
AR_LIKE_i: List[int]
AR_LIKE_f: List[float]
np.unravel_index(AR_LIKE_f, (1, 2, 3)) # E: incompatible type
np.ravel_multi_index(AR_LIKE_i, (1, 2, 3), mode="bob") # E: No overload variant
np.mgrid[1] # E: Invalid index type
np.mgrid[...] # E: Invalid index type
np.ogrid[1] # E: Invalid index type
np.ogrid[...] # E: Invalid index type
np.fill_diagonal(AR_LIKE_f, 2) # E: incompatible type
np.diag_indices(1.0) # E: incompatible type

View File

@@ -0,0 +1,13 @@
import numpy as np
np.deprecate(1) # E: No overload variant
np.deprecate_with_doc(1) # E: incompatible type
np.byte_bounds(1) # E: incompatible type
np.who(1) # E: incompatible type
np.lookfor(None) # E: incompatible type
np.safe_eval(None) # E: incompatible type

View File

@@ -0,0 +1,6 @@
from numpy.lib import NumpyVersion
version: NumpyVersion
NumpyVersion(b"1.8.0") # E: incompatible type
version >= b"1.8.0" # E: Unsupported operand types

View File

@@ -0,0 +1,19 @@
import numpy as np
np.testing.bob # E: Module has no attribute
np.bob # E: Module has no attribute
# Stdlib modules in the namespace by accident
np.warnings # E: Module has no attribute
np.sys # E: Module has no attribute
np.os # E: Module has no attribute
np.math # E: Module has no attribute
# Public sub-modules that are not imported into their parent module by default;
# e.g. one must first execute `import numpy.lib.recfunctions`
np.lib.recfunctions # E: Module has no attribute
np.ma.mrecords # E: Module has no attribute
np.__NUMPY_SETUP__ # E: Module has no attribute
np.__deprecated_attrs__ # E: Module has no attribute
np.__expired_functions__ # E: Module has no attribute

View File

@@ -0,0 +1,11 @@
import numpy as np
# Ban setting dtype since mutating the type of the array in place
# makes it impossible for ndarray to be generic over dtype. Generally,
# users should use `ndarray.view` in this situation anyway. See
#
# https://github.com/numpy/numpy-stubs/issues/7
#
# for more context.
float_array = np.array([1.0])
float_array.dtype = np.bool_ # E: Property "dtype" defined in "ndarray" is read-only
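As the comment above suggests, the statically accepted alternative is to create a new view rather than mutate `dtype` in place; a minimal hedged sketch:

import numpy as np

float_array = np.array([1.0])
# Reinterpret the underlying buffer instead of assigning to `.dtype`;
# this should pass type checking since no in-place dtype mutation occurs.
bool_view = float_array.view(np.bool_)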

View File

@@ -0,0 +1,37 @@
"""
Tests for miscellaneous (non-magic) ``np.ndarray``/``np.generic`` methods.
More extensive tests are performed for the methods'
function-based counterparts in `../from_numeric.py`.
"""
from typing import Any
import numpy as np
f8: np.float64
AR_f8: np.ndarray[Any, np.dtype[np.float64]]
AR_M: np.ndarray[Any, np.dtype[np.datetime64]]
AR_b: np.ndarray[Any, np.dtype[np.bool_]]
ctypes_obj = AR_f8.ctypes
reveal_type(ctypes_obj.get_data()) # E: has no attribute
reveal_type(ctypes_obj.get_shape()) # E: has no attribute
reveal_type(ctypes_obj.get_strides()) # E: has no attribute
reveal_type(ctypes_obj.get_as_parameter()) # E: has no attribute
f8.argpartition(0) # E: has no attribute
f8.diagonal() # E: has no attribute
f8.dot(1) # E: has no attribute
f8.nonzero() # E: has no attribute
f8.partition(0) # E: has no attribute
f8.put(0, 2) # E: has no attribute
f8.setfield(2, np.float64) # E: has no attribute
f8.sort() # E: has no attribute
f8.trace() # E: has no attribute
AR_M.__int__() # E: Invalid self argument
AR_M.__float__() # E: Invalid self argument
AR_M.__complex__() # E: Invalid self argument
AR_b.__index__() # E: Invalid self argument

View File

@@ -0,0 +1,13 @@
import numpy as np
# Technically this works, but probably shouldn't. See
#
# https://github.com/numpy/numpy/issues/16366
#
np.maximum_sctype(1) # E: incompatible type "int"
np.issubsctype(1, np.int64) # E: incompatible type "int"
np.issubdtype(1, np.int64) # E: incompatible type "int"
np.find_common_type(np.int64, np.int64) # E: incompatible type "Type[signedinteger[Any]]"

View File

@@ -0,0 +1,61 @@
import numpy as np
from typing import Any, List
SEED_FLOAT: float = 457.3
SEED_ARR_FLOAT: np.ndarray[Any, np.dtype[np.float64]] = np.array([1.0, 2, 3, 4])
SEED_ARRLIKE_FLOAT: List[float] = [1.0, 2.0, 3.0, 4.0]
SEED_SEED_SEQ: np.random.SeedSequence = np.random.SeedSequence(0)
SEED_STR: str = "String seeding not allowed"
# Default RNG
np.random.default_rng(SEED_FLOAT) # E: incompatible type
np.random.default_rng(SEED_ARR_FLOAT) # E: incompatible type
np.random.default_rng(SEED_ARRLIKE_FLOAT) # E: incompatible type
np.random.default_rng(SEED_STR) # E: incompatible type
# Seed Sequence
np.random.SeedSequence(SEED_FLOAT) # E: incompatible type
np.random.SeedSequence(SEED_ARR_FLOAT) # E: incompatible type
np.random.SeedSequence(SEED_ARRLIKE_FLOAT) # E: incompatible type
np.random.SeedSequence(SEED_SEED_SEQ) # E: incompatible type
np.random.SeedSequence(SEED_STR) # E: incompatible type
seed_seq: np.random.bit_generator.SeedSequence = np.random.SeedSequence()
seed_seq.spawn(11.5) # E: incompatible type
seed_seq.generate_state(3.14) # E: incompatible type
seed_seq.generate_state(3, np.uint8) # E: incompatible type
seed_seq.generate_state(3, "uint8") # E: incompatible type
seed_seq.generate_state(3, "u1") # E: incompatible type
seed_seq.generate_state(3, np.uint16) # E: incompatible type
seed_seq.generate_state(3, "uint16") # E: incompatible type
seed_seq.generate_state(3, "u2") # E: incompatible type
seed_seq.generate_state(3, np.int32) # E: incompatible type
seed_seq.generate_state(3, "int32") # E: incompatible type
seed_seq.generate_state(3, "i4") # E: incompatible type
# Bit Generators
np.random.MT19937(SEED_FLOAT) # E: incompatible type
np.random.MT19937(SEED_ARR_FLOAT) # E: incompatible type
np.random.MT19937(SEED_ARRLIKE_FLOAT) # E: incompatible type
np.random.MT19937(SEED_STR) # E: incompatible type
np.random.PCG64(SEED_FLOAT) # E: incompatible type
np.random.PCG64(SEED_ARR_FLOAT) # E: incompatible type
np.random.PCG64(SEED_ARRLIKE_FLOAT) # E: incompatible type
np.random.PCG64(SEED_STR) # E: incompatible type
np.random.Philox(SEED_FLOAT) # E: incompatible type
np.random.Philox(SEED_ARR_FLOAT) # E: incompatible type
np.random.Philox(SEED_ARRLIKE_FLOAT) # E: incompatible type
np.random.Philox(SEED_STR) # E: incompatible type
np.random.SFC64(SEED_FLOAT) # E: incompatible type
np.random.SFC64(SEED_ARR_FLOAT) # E: incompatible type
np.random.SFC64(SEED_ARRLIKE_FLOAT) # E: incompatible type
np.random.SFC64(SEED_STR) # E: incompatible type
# Generator
np.random.Generator(None) # E: incompatible type
np.random.Generator(12333283902830213) # E: incompatible type
np.random.Generator("OxFEEDF00D") # E: incompatible type
np.random.Generator([123, 234]) # E: incompatible type
np.random.Generator(np.array([123, 234], dtype="u4")) # E: incompatible type

View File

@@ -0,0 +1,94 @@
import sys
import numpy as np
f2: np.float16
f8: np.float64
c8: np.complex64
# Construction
np.float32(3j) # E: incompatible type
# Technically the following examples are valid NumPy code. But they
# are not considered a best practice, and people who wish to use the
# stubs should instead do
#
# np.array([1.0, 0.0, 0.0], dtype=np.float32)
# np.array([], dtype=np.complex64)
#
# See e.g. the discussion on the mailing list
#
# https://mail.python.org/pipermail/numpy-discussion/2020-April/080566.html
#
# and the issue
#
# https://github.com/numpy/numpy-stubs/issues/41
#
# for more context.
np.float32([1.0, 0.0, 0.0]) # E: incompatible type
np.complex64([]) # E: incompatible type
np.complex64(1, 2) # E: Too many arguments
# TODO: protocols (can't check for non-existent protocols w/ __getattr__)
np.datetime64(0) # E: non-matching overload
class A:
def __float__(self):
return 1.0
np.int8(A()) # E: incompatible type
np.int16(A()) # E: incompatible type
np.int32(A()) # E: incompatible type
np.int64(A()) # E: incompatible type
np.uint8(A()) # E: incompatible type
np.uint16(A()) # E: incompatible type
np.uint32(A()) # E: incompatible type
np.uint64(A()) # E: incompatible type
np.void("test") # E: incompatible type
np.generic(1) # E: Cannot instantiate abstract class
np.number(1) # E: Cannot instantiate abstract class
np.integer(1) # E: Cannot instantiate abstract class
np.inexact(1) # E: Cannot instantiate abstract class
np.character("test") # E: Cannot instantiate abstract class
np.flexible(b"test") # E: Cannot instantiate abstract class
np.float64(value=0.0) # E: Unexpected keyword argument
np.int64(value=0) # E: Unexpected keyword argument
np.uint64(value=0) # E: Unexpected keyword argument
np.complex128(value=0.0j) # E: Unexpected keyword argument
np.str_(value='bob') # E: No overload variant
np.bytes_(value=b'test') # E: No overload variant
np.void(value=b'test') # E: Unexpected keyword argument
np.bool_(value=True) # E: Unexpected keyword argument
np.datetime64(value="2019") # E: No overload variant
np.timedelta64(value=0) # E: Unexpected keyword argument
np.bytes_(b"hello", encoding='utf-8') # E: No overload variant
np.str_("hello", encoding='utf-8') # E: No overload variant
complex(np.bytes_("1")) # E: No overload variant
f8.item(1) # E: incompatible type
f8.item((0, 1)) # E: incompatible type
f8.squeeze(axis=1) # E: incompatible type
f8.squeeze(axis=(0, 1)) # E: incompatible type
f8.transpose(1) # E: incompatible type
def func(a: np.float32) -> None: ...
func(f2) # E: incompatible type
func(f8) # E: incompatible type
round(c8) # E: No overload variant
c8.__getnewargs__() # E: Invalid self argument
f2.__getnewargs__() # E: Invalid self argument
f2.is_integer() # E: Invalid self argument
f2.hex() # E: Invalid self argument
np.float16.fromhex("0x0.0p+0") # E: Invalid self argument
f2.__trunc__() # E: Invalid self argument
f2.__getformat__("float") # E: Invalid self argument

View File

@@ -0,0 +1,21 @@
"""Typing tests for `numpy.core._ufunc_config`."""
import numpy as np
def func1(a: str, b: int, c: float) -> None: ...
def func2(a: str, *, b: int) -> None: ...
class Write1:
def write1(self, a: str) -> None: ...
class Write2:
def write(self, a: str, b: str) -> None: ...
class Write3:
def write(self, *, a: str) -> None: ...
np.seterrcall(func1) # E: Argument 1 to "seterrcall" has incompatible type
np.seterrcall(func2) # E: Argument 1 to "seterrcall" has incompatible type
np.seterrcall(Write1()) # E: Argument 1 to "seterrcall" has incompatible type
np.seterrcall(Write2()) # E: Argument 1 to "seterrcall" has incompatible type
np.seterrcall(Write3()) # E: Argument 1 to "seterrcall" has incompatible type

View File

@@ -0,0 +1,21 @@
from typing import List, Any
import numpy as np
AR_c: np.ndarray[Any, np.dtype[np.complex128]]
AR_m: np.ndarray[Any, np.dtype[np.timedelta64]]
AR_M: np.ndarray[Any, np.dtype[np.datetime64]]
AR_O: np.ndarray[Any, np.dtype[np.object_]]
np.fix(AR_c) # E: incompatible type
np.fix(AR_m) # E: incompatible type
np.fix(AR_M) # E: incompatible type
np.isposinf(AR_c) # E: incompatible type
np.isposinf(AR_m) # E: incompatible type
np.isposinf(AR_M) # E: incompatible type
np.isposinf(AR_O) # E: incompatible type
np.isneginf(AR_c) # E: incompatible type
np.isneginf(AR_m) # E: incompatible type
np.isneginf(AR_M) # E: incompatible type
np.isneginf(AR_O) # E: incompatible type

View File

@@ -0,0 +1,41 @@
import numpy as np
import numpy.typing as npt
AR_f8: npt.NDArray[np.float64]
np.sin.nin + "foo" # E: Unsupported operand types
np.sin(1, foo="bar") # E: No overload variant
np.abs(None) # E: No overload variant
np.add(1, 1, 1) # E: No overload variant
np.add(1, 1, axis=0) # E: No overload variant
np.matmul(AR_f8, AR_f8, where=True) # E: No overload variant
np.frexp(AR_f8, out=None) # E: No overload variant
np.frexp(AR_f8, out=AR_f8) # E: No overload variant
np.absolute.outer() # E: "None" not callable
np.frexp.outer() # E: "None" not callable
np.divmod.outer() # E: "None" not callable
np.matmul.outer() # E: "None" not callable
np.absolute.reduceat() # E: "None" not callable
np.frexp.reduceat() # E: "None" not callable
np.divmod.reduceat() # E: "None" not callable
np.matmul.reduceat() # E: "None" not callable
np.absolute.reduce() # E: "None" not callable
np.frexp.reduce() # E: "None" not callable
np.divmod.reduce() # E: "None" not callable
np.matmul.reduce() # E: "None" not callable
np.absolute.accumulate() # E: "None" not callable
np.frexp.accumulate() # E: "None" not callable
np.divmod.accumulate() # E: "None" not callable
np.matmul.accumulate() # E: "None" not callable
np.frexp.at() # E: "None" not callable
np.divmod.at() # E: "None" not callable
np.matmul.at() # E: "None" not callable

View File

@@ -0,0 +1,7 @@
import numpy as np
np.AxisError(1.0) # E: Argument 1 to "AxisError" has incompatible type
np.AxisError(1, ndim=2.0) # E: Argument "ndim" to "AxisError" has incompatible type
np.AxisError(
2, msg_prefix=404 # E: Argument "msg_prefix" to "AxisError" has incompatible type
)

View File

@@ -0,0 +1,17 @@
import numpy as np
reveal_type(np.uint128())
reveal_type(np.uint256())
reveal_type(np.int128())
reveal_type(np.int256())
reveal_type(np.float80())
reveal_type(np.float96())
reveal_type(np.float128())
reveal_type(np.float256())
reveal_type(np.complex160())
reveal_type(np.complex192())
reveal_type(np.complex256())
reveal_type(np.complex512())

View File

@@ -0,0 +1,9 @@
[mypy]
plugins = numpy.typing.mypy_plugin
show_absolute_path = True
[mypy-numpy]
ignore_errors = True
[mypy-numpy.*]
ignore_errors = True

Some files were not shown because too many files have changed in this diff