linopy - pypi Package: comparing version 0.5.7 to 0.5.8
doc/solve-on-oetc.nblink

Sorry, the diff of this file is not supported yet

+1
-0

@@ -27,2 +27,3 @@ ci:

- id: blackdoc
additional_dependencies: ['black==24.8.0']
- repo: https://github.com/codespell-project/codespell

@@ -29,0 +30,0 @@ rev: v2.4.1

+21
-10

@@ -124,4 +124,15 @@ .. currentmodule:: linopy

Solver utilities
=================
.. autosummary::
:toctree: generated/
solvers.available_solvers
solvers.quadratic_solvers
solvers.Solver
Solvers
========
=======

@@ -131,12 +142,12 @@ .. autosummary::

solvers.run_cbc
solvers.run_glpk
solvers.run_highs
solvers.run_cplex
solvers.run_gurobi
solvers.run_xpress
solvers.run_mosek
solvers.run_mindopt
solvers.run_copt
solvers.CBC
solvers.Cplex
solvers.GLPK
solvers.Gurobi
solvers.Highs
solvers.Mosek
solvers.SCIP
solvers.Xpress
Solving

@@ -143,0 +154,0 @@ ========

@@ -93,4 +93,12 @@ # Configuration file for the Sphinx documentation builder.

nbsphinx_allow_errors = False
nbsphinx_allow_errors = True
nbsphinx_execute = "auto"
nbsphinx_execute_arguments = [
"--InlineBackend.figure_formats={'svg', 'pdf'}",
"--InlineBackend.rc={'figure.dpi': 96}",
]
# Exclude notebooks that require credentials or special setup
nbsphinx_execute_never = ["**/solve-on-oetc*"]
# -- Options for HTML output -------------------------------------------------

@@ -97,0 +105,0 @@

@@ -117,2 +117,3 @@ .. linopy documentation master file, created by

solve-on-remote
solve-on-oetc
migrating-from-pyomo

@@ -119,0 +120,0 @@ gurobi-double-logging

@@ -15,4 +15,9 @@ # Minimal makefile for Sphinx documentation

.PHONY: help Makefile
.PHONY: help clean Makefile
# Clean target that removes build directory
clean:
@echo "Removing build directory..."
rm -rf "$(BUILDDIR)"
# Catch-all target: route all unknown targets to Sphinx using the new

@@ -19,0 +24,0 @@ # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).

@@ -5,5 +5,34 @@ Release Notes

.. Upcoming Version
.. ----------------
.. * Improved constraint equality check in `linopy.testing.assert_conequal` to optionally be less strict
Version 0.5.8
--------------
* Replace pandas-based LP file writing with polars implementation for significantly improved performance on large models
* Consolidate "lp" and "lp-polars" io_api options - both now use the optimized polars backend
* Reduced memory usage and faster file I/O operations when exporting models to LP format
* Minor bugfix for multiplying variables with numpy type constants
* Harmonize dtypes before concatenation in LP file writing to avoid dtype mismatch errors. These errors occurred when models were created and stored in netCDF format on Windows machines and then loaded and solved on Linux machines.
* Add option to use polars series as constant input
* Fix expression merge to explicitly use an outer join when combining expressions with disjoint coordinates, for consistent behavior across xarray versions
* Add xpress postsolve if necessary
* Handle ImportError in xpress import
* Fetch and display OETC worker error logs
* Fix Windows permission error when dumping model file
* Performance improvements for the xpress solver using the C interface
Version 0.5.7
--------------
* Removed deprecated future warning for scalar get item operations
* Silenced version output from the HiGHS solver
* Mosek: Remove explicit use of Env, use global env instead
* Objectives can now be created from variables via `linopy.Model.add_objective`
* Added integration with OETC platform (refactored implementation)
* Add error message if highspy is not installed
* Fix MindOpt floating release issue
* Made the merge expressions function infer the class without triggering warnings
* Improved testing coverage
* Fix pypsa-eur environment path in CI
Version 0.5.6

@@ -14,5 +43,2 @@ --------------

* Gurobi: Pass dictionary as env argument `env={...}` through to gurobi env creation
* Added integration with OETC platform
* Mosek: Remove explicit use of Env, use global env instead
* Objectives can now be created from variables via `linopy.Model.add_objective`.

@@ -37,3 +63,3 @@ **Breaking Changes**

* Remove default highs log file when `log_fn=None` and `io_api="direct"`. This caused `log_file` in
`solver_options` to be ignored.
`solver_options` to be ignored.
* Fix the parsing of solutions returned by the CBC solver when setting a MIP duality

@@ -66,4 +92,4 @@ gap tolerance.

* Fix the multiplication of zero dimensional numpy arrays with linopy objects.
This is mainly affecting operations where single numerical items from pandas objects
are selected and used for multiplication.
This is mainly affecting operations where single numerical items from pandas objects
are selected and used for multiplication.

@@ -201,3 +227,3 @@ Version 0.5.1

* `nan`s in constants is now handled more consistently. These are ignored when in the addition of expressions (effectively filled by zero). In a future version, this might change to align the propagation of `nan`s with tools like numpy/pandas/xarray.
* ``nan`` s in constants is now handled more consistently. These are ignored when in the addition of expressions (effectively filled by zero). In a future version, this might change to align the propagation of ``nan`` s with tools like numpy/pandas/xarray.

@@ -204,0 +230,0 @@ * Up to now the `rhs` argument in the `add_constraints` function was not supporting an expression as an input type. This is now added.

Metadata-Version: 2.4
Name: linopy
Version: 0.5.7
Version: 0.5.8
Summary: Linear optimization with N-D labeled arrays in Python

@@ -43,2 +43,4 @@ Author-email: Fabian Hofmann <fabianmarikhofmann@gmail.com>

Requires-Dist: nbsphinx-link==1.3.0; extra == "docs"
Requires-Dist: docutils<0.21; extra == "docs"
Requires-Dist: numpy<2; extra == "docs"
Requires-Dist: gurobipy==11.0.2; extra == "docs"

@@ -45,0 +47,0 @@ Requires-Dist: ipykernel==6.29.5; extra == "docs"

@@ -39,2 +39,4 @@ scipy

nbsphinx-link==1.3.0
docutils<0.21
numpy<2
gurobipy==11.0.2

@@ -41,0 +43,0 @@ ipykernel==6.29.5

@@ -99,2 +99,3 @@ .git-blame-ignore-revs

doc/release_notes.rst
doc/solve-on-oetc.nblink
doc/solve-on-remote.nblink

@@ -114,2 +115,3 @@ doc/syntax.rst

examples/migrating-from-pyomo.ipynb
examples/solve-on-oetc.ipynb
examples/solve-on-remote.ipynb

@@ -116,0 +118,0 @@ examples/testing-framework.ipynb

@@ -257,3 +257,7 @@ #!/usr/bin/env python3

arr = numpy_to_dataarray(arr, coords=coords, dims=dims, **kwargs)
elif isinstance(arr, np.number | int | float | str | bool | list):
elif isinstance(arr, pl.Series):
arr = numpy_to_dataarray(arr.to_numpy(), coords=coords, dims=dims, **kwargs)
elif isinstance(arr, np.number):
arr = DataArray(float(arr), coords=coords, dims=dims, **kwargs)
elif isinstance(arr, int | float | str | bool | list):
arr = DataArray(arr, coords=coords, dims=dims, **kwargs)
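The branch order in the dispatch above matters: NumPy integer scalars are not subclasses of the builtin `int` on Python 3, so a plain `isinstance(arr, int | float | ...)` check would miss them, which is why `np.number` gets its own branch. A minimal check of that behavior, independent of linopy:

```python
import numpy as np

# All NumPy scalar types derive from np.number ...
assert isinstance(np.int64(3), np.number)
assert isinstance(np.float64(1.5), np.number)

# ... but np.int64 is NOT a subclass of the builtin int,
# so an `int | float` isinstance check alone would miss it.
assert not isinstance(np.int64(3), int)

# np.float64, by contrast, does subclass the builtin float.
assert isinstance(np.float64(1.5), float)

# Converting explicitly, as the np.number branch does, is always safe.
assert float(np.int64(3)) == 3.0
```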

@@ -271,2 +275,3 @@

DataArray,
pl.Series,
]

@@ -366,7 +371,7 @@ supported_types_str = ", ".join([t.__name__ for t in supported_types])

"""
Checks if the given DataFrame contains any null values and raises a ValueError if it does.
Checks if the given DataFrame contains any null or NaN values and raises a ValueError if it does.
Args:
----
df (pl.DataFrame): The DataFrame to check for null values.
df (pl.DataFrame): The DataFrame to check for null or NaN values.
name (str): The name of the data container being checked.

@@ -376,11 +381,24 @@

------
ValueError: If the DataFrame contains null values,
a ValueError is raised with a message indicating the name of the constraint and the fields containing null values.
ValueError: If the DataFrame contains null or NaN values,
a ValueError is raised with a message indicating the name of the constraint and the fields containing null/NaN values.
"""
# Check for null values in all columns
has_nulls = df.select(pl.col("*").is_null().any())
null_columns = [col for col in has_nulls.columns if has_nulls[col][0]]
if null_columns:
raise ValueError(f"{name} contains nan's in field(s) {null_columns}")
# Check for NaN values only in numeric columns (avoid enum/categorical columns)
numeric_cols = [
col for col, dtype in zip(df.columns, df.dtypes) if dtype.is_numeric()
]
nan_columns = []
if numeric_cols:
has_nans = df.select(pl.col(numeric_cols).is_nan().any())
nan_columns = [col for col in has_nans.columns if has_nans[col][0]]
invalid_columns = list(set(null_columns + nan_columns))
if invalid_columns:
raise ValueError(f"{name} contains nan's in field(s) {invalid_columns}")
def filter_nulls_polars(df: pl.DataFrame) -> pl.DataFrame:

@@ -387,0 +405,0 @@ """

@@ -634,3 +634,3 @@ """

df = pl.concat([short, long], how="diagonal").sort(["labels", "rhs"])
df = pl.concat([short, long], how="diagonal_relaxed").sort(["labels", "rhs"])
# delete subsequent non-null rhs (happens if all vars per label are -1)

@@ -637,0 +637,0 @@ is_non_null = df["rhs"].is_not_null()

@@ -11,4 +11,4 @@ #!/usr/bin/env python3

import time
from collections.abc import Callable, Iterable
from io import BufferedWriter, TextIOWrapper
from collections.abc import Callable
from io import BufferedWriter
from pathlib import Path

@@ -23,3 +23,2 @@ from tempfile import TemporaryDirectory

from numpy import ones_like, zeros_like
from pandas.core.frame import DataFrame
from scipy.sparse import tril, triu

@@ -47,12 +46,2 @@ from tqdm import tqdm

def handle_batch(batch: list[str], f: TextIOWrapper, batch_size: int) -> list[str]:
"""
Write out a batch to a file and reset the batch.
"""
if len(batch) >= batch_size:
f.writelines(batch) # write out a batch
batch = [] # reset batch
return batch
name_sanitizer = str.maketrans("-+*^[] ", "_______")

@@ -75,5 +64,6 @@

def get_printers(
def get_printers_scalar(
m: Model, explicit_coordinate_names: bool = False
) -> tuple[Callable, Callable]:
"""Get printer functions for scalar values (non-polars)."""
if explicit_coordinate_names:

@@ -103,5 +93,6 @@

def get_printers_polars(
def get_printers(
m: Model, explicit_coordinate_names: bool = False
) -> tuple[Callable, Callable]:
"""Get printer functions for polars dataframes."""
if explicit_coordinate_names:

@@ -119,3 +110,3 @@

def print_variable_polars(series: pl.Series) -> tuple[pl.Expr, pl.Series]:
def print_variable_series(series: pl.Series) -> tuple[pl.Expr, pl.Series]:
return pl.lit(" "), series.map_elements(

@@ -125,3 +116,3 @@ print_variable, return_dtype=pl.String

def print_constraint_polars(series: pl.Series) -> tuple[pl.Expr, pl.Series]:
def print_constraint_series(series: pl.Series) -> tuple[pl.Expr, pl.Series]:
return pl.lit(None), series.map_elements(

@@ -131,380 +122,16 @@ print_constraint, return_dtype=pl.String

return print_variable_polars, print_constraint_polars
return print_variable_series, print_constraint_series
else:
def print_variable_polars(series: pl.Series) -> tuple[pl.Expr, pl.Series]:
def print_variable_series(series: pl.Series) -> tuple[pl.Expr, pl.Series]:
return pl.lit(" x").alias("x"), series.cast(pl.String)
def print_constraint_polars(series: pl.Series) -> tuple[pl.Expr, pl.Series]:
def print_constraint_series(series: pl.Series) -> tuple[pl.Expr, pl.Series]:
return pl.lit("c").alias("c"), series.cast(pl.String)
return print_variable_polars, print_constraint_polars
return print_variable_series, print_constraint_series
def objective_write_linear_terms(
df: DataFrame,
f: TextIOWrapper,
batch: list[str],
batch_size: int,
print_variable: Callable,
) -> list[str]:
"""
Write the linear terms of the objective to a file.
"""
coeffs = df.coeffs.values
vars = df.vars.values
for idx in range(len(coeffs)):
coeff = coeffs[idx]
var = vars[idx]
name = print_variable(var)
batch.append(f"{coeff:+.12g} {name}\n")
batch = handle_batch(batch, f, batch_size)
return batch
def objective_write_quad_terms(
quadratic: DataFrame,
f: TextIOWrapper,
batch: list[str],
batch_size: int,
print_variable: Callable,
) -> list[str]:
"""
Write the cross terms of the objective to a file.
"""
coeffs = quadratic.coeffs.values
vars1 = quadratic.vars1.values
vars2 = quadratic.vars2.values
for idx in range(len(coeffs)):
coeff = coeffs[idx]
var1 = vars1[idx]
var2 = vars2[idx]
name1 = print_variable(var1)
name2 = print_variable(var2)
batch.append(f"{coeff:+.12g} {name1} * {name2}\n")
batch = handle_batch(batch, f, batch_size)
return batch
def objective_to_file(
m: Model,
f: TextIOWrapper,
progress: bool = False,
batch_size: int = 10000,
explicit_coordinate_names: bool = False,
) -> None:
"""
Write out the objective of a model to a lp file.
"""
if progress:
logger.info("Writing objective.")
print_variable, _ = get_printers(
m, explicit_coordinate_names=explicit_coordinate_names
)
sense = m.objective.sense
f.write(f"{sense}\n\nobj:\n\n")
df = m.objective.flat
if np.isnan(df.coeffs).any():
logger.warning(
"Objective coefficients are missing (nan) where variables are not (-1)."
)
if m.is_linear:
batch = objective_write_linear_terms(df, f, [], batch_size, print_variable)
elif m.is_quadratic:
is_linear = (df.vars1 == -1) | (df.vars2 == -1)
linear = df[is_linear]
linear = linear.assign(
vars=linear.vars1.where(linear.vars1 != -1, linear.vars2)
)
batch = objective_write_linear_terms(linear, f, [], batch_size, print_variable)
if not is_linear.all():
batch.append("+ [\n")
quadratic = df[~is_linear]
quadratic = quadratic.assign(coeffs=2 * quadratic.coeffs)
batch = objective_write_quad_terms(
quadratic, f, batch, batch_size, print_variable
)
batch.append("] / 2\n")
if batch: # write the remaining lines
f.writelines(batch)
def constraints_to_file(
m: Model,
f: TextIOWrapper,
progress: bool = False,
batch_size: int = 50_000,
slice_size: int = 100_000,
explicit_coordinate_names: bool = False,
) -> None:
if not len(m.constraints):
return
print_variable, print_constraint = get_printers(
m, explicit_coordinate_names=explicit_coordinate_names
)
f.write("\n\ns.t.\n\n")
names: Iterable = m.constraints
if progress:
names = tqdm(
list(names),
desc="Writing constraints.",
colour=TQDM_COLOR,
)
batch = []
for name in names:
con = m.constraints[name]
for con_slice in con.iterate_slices(slice_size):
df = con_slice.flat
labels = df.labels.values
vars = df.vars.values
coeffs = df.coeffs.values
rhs = df.rhs.values
sign = df.sign.values
len_df = len(df) # compute length once
if not len_df:
continue
# write out the start to enable a fast loop afterwards
idx = 0
label = labels[idx]
coeff = coeffs[idx]
var = vars[idx]
name = print_variable(var)
cname = print_constraint(label)
batch.append(f"{cname}:\n{coeff:+.12g} {name}\n")
prev_label = label
prev_sign = sign[idx]
prev_rhs = rhs[idx]
for idx in range(1, len_df):
label = labels[idx]
coeff = coeffs[idx]
var = vars[idx]
name = print_variable(var)
if label != prev_label:
cname = print_constraint(label)
batch.append(
f"{prev_sign} {prev_rhs:+.12g}\n\n{cname}:\n{coeff:+.12g} {name}\n"
)
prev_sign = sign[idx]
prev_rhs = rhs[idx]
else:
batch.append(f"{coeff:+.12g} {name}\n")
batch = handle_batch(batch, f, batch_size)
prev_label = label
batch.append(f"{prev_sign} {prev_rhs:+.12g}\n")
if batch: # write the remaining lines
f.writelines(batch)
def bounds_to_file(
m: Model,
f: TextIOWrapper,
progress: bool = False,
batch_size: int = 10000,
slice_size: int = 100_000,
explicit_coordinate_names: bool = False,
) -> None:
"""
Write out variables of a model to a lp file.
"""
names: Iterable = list(m.variables.continuous) + list(m.variables.integers)
if not len(list(names)):
return
print_variable, _ = get_printers(
m, explicit_coordinate_names=explicit_coordinate_names
)
f.write("\n\nbounds\n\n")
if progress:
names = tqdm(
list(names),
desc="Writing continuous variables.",
colour=TQDM_COLOR,
)
batch = [] # to store batch of lines
for name in names:
var = m.variables[name]
for var_slice in var.iterate_slices(slice_size):
df = var_slice.flat
labels = df.labels.values
lowers = df.lower.values
uppers = df.upper.values
for idx in range(len(df)):
label = labels[idx]
lower = lowers[idx]
upper = uppers[idx]
name = print_variable(label)
batch.append(f"{lower:+.12g} <= {name} <= {upper:+.12g}\n")
batch = handle_batch(batch, f, batch_size)
if batch: # write the remaining lines
f.writelines(batch)
def binaries_to_file(
m: Model,
f: TextIOWrapper,
progress: bool = False,
batch_size: int = 1000,
slice_size: int = 100_000,
explicit_coordinate_names: bool = False,
) -> None:
"""
Write out binaries of a model to a lp file.
"""
names: Iterable = m.variables.binaries
if not len(list(names)):
return
print_variable, _ = get_printers(
m, explicit_coordinate_names=explicit_coordinate_names
)
f.write("\n\nbinary\n\n")
if progress:
names = tqdm(
list(names),
desc="Writing binary variables.",
colour=TQDM_COLOR,
)
batch = [] # to store batch of lines
for name in names:
var = m.variables[name]
for var_slice in var.iterate_slices(slice_size):
df = var_slice.flat
for label in df.labels.values:
name = print_variable(label)
batch.append(f"{name}\n")
batch = handle_batch(batch, f, batch_size)
if batch: # write the remaining lines
f.writelines(batch)
def integers_to_file(
m: Model,
f: TextIOWrapper,
progress: bool = False,
batch_size: int = 1000,
slice_size: int = 100_000,
integer_label: str = "general",
explicit_coordinate_names: bool = False,
) -> None:
"""
Write out integers of a model to a lp file.
"""
names: Iterable = m.variables.integers
if not len(list(names)):
return
print_variable, _ = get_printers(
m, explicit_coordinate_names=explicit_coordinate_names
)
f.write(f"\n\n{integer_label}\n\n")
if progress:
names = tqdm(
list(names),
desc="Writing integer variables.",
colour=TQDM_COLOR,
)
batch = [] # to store batch of lines
for name in names:
var = m.variables[name]
for var_slice in var.iterate_slices(slice_size):
df = var_slice.flat
for label in df.labels.values:
name = print_variable(label)
batch.append(f"{name}\n")
batch = handle_batch(batch, f, batch_size)
if batch: # write the remaining lines
f.writelines(batch)
def to_lp_file(
m: Model,
fn: Path,
integer_label: str,
slice_size: int = 10_000_000,
progress: bool = True,
explicit_coordinate_names: bool = False,
) -> None:
batch_size = 5000
with open(fn, mode="w") as f:
start = time.time()
if isinstance(f, int):
raise ValueError("File not found.")
objective_to_file(
m, f, progress=progress, explicit_coordinate_names=explicit_coordinate_names
)
constraints_to_file(
m,
f=f,
progress=progress,
batch_size=batch_size,
slice_size=slice_size,
explicit_coordinate_names=explicit_coordinate_names,
)
bounds_to_file(
m,
f=f,
progress=progress,
batch_size=batch_size,
slice_size=slice_size,
explicit_coordinate_names=explicit_coordinate_names,
)
binaries_to_file(
m,
f=f,
progress=progress,
batch_size=batch_size,
slice_size=slice_size,
explicit_coordinate_names=explicit_coordinate_names,
)
integers_to_file(
m,
integer_label=integer_label,
f=f,
progress=progress,
batch_size=batch_size,
slice_size=slice_size,
explicit_coordinate_names=explicit_coordinate_names,
)
f.write("end\n")
logger.info(f" Writing time: {round(time.time() - start, 2)}s")
def objective_write_linear_terms_polars(
f: BufferedWriter, df: pl.DataFrame, print_variable: Callable

@@ -523,3 +150,3 @@ ) -> None:

def objective_write_quadratic_terms_polars(
def objective_write_quadratic_terms(
f: BufferedWriter, df: pl.DataFrame, print_variable: Callable

@@ -542,3 +169,3 @@ ) -> None:

def objective_to_file_polars(
def objective_to_file(
m: Model,

@@ -555,3 +182,3 @@ f: BufferedWriter,

print_variable, _ = get_printers_polars(
print_variable, _ = get_printers(
m, explicit_coordinate_names=explicit_coordinate_names

@@ -565,3 +192,3 @@ )

if m.is_linear:
objective_write_linear_terms_polars(f, df, print_variable)
objective_write_linear_terms(f, df, print_variable)

@@ -576,9 +203,9 @@ elif m.is_quadratic:

)
objective_write_linear_terms_polars(f, linear_terms, print_variable)
objective_write_linear_terms(f, linear_terms, print_variable)
quads = df.filter(pl.col("vars1").ne(-1) & pl.col("vars2").ne(-1))
objective_write_quadratic_terms_polars(f, quads, print_variable)
objective_write_quadratic_terms(f, quads, print_variable)
def bounds_to_file_polars(
def bounds_to_file(
m: Model,

@@ -597,3 +224,3 @@ f: BufferedWriter,

print_variable, _ = get_printers_polars(
print_variable, _ = get_printers(
m, explicit_coordinate_names=explicit_coordinate_names

@@ -632,3 +259,3 @@ )

def binaries_to_file_polars(
def binaries_to_file(
m: Model,

@@ -647,3 +274,3 @@ f: BufferedWriter,

print_variable, _ = get_printers_polars(
print_variable, _ = get_printers(
m, explicit_coordinate_names=explicit_coordinate_names

@@ -676,3 +303,3 @@ )

def integers_to_file_polars(
def integers_to_file(
m: Model,

@@ -692,3 +319,3 @@ f: BufferedWriter,

print_variable, _ = get_printers_polars(
print_variable, _ = get_printers(
m, explicit_coordinate_names=explicit_coordinate_names

@@ -721,3 +348,3 @@ )

def constraints_to_file_polars(
def constraints_to_file(
m: Model,

@@ -733,3 +360,3 @@ f: BufferedWriter,

print_variable, print_constraint = get_printers_polars(
print_variable, print_constraint = get_printers(
m, explicit_coordinate_names=explicit_coordinate_names

@@ -754,17 +381,44 @@ )

# df = df.lazy()
# filter out repeated label values
df = df.with_columns(
pl.when(pl.col("labels").is_first_distinct())
.then(pl.col("labels"))
.otherwise(pl.lit(None))
.alias("labels")
if df.height == 0:
continue
# Ensure each constraint has both coefficient and RHS terms
analysis = df.group_by("labels").agg(
[
pl.col("coeffs").is_not_null().sum().alias("coeff_rows"),
pl.col("sign").is_not_null().sum().alias("rhs_rows"),
]
)
row_labels = print_constraint(pl.col("labels"))
valid = analysis.filter(
(pl.col("coeff_rows") > 0) & (pl.col("rhs_rows") > 0)
)
if valid.height == 0:
continue
# Keep only constraints that have both parts
df = df.join(valid.select("labels"), on="labels", how="inner")
# Sort by labels and mark first/last occurrences
df = df.sort("labels").with_columns(
[
pl.when(pl.col("labels").is_first_distinct())
.then(pl.col("labels"))
.otherwise(pl.lit(None))
.alias("labels_first"),
(pl.col("labels") != pl.col("labels").shift(-1))
.fill_null(True)
.alias("is_last_in_group"),
]
)
row_labels = print_constraint(pl.col("labels_first"))
col_labels = print_variable(pl.col("vars"))
columns = [
pl.when(pl.col("labels").is_not_null()).then(row_labels[0]),
pl.when(pl.col("labels").is_not_null()).then(row_labels[1]),
pl.when(pl.col("labels").is_not_null()).then(pl.lit(":\n")).alias(":"),
pl.when(pl.col("labels_first").is_not_null()).then(row_labels[0]),
pl.when(pl.col("labels_first").is_not_null()).then(row_labels[1]),
pl.when(pl.col("labels_first").is_not_null())
.then(pl.lit(":\n"))
.alias(":"),
pl.when(pl.col("coeffs") >= 0).then(pl.lit("+")),

@@ -774,5 +428,5 @@ pl.col("coeffs").cast(pl.String),

pl.when(pl.col("vars").is_not_null()).then(col_labels[1]),
"sign",
pl.lit(" "),
pl.col("rhs").cast(pl.String),
pl.when(pl.col("is_last_in_group")).then(pl.col("sign")),
pl.when(pl.col("is_last_in_group")).then(pl.lit(" ")),
pl.when(pl.col("is_last_in_group")).then(pl.col("rhs").cast(pl.String)),
]

@@ -792,3 +446,3 @@

def to_lp_file_polars(
def to_lp_file(
m: Model,

@@ -804,6 +458,6 @@ fn: Path,

objective_to_file_polars(
objective_to_file(
m, f, progress=progress, explicit_coordinate_names=explicit_coordinate_names
)
constraints_to_file_polars(
constraints_to_file(
m,

@@ -815,3 +469,3 @@ f=f,

)
bounds_to_file_polars(
bounds_to_file(
m,

@@ -823,3 +477,3 @@ f=f,

)
binaries_to_file_polars(
binaries_to_file(
m,

@@ -831,3 +485,3 @@ f=f,

)
integers_to_file_polars(
integers_to_file(
m,

@@ -870,3 +524,3 @@ integer_label=integer_label,

if io_api == "lp":
if io_api == "lp" or io_api == "lp-polars":
to_lp_file(

@@ -880,11 +534,2 @@ m,

)
elif io_api == "lp-polars":
to_lp_file_polars(
m,
fn,
integer_label,
slice_size=slice_size,
progress=progress,
explicit_coordinate_names=explicit_coordinate_names,
)

@@ -929,3 +574,3 @@ elif io_api == "mps":

print_variable, print_constraint = get_printers(
print_variable, print_constraint = get_printers_scalar(
m, explicit_coordinate_names=explicit_coordinate_names

@@ -1043,3 +688,3 @@ )

print_variable, print_constraint = get_printers(
print_variable, print_constraint = get_printers_scalar(
m, explicit_coordinate_names=explicit_coordinate_names

@@ -1095,3 +740,3 @@ )

print_variable, print_constraint = get_printers(
print_variable, print_constraint = get_printers_scalar(
m, explicit_coordinate_names=explicit_coordinate_names

@@ -1098,0 +743,0 @@ )

@@ -62,3 +62,8 @@ """

from linopy.remote import OetcHandler, RemoteHandler
from linopy.solvers import IO_APIS, available_solvers, quadratic_solvers
from linopy.solvers import (
IO_APIS,
NO_SOLUTION_FILE_SOLVERS,
available_solvers,
quadratic_solvers,
)
from linopy.types import (

@@ -917,2 +922,3 @@ ConstantLike,

linear expression is built with the function LinearExpression.from_tuples.
* coefficients : int/float/array_like

@@ -1195,3 +1201,7 @@ The coefficient(s) in the term, if the coefficients array

if solution_fn is None:
solution_fn = self.get_solution_file()
if solver_name in NO_SOLUTION_FILE_SOLVERS and not keep_files:
# these (solver, keep_files=False) combos do not need a solution file
solution_fn = None
else:
solution_fn = self.get_solution_file()

@@ -1198,0 +1208,0 @@ if sanitize_zeros:

@@ -271,2 +271,43 @@ import base64

def _get_job_logs(self, job_uuid: str) -> str:
"""
Fetch logs for a compute job.
Args:
job_uuid: UUID of the job to fetch logs for
Returns:
str: The job logs content as a string
Raises:
Exception: If fetching logs fails
"""
try:
logger.info(f"OETC - Fetching logs for job {job_uuid}...")
response = requests.get(
f"{self.settings.orchestrator_server_url}/compute-job/{job_uuid}/get-logs",
headers={
"Authorization": f"{self.jwt.token_type} {self.jwt.token}",
"Content-Type": "application/json",
},
timeout=30,
)
response.raise_for_status()
logs_data = response.json()
# Extract content from the response structure
logs_content = logs_data.get("content", "")
logger.info(f"OETC - Successfully fetched logs for job {job_uuid}")
return logs_content
except RequestException as e:
logger.warning(f"OETC - Failed to fetch logs for job {job_uuid}: {e}")
return f"[Unable to fetch logs: {e}]"
except Exception as e:
logger.warning(f"OETC - Error fetching logs for job {job_uuid}: {e}")
return f"[Error fetching logs: {e}]"
def wait_and_get_job_data(

@@ -346,3 +387,10 @@ self,

elif job_result.status == "RUNTIME_ERROR":
error_msg = f"Job failed during execution (status: {job_result.status}). Please check the OETC logs for details."
# Fetch and display logs
logs = self._get_job_logs(job_uuid)
logger.error(f"OETC - Job {job_uuid} logs:\n{logs}")
error_msg = (
f"Job failed during execution (status: {job_result.status}).\n"
f"Logs:\n{logs}"
)
logger.error(f"OETC Error: {error_msg}")

@@ -501,2 +549,3 @@ raise Exception(error_msg)

with tempfile.NamedTemporaryFile(prefix="linopy-", suffix=".nc") as fn:
fn.file.close()
model.to_netcdf(fn.name)

@@ -503,0 +552,0 @@ input_file_name = self._upload_file_to_gcp(fn.name)

@@ -8,3 +8,3 @@ from __future__ import annotations

import numpy
import numpy.typing
import polars as pl
from pandas import DataFrame, Index, Series

@@ -41,2 +41,3 @@ from xarray import DataArray

DataFrame,
pl.Series,
]

@@ -43,0 +44,0 @@ SignLike = Union[str, numpy.ndarray, DataArray, Series, DataFrame] # noqa: UP007

@@ -290,8 +290,3 @@ """

self,
coefficient: int
| float
| pd.Series
| pd.DataFrame
| np.ndarray
| DataArray = 1,
coefficient: ConstantLike = 1,
) -> expressions.LinearExpression:

@@ -298,0 +293,0 @@ """

@@ -31,5 +31,5 @@ # file generated by setuptools-scm

__version__ = version = '0.5.7'
__version_tuple__ = version_tuple = (0, 5, 7)
__version__ = version = '0.5.8'
__version_tuple__ = version_tuple = (0, 5, 8)
__commit_id__ = commit_id = 'g397152d70'
__commit_id__ = commit_id = 'g6d1302e67'
Metadata-Version: 2.4
Name: linopy
Version: 0.5.7
Version: 0.5.8
Summary: Linear optimization with N-D labeled arrays in Python

@@ -43,2 +43,4 @@ Author-email: Fabian Hofmann <fabianmarikhofmann@gmail.com>

Requires-Dist: nbsphinx-link==1.3.0; extra == "docs"
Requires-Dist: docutils<0.21; extra == "docs"
Requires-Dist: numpy<2; extra == "docs"
Requires-Dist: gurobipy==11.0.2; extra == "docs"

@@ -45,0 +47,0 @@ Requires-Dist: ipykernel==6.29.5; extra == "docs"

@@ -56,2 +56,4 @@ [build-system]

"nbsphinx-link==1.3.0",
"docutils<0.21",
"numpy<2",
"gurobipy==11.0.2",

@@ -58,0 +60,0 @@ "ipykernel==6.29.5",

@@ -10,2 +10,3 @@ #!/usr/bin/env python3

import pandas as pd
import polars as pl
import pytest

@@ -124,2 +125,12 @@ import xarray as xr

def test_as_dataarray_with_pl_series_dims_default() -> None:
target_dim = "dim_0"
target_index = [0, 1, 2]
s = pl.Series([1, 2, 3])
da = as_dataarray(s)
assert isinstance(da, DataArray)
assert da.dims == (target_dim,)
assert list(da.coords[target_dim].values) == target_index
def test_as_dataarray_dataframe_dims_default() -> None:

@@ -374,2 +385,10 @@ target_dims = ("dim_0", "dim_1")

def test_as_dataarray_with_np_number() -> None:
num = np.float64(1)
da = as_dataarray(num, dims=["dim1"], coords=[["a"]])
assert isinstance(da, DataArray)
assert da.dims == ("dim1",)
assert list(da.coords["dim1"].values) == ["a"]
def test_as_dataarray_with_number_default_dims_coords() -> None:

@@ -376,0 +395,0 @@ num = 1

@@ -114,2 +114,9 @@ #!/usr/bin/env python3

def test_linexpr_from_constant_pl_series(m: Model) -> None:
const = pl.Series([1, 2])
expr = LinearExpression(const, m)
assert (expr.const == const.to_numpy()).all()
assert expr.nterm == 0
def test_linexpr_from_constant_pandas_series(m: Model) -> None:

@@ -116,0 +123,0 @@ const = pd.Series([1, 2], index=pd.RangeIndex(2, name="dim_0"))


Sorry, the diff of this file is too big to display