Discord | Documentation | User Guide | Want to Contribute?
pip install polars-ds
PDS is a modern data science package that provides statistics, ML metrics, string similarity, linear regression, and data-transform pipelines as native Polars expressions. It stands on the shoulders of the great Polars dataframe. You can see more in the examples; here are some highlights!
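The snippets below assume a dataframe with the referenced columns already exists. Here is a minimal setup sketch for the first example (the column names come from the snippet; the random data is just an illustration):
import numpy as np
import polars as pl

rng = np.random.default_rng(42)
n = 10_000
df = pl.DataFrame({
    "segments": rng.choice(["a", "b"], size=n),  # group labels
    "actual": rng.integers(0, 2, size=n),        # binary ground truth
    "predicted": rng.random(size=n),             # predicted probabilities
})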
import polars as pl
import polars_ds as pds
# Parallel evaluation of multiple ML metrics on different segments of data
df.lazy().group_by("segments").agg(
# any other metrics you want in here
pds.query_roc_auc("actual", "predicted").alias("roc_auc"),
pds.query_log_loss("actual", "predicted").alias("log_loss"),
).collect()
shape: (2, 3)
┌──────────┬──────────┬──────────┐
│ segments ┆ roc_auc  ┆ log_loss │
│ ---      ┆ ---      ┆ ---      │
│ str      ┆ f64      ┆ f64      │
╞══════════╪══════════╪══════════╡
│ a        ┆ 0.497745 ┆ 1.006438 │
│ b        ┆ 0.498801 ┆ 0.997226 │
└──────────┴──────────┴──────────┘
import polars_ds as pds
from polars_ds.modeling.transforms import polynomial_features
# If you want the underlying computation to be done in f32, set pds.config.LIN_REG_EXPR_F64 = False
df.select(
pds.lin_reg_report(
*(
["x1", "x2", "x3"] +
polynomial_features(["x1", "x2", "x3"], degree = 2, interaction_only=True)
)
, target = "target"
, add_bias = False
).alias("result")
).unnest("result")
┌──────────┬───────────┬──────────┬─────────────┬───────┬───────────┬───────────┬──────────┬──────────┐
│ features ┆ beta      ┆ std_err  ┆ t           ┆ p>|t| ┆ 0.025     ┆ 0.975     ┆ r2       ┆ adj_r2   │
│ ---      ┆ ---       ┆ ---      ┆ ---         ┆ ---   ┆ ---       ┆ ---       ┆ ---      ┆ ---      │
│ str      ┆ f64       ┆ f64      ┆ f64         ┆ f64   ┆ f64       ┆ f64       ┆ f64      ┆ f64      │
╞══════════╪═══════════╪══════════╪═════════════╪═══════╪═══════════╪═══════════╪══════════╪══════════╡
│ x1       ┆ 0.26332   ┆ 0.000315 ┆ 835.686778  ┆ 0.0   ┆ 0.262703  ┆ 0.263938  ┆ 0.971087 ┆ 0.971085 │
│ x2       ┆ 0.413824  ┆ 0.000311 ┆ 1331.988332 ┆ 0.0   ┆ 0.413216  ┆ 0.414433  ┆ 0.971087 ┆ 0.971085 │
│ x3       ┆ 0.113688  ┆ 0.000315 ┆ 361.29924   ┆ 0.0   ┆ 0.113072  ┆ 0.114305  ┆ 0.971087 ┆ 0.971085 │
│ x1*x2    ┆ -0.097272 ┆ 0.000543 ┆ -179.037776 ┆ 0.0   ┆ -0.098337 ┆ -0.096207 ┆ 0.971087 ┆ 0.971085 │
│ x1*x3    ┆ -0.097266 ┆ 0.000542 ┆ -179.448632 ┆ 0.0   ┆ -0.098329 ┆ -0.096204 ┆ 0.971087 ┆ 0.971085 │
│ x2*x3    ┆ -0.097987 ┆ 0.000542 ┆ -180.75796  ┆ 0.0   ┆ -0.099049 ┆ -0.096924 ┆ 0.971087 ┆ 0.971085 │
└──────────┴───────────┴──────────┴─────────────┴───────┴───────────┴───────────┴──────────┴──────────┘
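If you only need coefficients or in-sample predictions rather than the full report, pds.lin_reg can be used as an expression as well. A minimal sketch, assuming the same keyword arguments that appear in the compat example further down (target, return_pred):
import polars_ds as pds

df.select(
    pds.lin_reg(
        "x1", "x2", "x3",
        target = "target",
        return_pred = True  # return fitted predictions instead of coefficients
    ).alias("pred")
)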
See SKLEARN_COMPATIBILITY for more details.
import polars as pl
import polars.selectors as cs
from polars_ds.pipeline import Pipeline, Blueprint
bp = (
Blueprint(df, name = "example", target = "approved", lowercase=True) # lowercase=True optionally lowercases the column names
.filter(
"city_category is not null" # or equivalently, you can do: pl.col("city_category").is_not_null()
)
.linear_impute(features = ["var1", "existing_emi"], target = "loan_period")
.impute(["existing_emi"], method = "median")
.append_expr( # generate some features
pl.col("existing_emi").log1p().alias("existing_emi_log1p"),
pl.col("loan_amount").log1p().alias("loan_amount_log1p"),
pl.col("loan_amount").clip(lower_bound = 0, upper_bound = 1000).alias("loan_amount_clipped"),
pl.col("loan_amount").sqrt().alias("loan_amount_sqrt"),
pl.col("loan_amount").shift(-1).alias("loan_amount_lag_1") # any kind of lag transform
)
.scale( # target is numerical, but will be excluded automatically because bp is initialized with a target
cs.numeric().exclude(["var1", "existing_emi_log1p"]), method = "standard"
) # Scale the columns up to this point. The columns below won't be scaled
.append_expr(
# Add missing flags
pl.col("employer_category1").is_null().cast(pl.UInt8).alias("employer_category1_is_missing")
)
.one_hot_encode("gender", drop_first=True)
.woe_encode("city_category") # No need to specify target because we initialized bp with a target
.target_encode("employer_category1", min_samples_leaf = 20, smoothing = 10.0) # same as above
)
print(bp)
pipe: Pipeline = bp.materialize()
# Check out the result in our example notebooks! (examples/pipeline.ipynb)
df_transformed = pipe.transform(df)
df_transformed.head()
Get all neighbors within radius r, call them best friends, and count them. Due to implementation limitations, this currently doesn't preserve the index, and it is not fast when k or the dimension of the data is large.
df.select(
pl.col("id"),
pds.query_radius_ptwise(
pl.col("var1"), pl.col("var2"), pl.col("var3"), # Columns used as the coordinates in 3d space
index = pl.col("id"),
r = 0.1,
dist = "sql2", # squared l2
parallel = True
).alias("best friends"),
).with_columns( # -1 to remove the point itself
(pl.col("best friends").list.len() - 1).alias("best friends count")
).head()
shape: (5, 3)
┌─────┬───────────────────┬────────────────────┐
│ id  ┆ best friends      ┆ best friends count │
│ --- ┆ ---               ┆ ---                │
│ u32 ┆ list[u32]         ┆ u32                │
╞═════╪═══════════════════╪════════════════════╡
│ 0   ┆ [0, 811, … 1435]  ┆ 152                │
│ 1   ┆ [1, 953, … 1723]  ┆ 159                │
│ 2   ┆ [2, 355, … 835]   ┆ 243                │
│ 3   ┆ [3, 102, … 1129]  ┆ 110                │
│ 4   ┆ [4, 1280, … 1543] ┆ 226                │
└─────┴───────────────────┴────────────────────┘
df.select( # Compares column "word" to the string in pl.lit(); column vs column comparison is also supported
pds.str_leven("word", pl.lit("asasasa"), return_sim=True).alias("Levenshtein"),
pds.str_osa("word", pl.lit("apples"), return_sim=True).alias("Optimal String Alignment"),
pds.str_jw("word", pl.lit("apples")).alias("Jaro-Winkler"),
)
df.group_by("market_id").agg(
pds.ttest_ind("var1", "var2", equal_var=False).alias("t-test"),
pds.chi2("category_1", "category_2").alias("chi2-test"),
pds.f_test("var1", group = "category_1").alias("f-test")
)
shape: (3, 4)
┌───────────┬──────────────────────┬──────────────────────┬─────────────────────┐
│ market_id ┆ t-test               ┆ chi2-test            ┆ f-test              │
│ ---       ┆ ---                  ┆ ---                  ┆ ---                 │
│ i64       ┆ struct[2]            ┆ struct[2]            ┆ struct[2]           │
╞═══════════╪══════════════════════╪══════════════════════╪═════════════════════╡
│ 0         ┆ {2.072749,0.038272}  ┆ {33.487634,0.588673} ┆ {0.312367,0.869842} │
│ 1         ┆ {0.469946,0.638424}  ┆ {42.672477,0.206119} ┆ {2.148937,0.072536} │
│ 2         ┆ {-1.175325,0.239949} ┆ {28.55723,0.806758}  ┆ {0.506678,0.730849} │
└───────────┴──────────────────────┴──────────────────────┴─────────────────────┘
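Each test returns a struct[2] holding the test statistic and the p-value. As with the regression report above, you can flatten it with unnest; a short sketch for a single test (the struct's field names are whatever PDS assigns, so inspect the schema first):
df.group_by("market_id").agg(
    pds.ttest_ind("var1", "var2", equal_var=False).alias("t-test")
).unnest("t-test")  # splits the struct into its statistic / p-value fields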
Under some mild assumptions (e.g. the columns implement to_numpy()), PDS works with other eager dataframes. For example, with Pandas:
from polars_ds.compat import compat as pds2
df_pd["linear_regression_result"] = pds2.lin_reg(
df_pd["x1"], df_pd["x2"], df_pd["x3"],
target = df_pd["y"],
return_pred = True
)
df_pd
The magic here is the compat module and the fact that most eager dataframes implement the array protocol.
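In other words, compat only needs each column to be convertible to a NumPy array. A minimal sketch of the duck typing involved (our own illustration, not PDS internals):
import numpy as np

class FakeColumn:
    # Anything exposing __array__ can be consumed like a NumPy array.
    def __init__(self, data):
        self._data = list(data)

    def __array__(self, dtype=None):
        return np.asarray(self._data, dtype=dtype)

col = FakeColumn([1.0, 2.0, 3.0])
print(np.asarray(col).mean())  # 2.0 -- works via the array protocol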
Other common numerical functions are available as well, such as pds.convolve, pds.query_r2, pds.principal_components, etc. See our docs for more information.
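For instance, a quick sketch with pds.query_r2 (the signature here is an assumption; consult the docs for the exact parameters):
import polars_ds as pds

df.select(
    # assumed signature: (y_true, y_pred), mirroring the metric queries above
    pds.query_r2("actual", "predicted").alias("r2")
)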
To get started, simply import the package:
import polars_ds as pds
To make full use of the Diagnosis module, do
pip install "polars_ds[plot]"
Feel free to take a look at our benchmark notebook!
Generally speaking, the more expressions you evaluate simultaneously, the larger the speed advantage of Polars + PDS over Pandas + (SciPy / scikit-learn / NumPy). And the more CPU cores your machine has, the bigger that difference becomes in favor of Polars + PDS.
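To see this for yourself, here is a rough timing sketch of our own (not from the benchmark notebook), comparing one grouped metric computed via Polars + PDS against a Pandas + scikit-learn loop:
import time
import polars as pl
import polars_ds as pds
from sklearn.metrics import roc_auc_score

df_pd = df.to_pandas()

t0 = time.perf_counter()
out = df.lazy().group_by("segments").agg(
    pds.query_roc_auc("actual", "predicted").alias("roc_auc")
).collect()
t1 = time.perf_counter()

# the equivalent Pandas + scikit-learn loop, one group at a time
scores = {
    seg: roc_auc_score(g["actual"], g["predicted"])
    for seg, g in df_pd.groupby("segments")
}
t2 = time.perf_counter()

print(f"Polars + PDS: {t1 - t0:.4f}s | Pandas + sklearn: {t2 - t1:.4f}s")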
Currently in Beta. Feel free to submit feature requests in the issues section of the repo. This library will depend only on Python Polars (for most of its core) and will try to be as stable as possible for polars>=1. Exceptions will be made when a Polars update forces changes in the plugins.
This package is not tested with Polars streaming mode and is not designed to work with data so big that it has to be streamed. This concerns the plugin expressions such as pds.lin_reg, etc. By the same token, the large-index version of Polars is not intentionally supported at this point. However, the non-plugin Polars utilities provided by the package should work with the streaming engine, since they are native Polars code.
The guide here is not specific to LTS CPU and can be used for general builds from source.
The best advice for LTS CPU is to compile the package yourself. First clone the repo and make sure Rust is installed on the system. Create a Python virtual environment and install maturin in it. Next, set the RUSTFLAGS environment variable. The official polars-lts-cpu features are the following:
RUSTFLAGS=-C target-feature=+sse3,+ssse3,+sse4.1,+sse4.2,+popcnt,+cmpxchg16b
If you simply want to compile from source, you may set the target CPU to native, which auto-detects your CPU's features.
RUSTFLAGS=-C target-cpu=native
If you are compiling for LTS CPU, then in pyproject.toml, update the polars dependency to polars-lts-cpu:
polars >= 1.4.0 # polars-lts-cpu >= 1.4.0
Lastly, run
maturin develop --release
If you want to test the build locally, you may run
# pip install -r requirements-test.txt
pytest tests/test_*
If you see the following error in pytest, it means setuptools is not installed, and you may ignore it: pkg_resources is just a legacy package that ships with setuptools.
tests/test_many.py::test_xi_corr - ModuleNotFoundError: No module named 'pkg_resources'
You can then publish it to your private PyPI server, or just use it locally.