IBL (Inductive-bias Learning) is a new machine learning method that uses an LLM to infer the structure of a model directly from a dataset and output it as Python code. The learned model (a "code model") can then be used like any other machine learning model to make predictions on new data. In this repository, you can try different IBL learning methods. (Currently, only binary classification with simple methods is available.)
Use the link below to try it out immediately on Google Colab.
```bash
pip install iblm
```
```python
import iblm
```
Setting
OpenAI
```python
import os

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

ibl = iblm.IBLModel(api_type="openai", model_name="gpt-4-0125-preview", objective="binary")
```
Azure OpenAI
```python
os.environ["AZURE_OPENAI_KEY"] = "YOUR_API_KEY"
os.environ["AZURE_OPENAI_ENDPOINT"] = "xxx"
os.environ["OPENAI_API_VERSION"] = "xxx"

ibl = iblm.IBLModel(api_type="azure", model_name="gpt-4-0125-preview", objective="binary")
```
Google API
```python
os.environ["GOOGLE_API_KEY"] = "YOUR_API_KEY"

ibl = iblm.IBLModel(api_type="gemini", model_name="gemini-pro", objective="binary")
```
Anthropic API
```python
os.environ["ANTHROPIC_API_KEY"] = "YOUR_API_KEY"

ibl = iblm.IBLModel(api_type="", model_name="", objective="binary")
```
Model Learning
Currently, models can only be fit on small datasets.
```python
code_model = ibl.fit(x_train, y_train)

print(code_model)
```
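The returned `code_model` is Python source text. As a purely illustrative sketch (not actual IBL output), a generated binary classifier might look something like the following, where the feature names and weights are hypothetical:

```python
import numpy as np
import pandas as pd

def predict(x: pd.DataFrame) -> np.ndarray:
    """Hypothetical generated code model: maps each row to a probability in [0, 1]."""
    # A generated model typically encodes feature weights and thresholds the
    # LLM inferred from the training data; a simple logistic rule stands in here.
    score = 0.8 * x["feature_1"] - 0.5 * x["feature_2"]
    return 1 / (1 + np.exp(-score))

x_sample = pd.DataFrame({"feature_1": [0.2, 1.5], "feature_2": [1.0, 0.3]})
proba = predict(x_sample)
print(proba)  # one probability per row
```

Because the model is plain code, it can be inspected, edited, and rerun without further LLM calls.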
Model Predictions
```python
y_proba = ibl.predict(x_test)
```
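`predict` returns probabilities for the positive class. Assuming the standard binary-classification convention, hard labels can be recovered by thresholding at 0.5 (the `y_proba` values below are illustrative, not real model output):

```python
import numpy as np

# Stand-in for the array returned by ibl.predict(x_test)
y_proba = np.array([0.12, 0.87, 0.55, 0.40])

# Threshold probabilities at 0.5 to obtain class labels
y_pred = (y_proba >= 0.5).astype(int)
print(y_pred)  # [0 1 1 0]
```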
Inductive-bias Learning
Normal Inductive-bias Learning
```python
from iblm import IBLModel

ibl = IBLModel(
    api_type="openai",
    model_name="gpt-4-0125-preview",
    objective="binary",
)
```
Inductive-bias Learning bagging
Bagging draws samples from the given dataset, creates multiple code models, and uses the average of their outputs as the predicted value.
```python
from iblm import IBLBaggingModel

iblbagging = IBLBaggingModel(
    api_type="openai",
    model_name="gpt-4-0125-preview",
    objective="binary",
    num_model=20,     # Number of models to create
    max_sample=2000,  # Maximum number of samples drawn from the dataset
    min_sample=300,   # Minimum number of samples drawn from the dataset
)
```
If you find this repo helpful, please cite the following paper:
```bibtex
@article{tanaka2023inductive,
  title={Inductive-bias Learning: Generating Code Models with Large Language Model},
  author={Tanaka, Toma and Emoto, Naofumi and Yumibayashi, Tsukasa},
  journal={arXiv preprint arXiv:2308.09890},
  year={2023}
}
```