A Python interface for training reinforcement learning agents to play the Chef's Hat card game.
This repository holds the ChefsHatGym environment, which contains all the necessary tools to run, train, and evaluate your agents while they play the Chef's Hat game.
With this library, you will be able to run local and remote games, implement your own agents, and analyze finished games.
Full documentation can be found here: Documentation.
We also provide a list of existing plugins and extensions for this library:
The Chef's Hat Run is a web interface for setting up, following, and managing server-based Chef's Hat rooms. It is ideal for running local experiments with artificial agents without having to configure or code anything; for running server rooms that remote players can join; and for exploring finished games using the interactive plotting tools to visualize and extract important game statistics.
The Chef’s Hat Player’s Club is a collection of ready-to-use artificial agents. These agents were implemented, evaluated, and discussed in specific peer-reviewed publications and can be used anytime. If you want your agent to be included in the Player’s Club, message us.
Chef's Hat Play is a Unity interface that allows humans to play the game against other humans or artificial agents.
The Metrics Chef's Hat package includes the tools for creating different game-behavior metrics that help to better understand and describe the agents. Developed and maintained by Laura Triglia.
Nova is a dynamic game narrator, used to describe and comment on a Chef's Hat game. Developed and maintained by Nathalia Cauas.
We also provide a series of simulated games, inside the Simulated Games folder. Each of these games runs for 1,000 matches, played by different combinations of agents. They are provided as a ready-to-use resource for agent analysis, tool development, or a better understanding of the Chef's Hat simulator as a whole.
The Chef's Hat environment provides a simple, easy-to-use API, based on the OpenAI Gym interface, for implementing, embedding, deploying, and evaluating reinforcement learning agents.
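Because the API follows the OpenAI Gym convention, interacting with the environment follows the familiar reset/step loop. The sketch below is for orientation only: the env handle and its legal_actions helper are illustrative assumptions rather than this library's confirmed API, and the concrete entry point here is the Room API shown in the examples further down.

import random

def play_one_game(env):
    # Generic Gym-style interaction loop (illustrative; method names are
    # assumptions, not this library's confirmed API).
    observation = env.reset()  # start a new game
    done = False
    total_reward = 0.0
    while not done:
        # Choose among the currently legal moves (assumed helper).
        action = random.choice(env.legal_actions(observation))
        observation, reward, done, info = env.step(action)
        total_reward += reward
    return total_reward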
For a complete overview of the development of the game, refer to:
If you want to have access to the game materials (cards and playing field), please contact us using the contact information at the end of the page.
You can use our pip installation:
pip install chefshatgym
Refer to our full documentation for a complete usage and development guide.
The basic structure of the simulator is a room, which hosts four players and initializes the game. ChefsHatGym encapsulates the entire room structure. A local game can be started with a few lines of code:
import asyncio
from rooms.room import Room
from agents.random_agent import RandomAgent

async def main():
    # Create a local (non-remote) room that plays a single match.
    room = Room(run_remote_room=False, room_name="local_room", max_matches=1)

    # Chef's Hat is a four-player game: connect four random agents.
    players = [RandomAgent(name=f"P{i}", log_directory=room.room_dir) for i in range(4)]
    for p in players:
        room.connect_player(p)

    # Run the game and print the final scores.
    await room.run()
    print(room.final_scores)

asyncio.run(main())
For a more detailed example, check the examples folder.
ChefsHatGym can also host a room as a WebSocket server. Agents running on different machines can join the server and play together.
# Server
import asyncio
from rooms.room import Room

async def main():
    # Host a room as a WebSocket server that remote agents can join.
    room = Room(
        run_remote_room=True,
        room_name="server_room",
        room_password="secret",
        room_port=8765,
    )
    await room.run()

asyncio.run(main())
Remote agents connect using the remote_loop method:
import asyncio
from agents.random_agent import RandomAgent

async def main():
    # Join the room hosted by the server above.
    agent = RandomAgent(
        "P1",
        run_remote=True,
        host="localhost",
        port=8765,
        room_name="server_room",
        room_password="secret",
    )
    await agent.remote_loop()

asyncio.run(main())
For complete examples, check the examples folder.
ChefsHatGym provides an interface to encapsulate agents. It allows you to extend existing agents and to create new ones. Implementing this interface allows your agents to take part in any Chef's Hat game run by the simulator.
Running an agent from another machine is supported directly by the agent interface. By enabling run_remote=True and calling remote_loop, your agent gets all the local and remote functionality and can be used by the Chef's Hat simulator.
Here is an example of an agent that only selects random actions:
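The snippet below is a minimal sketch of such an agent, not the library's confirmed interface: the base-class name, import path, and method name are assumptions for illustration, so consult the agent interface in the repository for the exact signatures.

import random
from agents.base_agent import BaseAgent  # hypothetical import path

class MyRandomAgent(BaseAgent):
    # An agent that always picks a random legal action.
    # Base class, method name, and observation layout are assumptions.

    def get_action(self, observation):
        # The observation is assumed to expose the currently legal actions.
        possible_actions = observation["possible_actions"]
        return random.choice(possible_actions)

Once implemented, such an agent connects to a room exactly like the RandomAgent above: locally via connect_player, or remotely by enabling run_remote=True and calling remote_loop.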
The Chef's Hat Online encapsulates the Chef's Hat environment and allows a human to play against three agents. The system is built as a web platform, which allows you to deploy it on a web server and run it from any device. The data collected by Chef's Hat Online is presented in the same format as the Chef's Hat Gym data, so it can be used to train or update agents, and also to analyze human performance.
Moody Framework is a plugin that endows each agent with an intrinsic state that is affected by the agent's own actions.
All the examples in this repository are distributed under a non-commercial license. If you use this environment, you must agree to the following items:
Barros, P., Yalçın, Ö. N., Tanevska, A., & Sciutti, A. (2023). Incorporating rivalry in reinforcement learning for a competitive game. Neural Computing and Applications, 35(23), 16739-16752.
Barros, P., & Sciutti, A. (2022). All by Myself: Learning individualized competitive behavior with a contrastive reinforcement learning optimization. Neural Networks, 150, 364-376.
Barros, P., Tanevska, A., & Sciutti, A. (2021, January). Learning from learners: Adapting reinforcement learning agents to be competitive in a card game. In 2020 25th International Conference on Pattern Recognition (ICPR) (pp. 2716-2723). IEEE.
Barros, P., Sciutti, A., Bloem, A. C., Hootsmans, I. M., Opheij, L. M., Toebosch, R. H., & Barakova, E. (2021, March). It's Food Fight! Designing the Chef's Hat Card Game for Affective-Aware HRI. In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (pp. 524-528).
Barros, P., Tanevska, A., Cruz, F., & Sciutti, A. (2020, October). Moody Learners-Explaining Competitive Behaviour of Reinforcement Learning Agents. In 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) (pp. 1-8). IEEE.
Get more information here: https://www.chefshatcup.poli.br/home
Get more information here: https://www.whisperproject.eu/chefshat#competition
Pablo Barros - pablovin@gmail.com