A Python interface for training Reinforcement Learning agents to play the Chef's Hat Card Game.
This repository holds the ChefsHatGym2 environment, which contains all the necessary tools to run, train, and evaluate your agents while they play the Chef's Hat game.
With this library, you will be able to:
Full documentation can be found here: Documentation.
We also provide a list of existing plugins and extensions for this library:
The Chef's Hat Run is a web interface that allows the setup, follow-up, and management of server-based rooms of Chef's Hat. It is ideal for running local experiments with artificial agents without the need to configure or code anything, for running server rooms that allow remote players to play, and for exploring finished games using the interactive plotting tools to visualize and extract important game statistics.
The Chef’s Hat Player’s Club is a collection of ready-to-use artificial agents. These agents were implemented, evaluated, and discussed in specific peer-reviewed publications and can be used anytime. If you want your agent to be included in the Player’s Club, message us.
Chef's Hat Play is a Unity interface that allows humans to play the game against other humans or artificial agents.
The Metrics Chef's Hat package includes the tools for creating different game behavior metrics that help to better understand and describe the agents. Developed and maintained by Laura Triglia.
Nova is a dynamic game narrator, used to describe and comment on a Chef's Hat game. Developed and maintained by Nathalia Cauas.
We also provide a series of simulated games inside the Simulated Games folder. Each of these games runs for 1,000 matches, played by different combinations of agents. They are provided as a ready-to-use resource for agent analysis, tool development, or a better understanding of the Chef's Hat simulator as a whole.
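As a rough illustration of how one of these simulated games could be inspected, the snippet below assumes the matches are stored as tabular log files; the file name, path, and column layout are placeholders, not the simulator's confirmed output format.

import pandas as pd

# Hypothetical example: load one simulated game log for analysis.
# The path and the fields printed below are placeholders; the actual file
# layout is defined by the Chef's Hat simulator output in the Simulated Games folder.
log = pd.read_csv("Simulated Games/example_game/game_log.csv")

print(log.shape)    # number of logged events and recorded fields
print(log.columns)  # which fields the simulator records for each event
print(log.head())   # the first few actions of the first match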
The Chef's Hat Environment provides a simple and easy-to-use API, based on the OpenAI Gym interface, for implementing, embedding, deploying, and evaluating reinforcement learning agents.
For a complete overview of the development of the game, refer to:
If you want to have access to the game materials (cards and playing field), please contact us using the contact information at the end of the page.
You can use our pip installation:
pip install chefshatgym
Refer to our full documentation for a complete usage and development guide.
The basic structure of the simulator is a room, which hosts four players and initializes the game. ChefsHatGym2 encapsulates the entire room structure, so it is easy to create a game using just a few lines of code:
# Imports from the chefshatgym package (module paths may vary slightly between versions; see the documentation)
from ChefsHatGym.gameRooms.chefs_hat_room_local import ChefsHatRoomLocal
from ChefsHatGym.agents.agent_random import AgentRandonLocal

# Start the room
room = ChefsHatRoomLocal(
    room_name="local_room",
    verbose=False,
)

# Create the players
p1 = AgentRandonLocal(name="01")
p2 = AgentRandonLocal(name="02")
p3 = AgentRandonLocal(name="03")
p4 = AgentRandonLocal(name="04")

# Add the players to the room
for p in [p1, p2, p3, p4]:
    room.add_player(p)

# Start the game
info = room.start_new_game(game_verbose=True)
For a more detailed example, check the examples folder.
ChefsHatGym2 allows the creation of a game room server. Agents running on different machines can connect to the room server via a simple TCP connection. Server rooms and remote agents are supported by the library, as shown in our examples folder.
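As a rough, hypothetical sketch of how a server-based room might be wired together: the class name ChefsHatRoomServer, its constructor parameters, and the start/join calls below are assumptions modeled on the local-room API above, not the library's confirmed interface, so the examples folder remains the authoritative reference.

# Hypothetical sketch of a server-based room. Every name marked "assumed"
# is illustrative and may differ from the actual ChefsHatGym2 API.
from ChefsHatGym.gameRooms.chefs_hat_room_server import ChefsHatRoomServer  # assumed path

# On the server machine: open a room that listens for TCP connections.
room = ChefsHatRoomServer(
    room_name="remote_room",  # identifier remote agents use to join
    room_port=10000,          # assumed parameter: TCP port to listen on
    verbose=False,
)
room.start_room()  # assumed call: wait until four agents have joined, then run the game

# On each client machine: create an agent and point it at the server.
# agent = AgentRandonLocal(name="remote_01")
# agent.join_room(ip="192.168.0.10", port=10000, room_name="remote_room")  # assumed call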
ChefsHatGym2 provides an interface to encapsulate agents. It allows the extension of existing agents as well as the creation of new ones. By implementing this interface, your agents can be inserted into any Chef's Hat game run by the simulator.
Running an agent from another machine is supported directly by the Chef's Hat agent interface. By implementing this interface, your agent gets all of the local and remote agent functionality and can be used by the Chef's Hat simulator.
As an example, consider an agent that only selects random actions.
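The snippet below is a minimal sketch of such an agent, under explicit assumptions: the base-class import path, the get_action and get_reward signatures, and the convention that the last 200 entries of the observation encode the currently allowed actions may all differ in ChefsHatGym2, so the bundled AgentRandonLocal is the authoritative reference.

import numpy as np

from ChefsHatGym.agents.chefs_hat_agent import ChefsHatAgent  # assumed module path


class MyRandomAgent(ChefsHatAgent):
    """Minimal random-agent sketch. Method names mirror the bundled random
    agent but are assumptions, not the confirmed ChefsHatGym2 interface."""

    def get_action(self, observation):
        # Assumption: the last 200 entries of the observation are a mask of
        # the currently allowed actions (1 = allowed, 0 = forbidden).
        possible_actions = np.asarray(observation)[-200:]
        allowed = np.flatnonzero(possible_actions == 1)

        # Choose one allowed action uniformly at random, returned one-hot encoded.
        action = np.zeros(len(possible_actions))
        action[np.random.choice(allowed)] = 1
        return action

    def get_reward(self, info):
        # A random agent does not learn, so the reward can be ignored.
        return 0.0

    # The remaining interface callbacks (match start/end updates, card
    # exchange, observing the other players) would be stubbed out here.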
Chef's Hat Online encapsulates the Chef's Hat Environment and allows a human to play against three agents. The system is built as a web platform, which allows you to deploy it on a web server and run it from any device. The data collected by Chef's Hat Online is presented in the same format as the Chef's Hat Gym and can be used to train or update agents, as well as to leverage human performance.
Moody Framework is a plugin that endows each agent with an intrinsic state that is affected by the agent's own actions.
All the examples in this repository are distributed under a Non-Commercial license. If you use this environment, you must agree to the following items:
Barros, P., Yalçın, Ö. N., Tanevska, A., & Sciutti, A. (2023). Incorporating rivalry in reinforcement learning for a competitive game. Neural Computing and Applications, 35(23), 16739-16752.
Barros, P., & Sciutti, A. (2022). All by Myself: Learning individualized competitive behavior with a contrastive reinforcement learning optimization. Neural Networks, 150, 364-376.
Barros, P., Tanevska, A., & Sciutti, A. (2021, January). Learning from learners: Adapting reinforcement learning agents to be competitive in a card game. In 2020 25th International Conference on Pattern Recognition (ICPR) (pp. 2716-2723). IEEE.
Barros, P., Sciutti, A., Bloem, A. C., Hootsmans, I. M., Opheij, L. M., Toebosch, R. H., & Barakova, E. (2021, March). It's Food Fight! Designing the Chef's Hat Card Game for Affective-Aware HRI. In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (pp. 524-528).
Barros, P., Tanevska, A., Cruz, F., & Sciutti, A. (2020, October). Moody Learners: Explaining Competitive Behaviour of Reinforcement Learning Agents. In 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) (pp. 1-8). IEEE.
Get more information here: https://www.chefshatcup.poli.br/home
Get more information here: https://www.whisperproject.eu/chefshat#competition
Pablo Barros - pablovin@gmail.com