@iconify/icons-raphael
Readme
This package includes individual files for each icon, ready to be imported into a React project with Iconify for React.
Each icon is in its own file, so you can bundle several icons from different icon sets without bundling entire icon sets.
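The reason per-file imports allow bundlers to tree-shake unused icons is that each icon module exports plain icon data rather than a component. The sketch below illustrates this idea; the `homeIcon` object shape and the `iconToSvg` helper are illustrative assumptions, not the package's actual exports (real rendering is handled by the `Icon` component from @iconify/react).

```javascript
// Illustrative sketch: an icon module conceptually exports a plain data
// object describing the SVG body and its view box. The path data below
// is made up for demonstration purposes.
const homeIcon = {
  body: '<path d="M16 2 2 16h4v14h20V16h4z" fill="currentColor"/>',
  width: 32,
  height: 32,
};

// Hypothetical helper showing how such data maps to SVG markup.
// In practice, the Icon component performs this rendering for you.
function iconToSvg(icon) {
  return `<svg viewBox="0 0 ${icon.width} ${icon.height}">${icon.body}</svg>`;
}

console.log(iconToSvg(homeIcon));
```

Because each icon is independent data like this, importing `home` and `cloud` pulls in only those two objects, not the whole Raphael set.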
If you are using NPM:
npm install @iconify/react @iconify/icons-raphael --save
If you are using Yarn:
yarn add @iconify/react @iconify/icons-raphael
import { Icon, InlineIcon } from "@iconify/react";
import homeIcon from "@iconify/icons-raphael/home";
import cloudIcon from "@iconify/icons-raphael/cloud";

// Inside a component's JSX:
<Icon icon={homeIcon} />
<p>This is some text with an icon adjusted for baseline: <InlineIcon icon={cloudIcon} /></p>
See https://github.com/iconify/iconify-react for details.
Icons author: Dmitry Baranovskiy. License: MIT.
FAQs
Iconify icon components for Raphael
We found that @iconify/icons-raphael demonstrates an unhealthy version release cadence and low project activity: the last version was released a year ago. It has 1 open source maintainer collaborating on the project.