
@the-grid/data-helper
Data helpers for The Grid API. The API tries to deliver every measurement in a consistent way. This consistency is complemented by data-helper, which transforms the original data into something useful for consumers like Taylor.
Given some data block fetched from The Grid API ...
```coffeescript
DataHelper = require 'data-helper'

block =
  id: 'foo'
  cover:
    src: 'cover.jpg'
    width: 100
    height: 100
    scene:
      bbox:
        x: 10
        y: 45
        width: 100
        height: 40
```
... we want to transform this data, enhancing it with higher-level information (e.g. column layout data, gradient-map control points, contrast levels, etc.):
```coffeescript
helper = new DataHelper()
{transformed} = helper.transform block
```
It will give us an enhanced copy of the original block:
```coffeescript
transformed =
  id: 'foo'
  cover:
    src: 'cover.jpg'
    width: 100
    height: 100
    scene:
      bbox:
        x: 10
        y: 45
        width: 100
        height: 40
    lines:
      direction: 'horizontal'
      stripes: [
        type: 'space'
        bbox:
          x: 0
          y: 0
          width: 100
          height: 45
      ,
        type: 'scene'
        bbox:
          x: 10
          y: 45
          width: 100
          height: 40
      ]
```
This way, consumers should prefer to use transformed rather than the original block.
We can pass some properties to DataHelper, overriding the defaults:

```coffeescript
helper = new DataHelper
  # If scene area is below 50% of image area, we add 15 pixels around the scene
  minScenePadding: 0.5
  scenePadding: 15
  # If face area is below 33% of image area, we increase the face area 2x
  minFacePadding: 0.33
  facePadding: 2.0
  # If scene area is below 30% of image area, we use the whole image as the scene
  minScene: 0.3
  # If face confidence is below minFaceConfidence, we discard the face
  minFaceConfidence: 0.4
  # Stripes smaller than 50% of the scene are filtered
  minSpace: 0.5
```
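As a quick illustration of how caller options could combine with the defaults above, here is a plain JavaScript sketch. The option names come from this README; `DEFAULTS` and `mergeOptions` are hypothetical helpers, not part of data-helper's actual API:

```javascript
// Hypothetical sketch: defaults taken from the README's descriptions,
// merged so that caller-supplied options win over defaults.
const DEFAULTS = {
  minScenePadding: 0.5,   // scene below 50% of image area => pad the scene
  scenePadding: 15,       // pixels added around a small scene
  minFacePadding: 0.33,   // face below 33% of image area => grow the face
  facePadding: 2.0,       // growth factor for small faces
  minScene: 0.33,         // scene below 33% of image => use whole image
  minFaceConfidence: 0.4, // faces below this confidence are dropped
  minSpace: 0.5           // space stripes below 50% of scene are filtered
};

function mergeOptions(overrides = {}) {
  // Later spreads win, so overrides replace matching default keys.
  return { ...DEFAULTS, ...overrides };
}

const opts = mergeOptions({ minScene: 0.3 });
```

With this shape, passing only the options you care about leaves every other knob at its documented default.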
Those are the transformations we have now. If a block has a cover, it is transformed by data-helper.

Scene

A scene (block.cover.scene) defines the most important region of an image. It is calculated from other measurements, like the salient region, faces, and image dimensions. That's the green bounding box in the following image:

We also filter some measurements:

- If scene area is below minScene% of image area, we define the scene as the image itself. The default minScene is 33% of image area.

And do some processing:

- If scene area is below minScenePadding% of image area, we add scenePadding pixels around the scene.
- If face confidence is below minFaceConfidence, we remove the face.
- If face area is below minFacePadding% of image area, we increase the face area to facePadding times the face area.
- Face bounding boxes are combined in all_faces.bbox.

Lines (or negative space)
This heuristic takes inspiration from the Rule of Thirds, a well-known "rule of thumb" in visual composition.
Lines (block.cover.lines) are bounding boxes that represent space columns or rows around the scene. We can overlay content (text) in those space columns or rows. We also associate a direction with each line, defining the direction in which we should place text ('horizontal' or 'vertical').

We calculate lines for every image that has a scene in data-helper. A 3-stripes lines object has the following format:
```coffeescript
block.cover.lines =
  direction: 'vertical'
  stripes: [
    type: 'space'
    bbox:
      x: 0
      y: 0
      width: 200
      height: 500
  ,
    type: 'scene'
    bbox:
      x: 200
      y: 150
      width: 210
      height: 230
  ,
    type: 'space'
    bbox:
      x: 410
      y: 0
      width: 240
      height: 500
  ]
```
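To make the stripe geometry concrete, here is a JavaScript sketch of how vertical stripes could be derived from an image's dimensions and its scene bounding box. `stripesFor` is a hypothetical helper for illustration, not the real data-helper implementation:

```javascript
// Assumed construction: space stripes span the full image height on
// either side of the scene's horizontal extent.
function stripesFor(image, scene) {
  const stripes = [];
  // Space to the left of the scene.
  if (scene.x > 0) {
    stripes.push({ type: 'space',
      bbox: { x: 0, y: 0, width: scene.x, height: image.height } });
  }
  stripes.push({ type: 'scene', bbox: { ...scene } });
  // Space to the right of the scene.
  const right = scene.x + scene.width;
  if (right < image.width) {
    stripes.push({ type: 'space',
      bbox: { x: right, y: 0, width: image.width - right, height: image.height } });
  }
  return { direction: 'vertical', stripes };
}

// Reproduces the 3-stripes example above for a 650x500 image.
const lines = stripesFor({ width: 650, height: 500 },
                         { x: 200, y: 150, width: 210, height: 230 });
```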
Depending on the dimensions of the space stripes (stripes smaller than minSpace% of the scene are filtered), we can also have 2-stripes:
```coffeescript
block.cover.lines =
  direction: 'vertical'
  stripes: [
    type: 'scene'
    bbox:
      x: 0
      y: 150
      width: 320
      height: 230
  ,
    type: 'space'
    bbox:
      x: 320
      y: 0
      width: 180
      height: 500
  ]
```
And if both space stripes are small, we have 1-stripe (which has the same dimensions as the image):
```coffeescript
block.cover.lines =
  direction: 'vertical'
  stripes: [
    type: 'scene'
    bbox:
      x: 0
      y: 0
      width: 600
      height: 500
  ]
```
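The minSpace rule that collapses 3-stripes into 2-stripes or 1-stripe can be sketched as a simple filter. The comparison against stripe width is an assumption (the README only says stripes "smaller than minSpace% of scene" are filtered):

```javascript
// Hypothetical filter: drop space stripes narrower than
// minSpace * (scene stripe width); the scene stripe is always kept.
function filterStripes(lines, minSpace = 0.5) {
  const scene = lines.stripes.find(s => s.type === 'scene');
  const stripes = lines.stripes.filter(s =>
    s.type === 'scene' || s.bbox.width >= minSpace * scene.bbox.width);
  return { direction: lines.direction, stripes };
}

const threeStripes = {
  direction: 'vertical',
  stripes: [
    { type: 'space', bbox: { x: 0, y: 0, width: 80, height: 500 } },
    { type: 'scene', bbox: { x: 80, y: 150, width: 320, height: 230 } },
    { type: 'space', bbox: { x: 400, y: 0, width: 180, height: 500 } }
  ]
};

// Scene is 320 wide, so the threshold is 160: the 80px left stripe is
// dropped and the 180px right stripe survives, yielding 2-stripes.
const filtered = filterStripes(threeStripes);
```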
Here are some examples. In the following case, the right space stripe was filtered because it is less than 50% of the scene's dimension, so we get 2 columns as a result:
For the next image we have 3 columns, because both space stripes are greater than 50% of the scene's dimension:
If there are no space stripes at all, we have only 1 column, the scene itself:
Other examples for the sake of clarity:
For blocks that have cover.histogram.l or cover.histogram.s we provide a lightness and a saturation level as a [0-1] float number in cover.lightness and cover.saturation. The greater the value, the lighter or more saturated the whole image is.
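One plausible way such a level could be computed is a count-weighted mean over the histogram, normalized to [0-1]. The histogram shape used here ([value, count] pairs with channel values in 0-255) is an assumption for illustration, not the documented format:

```javascript
// Assumed sketch: average the histogram's channel values, weighted by
// pixel counts, then normalize 0-255 down to the [0, 1] range.
function meanLevel(histogram) {
  let total = 0;
  let weighted = 0;
  for (const [value, count] of histogram) {
    total += count;
    weighted += value * count;
  }
  return total === 0 ? 0 : (weighted / total) / 255;
}

// 10 black pixels and 30 fully bright ones: mostly light overall.
const lightness = meanLevel([[0, 10], [255, 30]]);
```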
For blocks that have a cover and dimensions, we try to find the closest aspect ratio, considering well-known "good" aspects:
```coffeescript
goodAspects = [
  '2:1'
  '1:2'
  '2:3'
  '3:2'
  '4:3'
  '3:4'
  '4:5'
  '5:4'
  '9:16'
  '16:9'
  '1:1'
  '1.5:1'
  '1:1.5'
]
```
It's available as a float number in block.cover.closest_aspect.
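A closest-aspect search over that list can be sketched in a few lines of JavaScript. The matching criterion (smallest absolute difference between width/height and each candidate ratio) is an assumption about how data-helper picks the winner:

```javascript
// The "good" aspect list from the README, as ratio strings.
const goodAspects = ['2:1', '1:2', '2:3', '3:2', '4:3', '3:4', '4:5',
                     '5:4', '9:16', '16:9', '1:1', '1.5:1', '1:1.5'];

// Hypothetical helper: parse each "w:h" string and return the candidate
// ratio (as a float) nearest to the image's own width/height ratio.
function closestAspect(width, height) {
  const target = width / height;
  let best = null;
  let bestDiff = Infinity;
  for (const aspect of goodAspects) {
    const [w, h] = aspect.split(':').map(Number);
    const diff = Math.abs(w / h - target);
    if (diff < bestDiff) {
      bestDiff = diff;
      best = w / h;
    }
  }
  return best; // a float, like block.cover.closest_aspect
}

// A 1920x1080 image is exactly 16:9.
const aspect = closestAspect(1920, 1080);
```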
For blocks that have cover.histogram.a we provide a transparency level as a [0-1] float number in the cover.transparent key. The greater the value, the more transparent the whole image is. Another way to put it: if cover.transparent is greater than zero, the image has at least one transparent pixel.
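That "at least one transparent pixel" property holds if the level is, for instance, the fraction of pixels that are not fully opaque. The input format below (a flat array of per-pixel alpha values in 0-255) is assumed for illustration only:

```javascript
// Assumed sketch: count pixels whose alpha is below fully opaque (255)
// and report them as a fraction of all pixels, giving a [0, 1] level.
function transparentLevel(alphaValues) {
  if (alphaValues.length === 0) return 0;
  const notOpaque = alphaValues.filter(a => a < 255).length;
  return notOpaque / alphaValues.length;
}

// Two of the four pixels are not fully opaque.
const transparent = transparentLevel([255, 255, 128, 0]);
```

Under this definition, any value above zero guarantees at least one non-opaque pixel, matching the README's description.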