partial.lenses
[ Tutorial | Reference | Background ]
This library provides a collection of Ramda compatible partial lenses. While an ordinary lens can be used to view and update an existing part of a data structure, a partial lens can view optional data, insert new data, update existing data, and delete existing data, and it can also provide default values and maintain required parts of a data structure.
In JavaScript, missing data can be mapped to undefined, which is what partial lenses also do. When a part of a data structure is missing, an attempt to view it returns undefined. When a part is missing, setting it to a defined value inserts the new part. Setting an existing part to undefined deletes it.
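As a minimal sketch of these three behaviours using a single L.prop lens (L.view, L.set, and L.prop are covered in the reference below; the outputs, including key order, reflect a reading of the rules above rather than verified library output):
> L.view(L.prop("x"), {y: 1})                   // viewing a missing part
undefined
> L.set(L.prop("x"), 2, {y: 1})                 // setting a missing part inserts it
{ y: 1, x: 2 }
> L.set(L.prop("x"), undefined, {x: 2, y: 1})   // setting an existing part to undefined deletes it
{ y: 1 }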
Partial lenses are defined in such a way that operations compose and one can
conveniently and robustly operate on deeply nested data structures.
Let's work with the following sample JSON object:
const data = { contents: [ { language: "en", text: "Title" },
{ language: "sv", text: "Rubrik" } ] }
First we import libraries
import L from "partial.lenses"
import R from "ramda"
and compose a parameterized lens for accessing texts:
const textIn = language =>
L.compose(L.prop("contents"),
L.required([]),
L.normalize(R.sortBy(R.prop("language"))),
L.find(R.whereEq({language})),
L.default({language}),
L.prop("text"),
L.default(""))
Take a moment to read through the above definition line by line. Each line has
a specific purpose. The purpose of the L.prop(...)
lines is probably obvious.
The other lines we will mention below.
Thanks to the parameterized search part, L.find(R.whereEq({language})), of the lens composition, we can use it to query texts:
> L.view(textIn("sv"), data)
"Rubrik"
> L.view(textIn("en"), data)
"Title"
Partial lenses can deal with missing data. If we use the partial lens to query a text that does not exist, we get the default:
> L.view(textIn("fi"), data)
""
We get this default, rather than undefined, thanks to the last part, L.default(""), of our lens composition. We get the default even if we query from undefined:
> L.view(textIn("fi"), undefined)
""
With partial lenses, undefined
is the equivalent of empty or non-existent.
As with ordinary lenses, we can use the same lens to update texts:
> L.set(textIn("en"), "The title", data)
{ contents: [ { language: "en", text: "The title" },
{ language: "sv", text: "Rubrik" } ] }
The same partial lens also allows us to insert new texts:
> L.set(textIn("fi"), "Otsikko", data)
{ contents: [ { language: "en", text: "Title" },
{ language: "fi", text: "Otsikko" },
{ language: "sv", text: "Rubrik" } ] }
Note the position into which the new text was inserted. The array of texts is
kept sorted thanks to the L.normalize(R.sortBy(R.prop("language")))
part of
our lens.
Finally, we can use the same partial lens to delete texts:
> L.set(textIn("sv"), undefined, data)
{ contents: [ { language: "en", text: "Title" } ] }
Note that a single text is actually a part of an object. The key to having the
whole object vanish, rather than just the text
property, is the
L.default({language})
part of our lens composition. A L.default(value)
lens
works symmetrically. When set with value, the result is undefined, which means that the focus of the lens is to be deleted.
If we delete all of the texts, we get the required value:
> R.pipe(L.set(textIn("sv"), undefined),
L.set(textIn("en"), undefined))(data)
{ contents: [] }
The contents
property is not removed thanks to the L.required([])
part of
our lens composition. L.required
is the dual of L.default. L.default replaces undefined values when viewed and L.required replaces undefined values when set.
Note that unless required and default values are explicitly specified as part of the lens, they will both be undefined.
Take out one (or more) L.required(...), L.normalize(...), or L.default(...)
part(s) from the lens composition and try to predict what happens when you rerun
the examples with the modified lens composition. Verify your reasoning by
actually rerunning the examples.
For clarity, the previous code snippets avoided some of the shorthands that this library supports. In particular, L.compose(...) can be abbreviated as L(...), L.prop(string) can be abbreviated as string, and L.set(l, undefined, s) can be abbreviated as L.delete(l, s).
It is also typical to compose lenses out of short paths following the schema of the JSON data being manipulated. Reconsider the lens from the start of the example:
const textIn = language =>
L.compose(L.prop("contents"),
L.required([]),
L.normalize(R.sortBy(R.prop("language"))),
L.find(R.whereEq({language})),
L.default({language}),
L.prop("text"),
L.default(""))
Following the structure or schema of the JSON, we could break this into three separate lenses:
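A sketch of such a decomposition could look as follows (the names contentsOf, contentIn, and textOf are illustrative rather than fixed by the library; the lens bodies mirror the object shown next):
// One lens per level of the JSON: the contents array, a content object
// selected by language, and the text property of a content object.
const contentsOf = L("contents",
                     L.required([]),
                     L.normalize(R.sortBy(R.prop("language"))))
const contentIn = language => L(L.find(R.whereEq({language})),
                                L.default({language}))
const textOf = L("text", L.default(""))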
Furthermore, we could organize the lenses into an object following the structure of the JSON:
const M = {
data: {
contents: L("contents",
L.required([]),
L.normalize(R.sortBy(R.prop("language"))))
},
contents: {
contentIn: language => L(L.find(R.whereEq({language})),
L.default({language}))
},
content: {
text: L("text", L.default(""))
}
}
Using the above object, we could rewrite the parameterized textIn
lens as:
const textIn = language => L(M.data.contents,
M.contents.contentIn(language),
M.content.text)
This style of organizing lenses is overkill for our toy example. In a more
realistic case the data
object would contain many more properties. Also,
rather than composing a lens, like textIn
above, to access a leaf property
from the root of our object, we might actually compose lenses incrementally as
we inspect the model structure.
The previous example is based on an actual use case. In this section we look at a more involved example: a binary search tree (BST) as a lens.
Binary search may initially seem to be outside the scope of definable lenses.
However, the L.choose
lens allows for dynamic construction of lenses based on
examining the data structure being manipulated. Inside L.choose
we can write
the ordinary BST logic to pick the correct branch based on the key in the
currently examined node and the key that we are looking for. So, here is our
first attempt at a BST lens:
const binarySearch = key =>
L(L.default({key}),
L.choose(node =>
key < node.key ? L("smaller", binarySearch(key)) :
node.key < key ? L("greater", binarySearch(key)) :
L.identity))
const valueOf = key => L(binarySearch(key), "value")
This actually works to a degree. We can use the valueOf
lens constructor to
build a binary tree:
> const t = R.reduce((tree, item) => L.set(valueOf(item.key), item.value, tree),
undefined,
[{key: "c", value: 1},
{key: "a", value: 2},
{key: "b", value: 3}])
> t
{ smaller: { greater: { value: 3, key: 'b' }, value: 2, key: 'a' },
value: 1,
key: 'c' }
However, the above binarySearch
lens constructor does not maintain the BST
structure when values are being deleted:
> L.delete(valueOf('c'), t)
{ smaller: { greater: { value: 3, key: 'b' },
value: 2,
key: 'a' },
key: 'c' }
How do we fix this? What we need is to normalize the data structure after
changes. The L.normalize
lens can be used for that purpose. Here is the
updated binarySearch
definition:
const binarySearch = key =>
L(L.default({key}),
L.normalize(node => {
if ("value" in node)
return node
if (!("greater" in node) && "smaller" in node)
return node.smaller
if (!("smaller" in node) && "greater" in node)
return node.greater
return node
}),
L.choose(node =>
key < node.key ? L("smaller", binarySearch(key)) :
node.key < key ? L("greater", binarySearch(key)) :
L.identity))
Now we can also delete values from a binary tree:
> L.delete(valueOf('c'), t)
{ greater: { value: 3, key: 'b' }, value: 2, key: 'a' }
As an exercise you could improve the normalization to maintain some balance condition such as AVL.
The lenses and operations on lenses are accessed via the default import:
import L from "partial.lenses"
You can access basic operations on lenses via the default import L:
L.compose(l, ...ls)
L(l, ...ls)
and L.compose(l, ...ls)
both are the same as R.compose(lift(l), ...ls.map(lift))
(see compose) and
compose a lens from a path of lenses.
For example:
> L.view(L("a", 1), {a: ["b", "c"]})
"c"
L.lens(get, set)
L.lens(get, set)
is the same as R.lens(get, set)
(see
lens) and creates a new primitive lens.
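For example, a primitive lens focusing on the first element of a pair could be written as follows (this particular lens is illustrative and, unlike the library's own lenses, makes no attempt to handle undefined):
// get extracts the focus; set returns an updated copy of the whole.
const first = L.lens(xs => xs[0],
                     (x, xs) => [x, xs[1]])
> L.view(first, [1, 2])
1
> L.set(first, 3, [1, 2])
[3, 2]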
L.over(l, x2x, s)
L.over(l, x2x, s)
is the same as R.over(lift(l), x2x, s)
(see
over) and allows one to map over the
focused element of a data structure.
For example:
> L.over("elems", R.map(L.delete("x")), {elems: [{x: 1, y: 2}, {x: 3, y: 4}]})
{elems: [{y: 2}, {y: 4}]}
L.set(l, x, s)
L.set(l, x, s)
is the same as R.set(lift(l), x, s)
(see
set) and is also equivalent to L.over(l, () => x, s).
For example:
> L.set(L("a", 0, "x"), 11, {id: "z"})
{a: [{x: 11}], id: "z"}
L.view(l, s)
L.view(l, s)
is the same as R.view(lift(l), s)
(see
view) and returns the focused element
from a data structure.
For example:
> L.view("y", {x: 112, y: 101})
101
The idempotent lift
operation is defined as
const lift = l => {
switch (typeof l) {
case "string": return L.prop(l)
case "number": return L.index(l)
default: return l
}
}
and is available as a non-default export. All operations in this library that take lenses as arguments implicitly lift them.
L.delete(l, s)
L.delete(l, s)
is equivalent to L.set(l, undefined, s). With partial
lenses, setting to undefined typically has the effect of removing the focused
element.
For example:
> L.delete(L("a", "b"), {a: {b: 1}, x: {y: 2}})
{x: {y: 2}}
L.deleteAll(l, s)
L.deleteAll(l, s) deletes all the non-undefined items targeted by the lens l from s. This only makes sense for a lens that views undefined when it doesn't find an item to focus on.
For example:
> L.deleteAll(L.findWith("a"), [{x: 1}, {a: 2}, {a: 3, y: 4}, {z: 5}])
[{x: 1}, {y: 4}, {z: 5}]
The following lenses are listed in alphabetical order.
L.append
L.append
is a special lens that operates on arrays. The view of L.append
is
always undefined. Setting L.append
to undefined has no effect by itself.
Setting L.append
to a defined value appends the value to the end of the
focused array.
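For example (the results below follow from the description above and are not verified output):
> L.view(L.append, [1, 2, 3])
undefined
> L.set(L.append, 4, [1, 2, 3])
[1, 2, 3, 4]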
L.augment({prop: obj => val, ...props})
L.augment({prop: obj => val, ...props})
is given a template of functions to
compute new properties. When viewing or setting undefined, the result is
undefined. When viewing a defined object, the object is extended with the
computed properties. When set with a defined object, the extended properties
are removed.
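A sketch based on the description (the property name area is illustrative):
> L.view(L.augment({area: r => r.w * r.h}), {w: 2, h: 3})
{w: 2, h: 3, area: 6}
> L.set(L.augment({area: r => r.w * r.h}), {w: 4, h: 5, area: 20}, {w: 2, h: 3})
{w: 4, h: 5}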
L.choose(maybeValue => PLens)
L.choose(maybeValue => PLens)
creates a lens whose operation is determined by
the given function that maps the underlying view, which can be undefined, to a
lens. The lens returned by the given function will be lifted.
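For example, a sketch of a lens that focuses on index 0 when the data is an array and on the property value otherwise (recall that the returned string or number is lifted):
// Returns 0 (an index lens) for arrays and "value" (a prop lens) otherwise.
const headOrValue =
  L.choose(data => data instanceof Array ? 0 : "value")
> L.view(headOrValue, [3, 1, 4])
3
> L.view(headOrValue, {value: 42})
42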
L.filter(predicate)
L.filter(predicate)
operates on arrays. When viewed, only elements matching
the given predicate will be returned. When set, the resulting array will be
formed by concatenating the set array and the complement of the filtered
context. If the resulting array would be empty, the whole result will be
undefined.
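For example, following the description above (the results reflect that reading and are not verified output):
> L.view(L.filter(x => x <= 2), [3, 1, 4, 1])
[1, 1]
> L.set(L.filter(x => x <= 2), [0], [3, 1, 4, 1])
[0, 3, 4]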
Note: An alternative design for filter could implement a smarter algorithm to
combine arrays when set. For example, an algorithm based on
edit distance could be used to
maintain relative order of elements. While this would not be difficult to
implement, it doesn't seem to make sense, because in most cases use of
normalize
would be preferable.
L.find(value => boolean)
L.find(value => boolean)
operates on arrays like L.index
, but the index to
be viewed is determined by finding the first element from the input array that
matches the given predicate. When no matching element is found, the effect is the same as with L.append.
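For example (based on the description above):
> L.view(L.find(x => x > 2), [1, 2, 3, 4])
3
> L.set(L.find(x => x > 2), 10, [1, 2])
[1, 2, 10]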
L.findWith(l, ...ls)
L.findWith(l, ...ls)
is defined as
L.findWith = (l, ...ls) => {
const lls = L(l, ...ls)
return L(L.find(x => L.view(lls, x) !== undefined), lls)
}
and basically chooses an index from an array through which the given lens, L(l, ...ls), focuses on a defined item and then returns a lens that focuses on that item.
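For example (based on the definition above):
> L.view(L.findWith("a"), [{x: 1}, {a: 2}, {a: 3}])
2
> L.set(L.findWith("a"), 20, [{x: 1}, {a: 2}, {a: 3}])
[{x: 1}, {a: 20}, {a: 3}]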
L.firstOf(l, ...ls)
L.firstOf(l, ...ls) returns a partial lens that acts like the first of the given lenses, l, ...ls, whose view is not undefined on the given target. When the views of all of the given lenses are undefined, the returned lens acts like l.
Note that L.firstOf
is an associative operation, but there is no identity
element.
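For example (based on the description above; the last result assumes that acting like l means setting through the first given lens):
> L.view(L.firstOf("a", "b"), {b: 2})
2
> L.view(L.firstOf("a", "b"), {a: 1, b: 2})
1
> L.set(L.firstOf("a", "b"), 3, {})
{a: 3}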
L.identity
L.identity is equivalent to R.lens(R.identity, R.identity) and is the identity element of lenses: both L(L.identity, l) and L(l, L.identity) are equivalent to l.
L.index(integer)
L.index(integer)
or L(integer)
is similar to R.lensIndex(integer)
(see
lensIndex), but acts as a partial lens following the conventions described in the tutorial: viewing a missing element (or a missing array) yields undefined and setting an element to undefined removes it.
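For example (the results reflect those partial conventions and are not verified output):
> L.view(L.index(1), ["a", "b", "c"])
"b"
> L.view(L.index(1), undefined)
undefined
> L.set(L.index(1), undefined, ["a", "b", "c"])
["a", "c"]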
L.normalize(value => value)
L.normalize(value => value)
maps the value with the same given transform when
viewed and set and implicitly maps undefined to undefined. More specifically,
L.normalize(transform)
is equivalent to R.lens(toPartial(transform), toPartial(transform))
where
const toPartial = transform => x => undefined === x ? x : transform(x)
The main use case for normalize
is to make it easy to determine whether, after
a change, the data has actually changed. By keeping the data normalized, a
simple R.equals
comparison will do.
L.pick({p1: l1, ...pls})
L.pick({p1: l1, ...pls})
creates a lens out of the given object template of
lenses. When viewed, an object is created, whose properties are obtained by
viewing through the lenses of the template. When set with an object, the
properties of the object are set to the context via the lenses of the template.
undefined
is treated as the equivalent of empty or non-existent in both
directions.
Note that, when set, L.pick
simply ignores any properties that the given
template doesn't mention. Note that the underlying data structure need not be
an object.
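A sketch based on the description (the template property names x and y are illustrative):
> L.view(L.pick({x: "a", y: "b"}), {a: 1, b: 2, c: 3})
{x: 1, y: 2}
> L.set(L.pick({x: "a", y: "b"}), {x: 10}, {a: 1, b: 2, c: 3})
{a: 10, c: 3}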
L.prop(string)
L.prop(string)
or L(string)
is similar to R.lensProp(string)
(see
lensProp), but acts as a partial lens following the conventions described in the tutorial: viewing a missing property (or a missing object) yields undefined, setting a missing property inserts it, and setting an existing property to undefined removes it.
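For example (the results reflect those conventions and are not verified output):
> L.view(L.prop("x"), undefined)
undefined
> L.set(L.prop("x"), undefined, {x: 1, y: 2})
{y: 2}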
L.replace(inn, out)
L.replace(inn, out), when viewed, replaces the value inn with out, and vice versa when set. Values are compared using R.equals (see equals).
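For example (the results reflect a reading of the description above):
> L.view(L.replace(1, 2), 1)
2
> L.view(L.replace(1, 2), 3)
3
> L.set(L.replace(1, 2), 2, undefined)
1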
The main use case for replace is to handle optional and required properties and elements. In most cases, rather than using replace, you will make selective use of default and required:
L.default(out)
L.default(out) is the same as L.replace(undefined, out).
L.define(value)
L.define(value) is the same as L(L.required(value), L.default(value)).
L.required(inn)
L.required(inn) is the same as L.replace(inn, undefined).
Consider the following REPL session using Ramda 0.19.1:
> R.set(R.lensPath(["x", "y"]), 1, {})
{ x: { y: 1 } }
> R.set(R.compose(R.lensProp("x"), R.lensProp("y")), 1, {})
TypeError: Cannot read property 'y' of undefined
> R.view(R.lensPath(["x", "y"]), {})
undefined
> R.view(R.compose(R.lensProp("x"), R.lensProp("y")), {})
TypeError: Cannot read property 'y' of undefined
> R.set(R.lensPath(["x", "y"]), undefined, {x: {y: 1}})
{ x: { y: undefined } }
> R.set(R.compose(R.lensProp("x"), R.lensProp("y")), undefined, {x: {y: 1}})
{ x: { y: undefined } }
One might assume that R.lensPath([p0, ...ps]) is equivalent to R.compose(R.lensProp(p0), ...ps.map(R.lensProp)), but that is not the case. With partial lenses you can robustly compose a path lens from prop lenses, R.compose(L.prop(p0), ...ps.map(L.prop)), or just use the shorthand notation L(p0, ...ps).
To illustrate the idea we could give lenses the naive type definition
type Lens s a = (s -> a, a -> s -> s)
defining a lens as a pair of a getter and a setter. The type of a partial lens would then be
type PLens s a = (Maybe s -> Maybe a, Maybe a -> Maybe s -> Maybe s)
which we can simplify to
type PLens s a = Lens (Maybe s) (Maybe a)
This means that partial lenses can be composed, viewed, mapped over and set using the same operations as with ordinary lenses. However, primitive partial lenses (e.g. L.prop) are not necessarily the same as primitive ordinary lenses (e.g. Ramda's lensProp).