partial.lenses
Partial lenses is a comprehensive, high-performance optics library for JavaScript
Lenses are basically an abstraction for simultaneously specifying operations to update and query immutable data structures. Lenses are highly composable and can be efficient. This library provides a rich collection of partial isomorphisms, lenses, and traversals, collectively known as optics, for manipulating JSON and users can write new optics for manipulating non-JSON objects, such as Immutable.js collections. A partial lens can view optional data, insert new data, update existing data and remove existing data and can, for example, provide defaults and maintain required data structure parts. Try Lenses!
L.all((maybeValue, index) => testable, traversal, maybeData) ~> boolean
L.and(traversal, maybeData) ~> boolean
L.any((maybeValue, index) => testable, traversal, maybeData) ~> boolean
L.collect(traversal, maybeData) ~> [...values]
L.collectAs((maybeValue, index) => maybeValue, traversal, maybeData) ~> [...values]
L.concat(monoid, traversal, maybeData) ~> value
L.concatAs((maybeValue, index) => value, monoid, traversal, maybeData) ~> value
L.count(traversal, maybeData) ~> number
L.countIf((maybeValue, index) => testable, traversal, maybeData) ~> number
L.foldl((value, maybeValue, index) => value, value, traversal, maybeData) ~> value
L.foldr((value, maybeValue, index) => value, value, traversal, maybeData) ~> value
L.isDefined(traversal, maybeData) ~> boolean
L.isEmpty(traversal, maybeData) ~> boolean
L.join(string, traversal, maybeData) ~> string
L.joinAs((maybeValue, index) => maybeString, string, traversal, maybeData) ~> string
L.maximum(traversal, maybeData) ~> maybeValue
L.maximumBy((maybeValue, index) => maybeKey, traversal, maybeData) ~> maybeValue
L.minimum(traversal, maybeData) ~> maybeValue
L.minimumBy((maybeValue, index) => maybeKey, traversal, maybeData) ~> maybeValue
L.none((maybeValue, index) => testable, traversal, maybeData) ~> boolean
L.or(traversal, maybeData) ~> boolean
L.product(traversal, maybeData) ~> number
L.productAs((maybeValue, index) => number, traversal, maybeData) ~> number
L.select(traversal, maybeData) ~> maybeValue
L.selectAs((maybeValue, index) => maybeValue, traversal, maybeData) ~> maybeValue
L.sum(traversal, maybeData) ~> number
L.sumAs((maybeValue, index) => number, traversal, maybeData) ~> number
L.append ~> lens
L.filter((maybeValue, index) => testable) ~> lens
L.find((maybeValue, index) => testable) ~> lens
L.findHint((maybeValue, {hint: index}) => testable, {hint: index}) ~> lens
L.findWith(...optics) ~> optic
L.index(elemIndex) ~> lens or elemIndex
L.last ~> lens
L.slice(maybeBegin, maybeEnd) ~> lens
Let's look at an example that is based on an actual early use case that led to the development of this library. What we have is an external HTTP API that both produces and consumes JSON objects that include, among many other properties, a titles property:
const sampleTitles = {titles: [{language: "en", text: "Title"},
{language: "sv", text: "Rubrik"}]}
We ultimately want to present the user with a rich enough editor, with features
such as undo-redo and validation, for manipulating the content represented by
those JSON objects. The titles
property is really just one tiny part of the
data model, but, in this tutorial, we only look at it, because it is sufficient
for introducing most of the basic ideas.
So, what we'd like to have is a way to access the text of titles in a given language. Given a language, we want to be able to query, update, insert, and remove the corresponding text.
Furthermore, when updating, inserting, and removing texts, we'd like the operations to treat the JSON as immutable and create new JSON objects with the changes rather than mutate existing JSON objects, because this makes it trivial to support features such as undo-redo and can also help to avoid bugs associated with mutable state.
Operations like these are what lenses are good at. Lenses can be seen as a
simple embedded DSL
for specifying data manipulation and querying functions. Lenses allow you to
focus on an element in a data structure by specifying a path from the root of
the data structure to the desired element. Given a lens, one can then perform
operations, like get and set, on the element that the lens focuses on.
Let's first import the libraries
import * as L from "partial.lenses"
import * as R from "ramda"
and ▶ play just a bit with lenses.
Note that links with the ▶ play symbol take you to an interactive version of this page where almost all of the code snippets are editable and evaluated in the browser. Note that due to the large number of snippets, the interactive version of this page takes a while to render. There is also a separate playground page that allows you to quickly try out lenses.
As mentioned earlier, with lenses we can specify a path to focus on an element.
To specify such a path we use primitive lenses like L.prop(propName), to access a named property of an object, and L.index(elemIndex), to access an element at a given index in an array, and compose the path using L.compose(...lenses).
So, to just get at the titles array of the sampleTitles we can use the lens L.prop("titles"):
L.get(L.prop("titles"),
sampleTitles)
// [{ language: "en", text: "Title" },
// { language: "sv", text: "Rubrik" }]
To focus on the first element of the titles array, we compose with the L.index(0) lens:
L.get(L.compose(L.prop("titles"),
L.index(0)),
sampleTitles)
// { language: "en", text: "Title" }
Then, to focus on the text, we compose with L.prop("text"):
L.get(L.compose(L.prop("titles"),
L.index(0),
L.prop("text")),
sampleTitles)
// "Title"
We can then use the same composed lens to also set the text:
L.set(L.compose(L.prop("titles"),
L.index(0),
L.prop("text")),
"New title",
sampleTitles)
// { titles: [{ language: "en", text: "New title" },
// { language: "sv", text: "Rubrik" }] }
In practice, specifying ad hoc lenses like this is not very useful. We'd like to access a text in a given language, so we want a lens parameterized by a given language. To create a parameterized lens, we can write a function that returns a lens. Such a lens should then find the title in the desired language.
Furthermore, while a simple path lens like above allows one to get and set an existing text, it doesn't know enough about the data structure to be able to properly insert new and remove existing texts. So, we will also need to specify such details along with the path to focus on.
Let's then just compose a parameterized lens for accessing the text of titles:
const textIn = language => L.compose(L.prop("titles"),
L.define([]),
L.normalize(R.sortBy(L.get("language"))),
L.find(R.whereEq({language})),
L.valueOr({language, text: ""}),
L.removable("text"),
L.prop("text"))
Take a moment to read through the above definition line by line. Each part
either specifies a step in the path to select the desired element or a way in
which the data structure must be treated at that point.
The L.prop(...) parts are already familiar. The other parts we will mention below.
Thanks to the parameterized search part, L.find(R.whereEq({language})), of the lens composition, we can use it to query titles:
L.get(textIn("sv"), sampleTitles)
// 'Rubrik'
The L.find lens is given a predicate that it then uses to find an element from an array to focus on. In this case the predicate is specified with the help of Ramda's R.whereEq function that creates an equality predicate from a given template object.
Partial lenses can generally deal with missing data. In this case,
when L.find
doesn't find an element, it instead works like a lens
to append a new element into an array.
So, if we use the partial lens to query a title that does not exist, we get the default:
L.get(textIn("fi"), sampleTitles)
// ''
We get this value, rather than undefined, thanks to the L.valueOr({language, text: ""}) part of our lens composition, which ensures that we get the specified value rather than null or undefined. We get the default even if we query from undefined:
L.get(textIn("fi"), undefined)
// ''
With partial lenses, undefined
is the equivalent of empty or non-existent.
As with ordinary lenses, we can use the same lens to update titles:
L.set(textIn("en"), "The title", sampleTitles)
// { titles: [ { language: 'en', text: 'The title' },
// { language: 'sv', text: 'Rubrik' } ] }
The same partial lens also allows us to insert new titles:
L.set(textIn("fi"), "Otsikko", sampleTitles)
// { titles: [ { language: 'en', text: 'Title' },
// { language: 'fi', text: 'Otsikko' },
// { language: 'sv', text: 'Rubrik' } ] }
There are a couple of things here that require attention.
The reason that the newly inserted object not only has the text property, but also the language property, is due to the L.valueOr({language, text: ""}) part that we used to provide a default.
Also note the position into which the new title was inserted. The array of titles is kept sorted thanks to the L.normalize(R.sortBy(L.get("language"))) part of our lens. The L.normalize lens transforms the data with the given function when it is either read or written. In this case we used Ramda's R.sortBy to specify that we want the titles to be kept sorted by language.
Finally, we can use the same partial lens to remove titles:
L.set(textIn("sv"), undefined, sampleTitles)
// { titles: [ { language: 'en', text: 'Title' } ] }
Note that a single title text is actually a part of an object. The key to having the whole object vanish, rather than just the text property, is the L.removable("text") part of our lens composition. It makes it so that when the text property is set to undefined, the result will be undefined rather than merely an object without the text property.
If we remove all of the titles, we get the required value:
L.set(L.seq(textIn("sv"),
textIn("en")),
undefined,
sampleTitles)
// { titles: [] }
Above we use L.seq to run the L.set operation over both of the focused titles. The titles property is not removed thanks to the L.define([]) part of our lens composition. It makes it so that when reading or writing through the lens, undefined becomes the given value.
Take out one (or more) L.define(...), L.normalize(...), L.valueOr(...) or L.removable(...) part(s) from the lens composition and try
to predict what happens when you rerun the examples with the modified lens
composition. Verify your reasoning by actually rerunning the examples.
For clarity, the previous code snippets avoided some of the shorthands that this library supports. In particular,
L.compose(...) can be abbreviated as an array [...],
L.prop(propName) can be abbreviated as propName, and
L.set(l, undefined, s) can be abbreviated as L.remove(l, s).
It is also typical to compose lenses out of short paths following the schema of the JSON data being manipulated. Recall the lens from the start of the example:
L.compose(L.prop("titles"),
L.define([]),
L.normalize(R.sortBy(L.get("language"))),
L.find(R.whereEq({language})),
L.valueOr({language, text: ""}),
L.removable("text"),
L.prop("text"))
Following the structure or schema of the JSON, we could break this into three separate lenses:
Furthermore, we could organize the lenses to reflect the structure of the JSON model:
const Title = {
text: [L.removable("text"), "text"]
}
const Titles = {
titleIn: language => [L.find(R.whereEq({language})),
L.valueOr({language, text: ""})]
}
const Model = {
titles: ["titles",
L.define([]),
L.normalize(R.sortBy(L.get("language")))],
textIn: language => [Model.titles,
Titles.titleIn(language),
Title.text]
}
We can now say:
L.get(Model.textIn("sv"), sampleTitles)
// 'Rubrik'
This style of organizing lenses is overkill for our toy example. In a more
realistic case the sampleTitles
object would contain many more properties.
Also, rather than composing a lens, like Model.textIn
above, to access a leaf
property from the root of our object, we might actually compose lenses
incrementally as we inspect the model structure.
So far we have used a lens to manipulate individual items. This library also supports traversals that compose with lenses and can target multiple items. Continuing on the tutorial example, let's define a traversal that targets all the texts:
const texts = [Model.titles,
L.elems,
Title.text]
What makes the above a traversal is the L.elems
part. The result
of composing a traversal with a lens is a traversal. The other parts of the
above composition should already be familiar from previous examples. Note how
we were able to use the previously defined Model.titles and Title.text lenses.
Now, we can use the above traversal to collect
all the texts:
L.collect(texts, sampleTitles)
// [ 'Title', 'Rubrik' ]
More generally, we can map and fold over texts. For example, we
could use L.maximumBy
to find a title with the maximum length:
L.maximumBy(R.length, texts, sampleTitles)
// 'Rubrik'
Of course, we can also modify texts. For example, we could uppercase all the titles:
L.modify(texts, R.toUpper, sampleTitles)
// { titles: [ { language: 'en', text: 'TITLE' },
//             { language: 'sv', text: 'RUBRIK' } ] }
We can also manipulate texts selectively. For example, we could remove all the texts that are longer than 5 characters:
L.remove([texts, L.when(t => t.length > 5)],
sampleTitles)
// { titles: [ { language: 'en', text: 'Title' } ] }
This concludes the tutorial. The reference documentation contains lots of tiny examples and a few more involved examples. The examples section describes a couple of lens compositions we've found practical as well as examples that may help to see possibilities beyond the immediately obvious.
The combinators provided by this library are available as named imports. Typically one just imports the library as:
import * as L from "partial.lenses"
This library has historically been developed in a fairly aggressive manner so that features have been marked as obsolete and removed in subsequent major versions. This can be particularly burdensome for developers of libraries that depend on partial lenses. To help the development of such libraries, this section specifies a tiny subset of this library as stable. While it is possible that the stable subset is later extended, nothing in the stable subset will ever be changed in a backwards incompatible manner.
The following operations, with the below mentioned limitations, constitute the stable subset:
L.compose(...optics) ~> optic
is stable with the exception
that one must not depend on being able to compose optics with ordinary
functions. Also, the use of arrays to denote composition is not part of the
stable subset. Note that L.compose()
is guaranteed to be
equivalent to the L.identity
optic.
L.get(lens, maybeData) ~> maybeValue
is stable without limitations.
L.lens(maybeData => maybeValue, (maybeValue, maybeData) => maybeData) ~> lens
is
stable with the exception that one must not depend on the user specified
getter and setter functions being passed more than 1 and 2 arguments,
respectively, and one must make no assumptions about any extra parameters
being passed.
L.modify(optic, maybeValue => maybeValue, maybeData) ~> maybeData
is
stable with the exception that one must not depend on the user specified
function being passed more than 1 argument and one must make no assumptions
about any extra parameters being passed.
L.remove(optic, maybeData) ~> maybeData
is stable without
limitations.
L.set(optic, maybeValue, maybeData) ~> maybeData
is stable without
limitations.
The main intention behind the stable subset is to enable a dependent library to make basic use of lenses created by client code using the dependent library.
In retrospect, the stable subset has existed since version 2.2.0.
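For instance, a dependent library that only needs to apply a user supplied lens can restrict itself to the stable operations above. The following is a minimal sketch of that idea; the helper name overBy is made up for illustration:
const overBy = (lens, fn) => data => L.modify(lens, fn, data)
overBy("x", x => x + 1)({x: 1, y: 2})
// { x: 2, y: 2 }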
The abstractions, traversals, lenses, and isomorphisms, provided by this library are collectively known as optics. Traversals can target any number of elements. Lenses are a restriction of traversals that target a single element. Isomorphisms are a restriction of lenses with an inverse.
In addition to basic bidirectional optics, this library also supports more arbitrary transforms using optics with sequencing and transform ops. Transforms allow operations, such as modifying a part of data structure multiple times or even in a loop, that are not possible with basic optics.
Some optics libraries provide many more abstractions, such as "optionals", "prisms" and "folds", to name a few, forming a DAG. Aside from being conceptually important, many of those abstractions are not only useful but required in a statically typed setting where data structures have precise constraints on their shapes, so to speak, and operations on data structures must respect those constraints at all times.
On the other hand, in a dynamically typed language like JavaScript, the shapes of run-time objects are naturally malleable. Nothing immediately breaks if a new object is created as a copy of another object by adding or removing a property, for example. We can exploit this to our advantage by considering all optics as partial and manage with a smaller amount of distinct classes of optics.
By definition, a total function, or just a function, is defined for all possible inputs. A partial function, on the other hand, may not be defined for all inputs.
As an example, consider an operation to return the first element of an array. Such an operation cannot be total unless the input is restricted to arrays that have at least one element. One might think that the operation could be made total by returning a special value in case the input array is empty, but that is no longer the same operation—the special value is not the first element of the array.
Now, in partial lenses, the idea is that in case the input does not match the expectation of an optic, then the input is treated as being undefined, which is the equivalent of non-existent: reading through the optic gives undefined and writing through the optic replaces the focus with the written value. This makes the optics in this library partial and allows specific partial optics, such as the simple L.prop lens, to be used in a wider range of situations than corresponding total optics.
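For instance, reading a property through a lens from undefined gives undefined, and writing through the same lens builds the expected structure:
L.get("x", undefined)
// undefined
L.set("x", 1, undefined)
// { x: 1 }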
Making all optics partial has a number of consequences. For one thing, it can potentially hide bugs: an incorrectly specified optic treats the input as undefined and may seem to work without raising an error. We have not found this to be a major source of bugs in practice. However, partiality also has a number of benefits. In particular, it allows optics to seamlessly support both insertion and removal. It also reduces the number of necessary abstractions and tends to make compositions of optics more concise with fewer required parts, both of which help to avoid bugs.
Starting with version 10.0.0, to strongly guide away from mutating data structures, optics call Object.freeze on any new objects they create when NODE_ENV is not production.
Why only non-production builds? Because Object.freeze can be quite expensive and the main benefit is in catching potential bugs early during development.
Also note that optics do not implicitly "deep freeze" data structures given to them or freeze data returned by user defined functions. Only objects newly created by optic functions themselves are frozen.
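For example, in a development build (assuming NODE_ENV is not set to production) one should see something like:
const frozen = L.set("x", 2, {x: 1})
Object.isFrozen(frozen)
// true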
A lot of libraries these days claim to be composable. Is any collection of functions composable? In the opinion of the author of this library, in order for something to be called "composable", a couple of conditions must be fulfilled: there must be an operation that performs the composition, and there must be useful, simplifying laws on how compositions behave.
Conversely, if there is no operation to perform composition or there are no useful simplifying laws on how compositions behave, then one should not call such a thing composable.
Now, optics are composable in several ways and in each of those ways there is an operation to perform the composition and laws on how such composed optics behave. Here is a table of the means of composition supported by this library:
| | Operation(s) | Semantics |
|---|---|---|
| Nesting | L.compose(...optics) or [...optics] | Monoid over unityped optics |
| Recursing | L.lazy(optic => optic) | Fixed point |
| Adapting | L.orElse(backupOptic, primaryOptic) | Semigroup over optics |
| Querying | L.choice(...optics) and L.chain(value => optic, optic) | MonadPlus over optics |
| Picking | L.pick({...prop:lens}) | Product of lenses |
| Branching | L.branch({...prop:traversal}) | Coproduct of traversals |
| Sequencing | L.seq(...transforms) | Sequential application of transforms |
The above table and, in particular, the semantics column is by no means complete. In particular, the documentation of this library does not generally spell out proofs of the semantics.
Aside from understanding laws on how forms of composition behave, it is useful to understand laws that are specific to operations on lenses and optics, in general. As described in the paper A clear picture of lens laws, many laws have been formulated for lenses and it can be useful to have lenses that do not necessarily obey some laws.
Here is a snippet that demonstrates that partial lenses can obey the laws of, so called, well behaved lenses:
const elem = 2
const data = {x: 1}
const lens = "x"
const test = (actual, expected) => R.equals(actual, expected) || actual
R.identity({
GetSet: test( L.set(lens, L.get(lens, data), data), data ),
SetGet: test( L.get(lens, L.set(lens, elem, data)), elem )
})
// { GetSet: true, SetGet: true }
Note, however, that partial lenses are not (total) lenses. You might want to ▶ play with the laws in your browser.
L.modify(optic, (maybeValue, index) => maybeValue, maybeData) ~> maybeData
L.modify
allows one to map over the focused element
L.modify(["elems", 0, "x"], R.inc, {elems: [{x: 1, y: 2}, {x: 3, y: 4}]})
// { elems: [ { x: 2, y: 2 }, { x: 3, y: 4 } ] }
or, when using a traversal, elements
L.modify(["elems", L.elems, "x"],
R.dec,
{elems: [{x: 1, y: 2}, {x: 3, y: 4}]})
// { elems: [ { x: 0, y: 2 }, { x: 2, y: 4 } ] }
of a data structure.
L.remove(optic, maybeData) ~> maybeData
L.remove
allows one to remove the focused element
L.remove([0, "x"], [{x: 1}, {x: 2}, {x: 3}])
// [ { x: 2 }, { x: 3 } ]
or, when using a traversal, elements
L.remove([L.elems, "x", L.when(x => x > 1)], [{x: 1}, {x: 2, y: 1}, {x: 3}])
// [ { x: 1 }, { y: 1 } ]
from a data structure.
Note that L.remove(optic, maybeData) is equivalent to L.set(lens, undefined, maybeData). With partial lenses, setting to undefined typically has the effect of removing the focused element.
L.set(optic, maybeValue, maybeData) ~> maybeData
L.set
allows one to replace the focused element
L.set(["a", 0, "x"], 11, {id: "z"})
// {a: [{x: 11}], id: 'z'}
or, when using a traversal, elements
L.set([L.elems, "x", L.when(x => x > 1)], -1, [{x: 1}, {x: 2, y: 1}, {x: 3}])
// [ { x: 1 }, { x: -1, y: 1 }, { x: -1 } ]
of a data structure.
Note that L.set(lens, maybeValue, maybeData) is equivalent to L.modify(lens, R.always(maybeValue), maybeData).
L.traverse(category, (maybeValue, index) => operation, optic, maybeData) ~> operation
L.traverse maps each focus to an operation and returns an operation that runs those operations in-order and collects the results. The category argument must be either a Functor, Applicative, or Monad, depending on the optic, as specified in L.toFunction.
Here is a somewhat involved example that uses the State monad and L.traverse to replace the elements of a data structure with the number of times those elements have appeared at that point in the data structure:
const Monad = ({of, chain}) => ({
of,
chain,
ap: (x2yS, xS) => chain(x2y => chain(x => of(x2y(x)), xS), x2yS),
map: (x2y, xS) => chain(x => of(x2y(x)), xS)
})
const StateM = Monad({
of: x => s => [x, s],
chain: (x2yS, xS) => s1 => {
const [x, s] = xS(s1)
return x2yS(x)(s)
}
})
const countS = x => x2n => {
const k = `${x}`
const n = (x2n[k] || 0) + 1
return [n, L.set(k, n, x2n)]
}
L.traverse(StateM, countS, L.elems, [1, 2, 1, 1, 2, 3, 4, 3, 4, 5])({})[0]
// [1, 1, 2, 3, 2, 1, 1, 2, 2, 1]
L.compose(...optics) ~> optic or [...optics]
L.compose creates a nested composition of the given optics and ordinary functions such that in L.compose(bigger, smaller) the smaller optic can only see and manipulate the part of the whole as seen through the bigger optic.
The following equations characterize composition:
L.compose() = L.identity
L.compose(l) = l
L.modify(L.compose(o, ...os)) = R.compose(L.modify(o), ...os.map(L.modify))
L.get(L.compose(o, ...os)) = R.pipe(L.get(o), ...os.map(L.get))
Furthermore, in this library, an array of optics [...optics] is treated as a composition L.compose(...optics). Using the array notation, the above equations can be written as:
[] = L.identity
[l] = l
L.modify([o, ...os]) = R.compose(L.modify(o), ...os.map(L.modify))
L.get([o, ...os]) = R.pipe(L.get(o), ...os.map(L.get))
For example:
L.set(["a", 1], "a", {a: ["b", "c"]})
// { a: [ 'b', 'a' ] }
L.get(["a", 1], {a: ["b", "c"]})
// 'c'
You can also directly compose optics with ordinary functions. The result of such a composition is a read-only optic.
For example:
L.get(["x", x => x + 1], {x: 1})
// 2
L.set(["x", x => x + 1], 3, {x: 1})
// { x: 1 }
Note that eligible ordinary functions must have a maximum arity of two: the first argument will be the data and the second will be the index. Both can, of course, be undefined. Also, starting from version 11.0.0, it is not guaranteed that such ordinary functions would not be passed other arguments, and therefore such functions should not depend on the number of arguments being passed nor on any arguments beyond the first two.
Note that R.compose is not the same as L.compose.
L.chain((value, index) => optic, optic) ~> optic
L.chain provides a monadic chain combinator for querying with optics. L.chain(toOptic, optic) is equivalent to
L.compose(optic, L.choose((maybeValue, index) =>
maybeValue === undefined
? L.zero
: toOptic(maybeValue, index)))
Note that with the R.always, L.chain, L.choice and L.zero combinators, one can consider optics as subsuming the maybe monad.
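For example, one could use L.chain to pick the continuation optic based on the focused value; a small sketch based on the equivalence above:
L.get(L.chain(x => typeof x === "number" ? L.identity : L.zero, "value"),
      {value: 101})
// 101
L.get(L.chain(x => typeof x === "number" ? L.identity : L.zero, "value"),
      {value: "one-oh-one"})
// undefined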
L.choice(...optics) ~> optic
L.choice returns a partial optic that acts like the first of the given optics whose view is not undefined on the given data structure. When the views of all of the given optics are undefined, the returned optic acts like L.zero, which is the identity element of L.choice. See also L.choices.
For example:
L.modify([L.elems, L.choice("a", "d")], R.inc, [{R: 1}, {a: 1}, {d: 2}])
// [ { R: 1 }, { a: 2 }, { d: 3 } ]
L.choose((maybeValue, index) => optic) ~> optic
L.choose creates an optic whose operation is determined by the given function that maps the underlying view, which can be undefined, to an optic. In other words, the L.choose combinator allows an optic to be constructed after examining the data structure being manipulated.
For example:
const majorAxis =
L.choose(({x, y} = {}) => Math.abs(x) < Math.abs(y) ? "y" : "x")
L.get(majorAxis, {x: -3, y: 1})
// -3
L.modify(majorAxis, R.negate, {x: -3, y: 1})
// { x: 3, y: 1 }
L.optional ~> optic
L.optional is an optic over an optional element. When used as a traversal, and the focus is undefined, the traversal is empty. When used as a lens, and the focus is undefined, the lens will be read-only.
As an example, consider the difference between:
L.set([L.elems, "x"], 3, [{x: 1}, {y: 2}])
// [ { x: 3 }, { y: 2, x: 3 } ]
and:
L.set([L.elems, "x", L.optional], 3, [{x: 1}, {y: 2}])
// [ { x: 3 }, { y: 2 } ]
Note that L.optional is equivalent to L.when(x => x !== undefined).
L.when((maybeValue, index) => testable) ~> optic
L.when allows one to selectively skip elements within a traversal or to selectively turn a lens into a read-only lens whose view is undefined.
For example:
L.modify([L.elems, L.when(x => x > 0)], R.negate, [0, -1, 2, -3, 4])
// [ 0, -1, -2, -3, -4 ]
Note that L.when(p) is equivalent to L.choose((x, i) => p(x, i) ? L.identity : L.zero).
L.zero ~> optic
L.zero is the identity element of L.choice and L.chain. As a traversal, L.zero is a traversal of no elements and, as a lens, i.e. when used with L.get, L.zero is a read-only lens whose view is always undefined.
For example:
L.collect([L.elems,
L.choose(x => x instanceof Array ? L.elems
: x instanceof Object ? "x"
: L.zero)],
[1, {x: 2}, [3,4]])
// [ 2, 3, 4 ]
L.choices(optic, ...optics) ~> optic
L.choices returns a partial optic that acts like the first of the given optics whose view is not undefined on the given data structure. When the views of all of the given optics are undefined, the returned optic acts like the last of the given optics. See also L.choice.
For example:
L.set([L.elems, L.choices("a", "d")], 3, [{R: 1}, {a: 1}, {d: 2}])
// [ { R: 1, d: 3 }, { a: 3 }, { d: 3 } ]
L.orElse(backupOptic, primaryOptic) ~> optic
L.orElse(backupOptic, primaryOptic) acts like primaryOptic when its view is not undefined and otherwise like backupOptic. You can use L.orElse on its own with R.reduceRight (and R.reduce) to create an associative choice over optics, or use L.orElse to specify a default or backup optic for L.choice, for example.
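The description above implies, for example, the following behaviour (a small sketch rather than an example from the original documentation):
L.get(L.orElse("b", "a"), {a: 1, b: 2})
// 1
L.get(L.orElse("b", "a"), {b: 2})
// 2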
L.lazy(optic => optic) ~> optic
L.lazy
can be used to construct optics lazily. The function given to L.lazy
is passed a forwarding proxy to its return value and can also make forward
references to other optics and possibly construct a recursive optic.
Note that when using L.lazy
to construct a recursive optic, it will only work
in a meaningful way when the recursive uses are either precomposed
or presequenced with some other optic in a way that neither causes
immediate nor unconditional recursion.
For example, here is a traversal that targets all the primitive elements in a data structure of nested arrays and objects:
const flatten = [L.optional, L.lazy(rec => {
const elems = [L.elems, rec]
const values = [L.values, rec]
return L.choose(x => x instanceof Array ? elems
: x instanceof Object ? values
: L.identity)
})]
Note that the above creates a cyclic representation of the traversal.
Now, for example:
L.collect(flatten, [[[1], 2], {y: 3}, [{l: 4, r: [5]}, {x: 6}]])
// [ 1, 2, 3, 4, 5, 6 ]
L.modify(flatten, x => x+1, [[[1], 2], {y: 3}, [{l: 4, r: [5]}, {x: 6}]])
// [ [ [ 2 ], 3 ], { y: 4 }, [ { l: 5, r: [ 6 ] }, { x: 7 } ] ]
L.remove([flatten, L.when(x => 3 <= x && x <= 4)],
[[[1], 2], {y: 3}, [{l: 4, r: [5]}, {x: 6}]])
// [ [ [ 1 ], 2 ], [ { r: [ 5 ] }, { x: 6 } ] ]
L.log(...labels) ~> optic
L.log(...labels) is an identity optic that outputs console.log messages with the given labels (or format in Node.js) when data flows in either direction, get or set, through the lens.
For example:
L.set(["x", L.log("x")], "11", {x: 10})
// x get 10
// x set 11
// { x: '11' }
L.set(["x", L.log("%s x: %j")], "11", {x: 10})
// get x: 10
// set x: "11"
// { x: '11' }
L.toFunction(optic) ~> optic
L.toFunction converts a given optic, which can be a string, an integer, an array, or a function, to a function. This can be useful for implementing new combinators that cannot otherwise be implemented using the combinators provided by this library. See also L.traverse.
For isomorphisms and lenses, the returned function will have the signature
(Maybe s, Index, Functor c, (Maybe a, Index) -> c b) -> c t
for traversals the signature will be
(Maybe s, Index, Applicative c, (Maybe a, Index) -> c b) -> c t
and for transforms the signature will be
(Maybe s, Index, Monad c, (Maybe a, Index) -> c b) -> c t
Note that the above signatures are written using the "tupled" parameter notation
(...) -> ...
to denote that the functions are not curried.
The Functor, Applicative, and Monad arguments are expected to conform to their Static Land specifications.
Note that, in conjunction with partial optics, it may be advantageous to have the algebras allow for partiality. With traversals it is also possible, for example, to simply post-compose optics with L.optional to skip undefined elements.
Note that if you simply wish to perform an operation that needs roughly the full expressive power of the underlying lens encoding, you should use L.traverse, because it is independent of the underlying encoding, while L.toFunction essentially exposes the underlying encoding, and it is better to avoid depending on that.
Ordinary optics are passive and bidirectional in such a way that the same optic can be both read and written through. The underlying implementation of this library also allows one to implement active operations that don't quite provide the same kind of passive bidirectionality, but can be used to flexibly modify data structures. Such operations are called transforms in this library.
Unlike ordinary optics, transforms allow for monadic sequencing, which makes it possible to operate on a part of data structure multiple times. This allows operations that are impossible to implement using ordinary optics, but also potentially makes it more difficult to reason about the results. This ability also makes it impossible to read through transforms in the same sense as with ordinary optics.
Recall that lenses have a single focus and traversals have multiple focuses that can then be operated upon using various operations such as L.modify. Although it is not strictly enforced by this library, it is perhaps clearest to think that transforms have no focuses. A transform using transform ops, that act as traversals of no elements, can, and perhaps preferably should, be empty and should be executed using L.transform, which, unlike L.modify, takes no user defined operation to apply to focuses.
The line between transforms and optics is not entirely clear cut in the sense that it is technically possible to use various transform ops within an ordinary optic definition. Furthermore, it is also possible to use sequencing to create transforms that have focuses that can then be operated upon. The results of such uses don't quite follow the laws of ordinary optics, but may sometimes be useful.
L.transform(optic, maybeData) ~> maybeData
L.transform(o, s) is shorthand for L.modify(o, x => x, s) and is intended for running transforms defined using transform ops.
L.seq(...transforms) ~> transform
L.seq
creates a transform that modifies the focus with each of the given
transforms in sequence.
Here is an example of a bottom-up transform over a data structure of nested objects and arrays:
const everywhere = [L.optional, L.lazy(rec => {
const elems = L.seq([L.elems, rec], L.identity)
const values = L.seq([L.values, rec], L.identity)
return L.choose(x => x instanceof Array ? elems
: x instanceof Object ? values
: L.identity)
})]
The above everywhere transform is similar to the F.everywhere transform of the fastener zipper-library. Note that the above everywhere and the flatten example differ in that flatten only targets the non-object and non-array elements of the data structure while everywhere also targets those.
L.modify(everywhere, x => [x], {xs: [{x: 1}, {x: 2}]})
// [ {xs: [ [ [ { x: [ 1 ] } ], [ { x: [ 2 ] } ] ] ] } ]
L.modifyOp((maybeValue, index) => maybeValue) ~> optic
L.modifyOp creates an optic that maps the focus with the given function. When used as a traversal, L.modifyOp acts as a traversal of no elements. When used as a lens, L.modifyOp acts as a read-only lens whose view is the mapped focus. Usually, however, L.modifyOp is used within transforms.
For example:
L.transform(L.branch({xs: [L.elems, L.modifyOp(R.inc)],
z: [L.optional, L.modifyOp(R.negate)],
ys: [L.elems, L.modifyOp(R.dec)]}),
{xs: [1, 2, 3],
ys: [1, 2, 3]})
// { xs: [ 2, 3, 4 ],
// ys: [ 0, 1, 2 ] }
L.removeOp ~> optic
L.removeOp is shorthand for L.setOp(undefined).
Here is an example based on a question from a user:
const sampleToFilter = {elements: [{time: 1, subelements: [1, 2, 3, 4]},
{time: 2, subelements: [1, 2, 3, 4]},
{time: 3, subelements: [1, 2, 3, 4]}]}
L.transform(['elements',
L.elems,
L.seq([L.when(elem => elem.time < 2), L.removeOp],
['subelements', L.elems, L.when(i => i < 3), L.removeOp])],
sampleToFilter)
// { elements: [ { time: 2, subelements: [ 3, 4 ] },
// { time: 3, subelements: [ 3, 4 ] } ] }
The idea is to filter the data both by time and by subelements.
L.setOp(maybeValue) ~> optic
L.setOp(x) is shorthand for L.modifyOp(R.always(x)).
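For example, one might clamp negative numbers to zero within a transform; a small sketch combining L.setOp with combinators documented above:
L.transform([L.elems, L.when(x => x < 0), L.setOp(0)],
            [-1, 2, -3])
// [ 0, 2, 0 ]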
A traversal operates over a collection of non-overlapping focuses that are visited only once and can, for example, be collected, folded, modified, set and removed. Put in another way, a traversal specifies a set of paths to elements in a data structure.
L.branch({prop: traversal, ...props}) ~> traversal
L.branch
creates a new traversal from a given possibly nested template object
that specifies how the new traversal should visit the properties of an object.
If one thinks of traversals as specifying sets of paths, then the template can
be seen as mapping each property to a set of paths to traverse.
For example:
L.collect(L.branch({first: L.elems, second: {value: L.identity}}),
{first: ["x"], second: {value: "y"}})
// [ 'x', 'y' ]
The use of L.identity above might be puzzling at first. L.identity essentially specifies an empty path. So, when a property is mapped to L.identity in the template given to L.branch, it means that the element is to be visited by the resulting traversal.
Note that you can also compose L.branch with other optics. For example, you can compose with L.pick to create a traversal over specific elements of an array:
L.modify([L.pick({x: 0, z: 2}),
L.branch({x: L.identity, z: L.identity})],
R.negate,
[1, 2, 3])
// [ -1, 2, -3 ]
See the BST traversal section for a more meaningful example.
L.elems ~> traversal
L.elems is a traversal over the elements of an array-like object. When written through, L.elems always produces an Array.
For example:
L.modify(["xs", L.elems, "x"], R.inc, {xs: [{x: 1}, {x: 2}]})
// { xs: [ { x: 2 }, { x: 3 } ] }
Just like with other optics operating on array-like objects, when manipulating non-Array objects, L.rewrite can be used to convert the result to the desired type, if necessary:
L.modify([L.rewrite(xs => Int8Array.from(xs)), L.elems],
R.inc,
Int8Array.from([-1,4,0,2,4]))
// Int8Array [ 0, 5, 1, 3, 5 ]
L.values ~> traversal
L.values is a traversal over the values of an instanceof Object. When written through, L.values always produces an Object.
For example:
L.modify(L.values, R.negate, {a: 1, b: 2, c: 3})
// { a: -1, b: -2, c: -3 }
When manipulating objects with a non-Object constructor
function XYZ(x,y,z) {
this.x = x
this.y = y
this.z = z
}
XYZ.prototype.norm = function () {
return (this.x * this.x +
this.y * this.y +
this.z * this.z)
}
L.rewrite can be used to convert the result to the desired type, if necessary:
const objectTo = C => o => Object.assign(Object.create(C.prototype), o)
L.modify([L.rewrite(objectTo(XYZ)), L.values],
R.negate,
new XYZ(1,2,3))
// XYZ { x: -1, y: -2, z: -3 }
L.matches(/.../g) ~> traversal
L.matches, when given a regular expression with the global flag, /.../g, is a partial traversal over the matches that the regular expression gives over the focused string. See also L.matches.
WARNING: L.matches is experimental and might be removed or changed before the next major release.
For example:
L.collect([L.matches(/[^&=?]+=[^&=]+/g),
L.pick({name: L.matches(/^[^=]+/),
value: L.matches(/[^=]+$/)})],
"?first=foo&second=bar")
// [ { name: 'first', value: 'foo' },
// { name: 'second', value: 'bar' } ]
Note that when writing through L.matches and the result would be an empty string, "", the result will be undefined to support propagating removal.
Note that an empty match terminates the traversal. It is possible to make use of that feature, but it is also possible that an empty match is due to an incorrect regular expression that can match the empty string.
L.all((maybeValue, index) => testable, traversal, maybeData) ~> boolean
L.all
determines whether all of the elements focused on by the given traversal
satisfy the given predicate.
For example:
L.all(x => 1 <= x && x <= 6,
flatten,
[[[1], 2], {y: 3}, [{l: 4, r: [5]}, {x: 6}]])
// true
See also: L.any, L.none, and L.selectAs.
L.and(traversal, maybeData) ~> boolean
L.and
determines whether all of the elements focused on by the given traversal
are truthy.
For example:
L.and(L.elems, [])
// true
Note that L.and is equivalent to L.all(x => x). See also: L.or.
L.any((maybeValue, index) => testable, traversal, maybeData) ~> boolean
L.any
determines whether any of the elements focused on by the given traversal
satisfy the given predicate.
For example:
L.any(x => x > 5,
flatten,
[[[1], 2], {y: 3}, [{l: 4, r: [5]}, {x: 6}]])
// true
See also: L.all, L.none, and L.selectAs.
L.collect(traversal, maybeData) ~> [...values]
L.collect returns an array of the non-undefined elements focused on by the given traversal or lens from a data structure.
For example:
L.collect(["xs", L.elems, "x"], {xs: [{x: 1}, {x: 2}]})
// [ 1, 2 ]
Note that L.collect is equivalent to L.collectAs(x => x).
L.collectAs((maybeValue, index) => maybeValue, traversal, maybeData) ~> [...values]
L.collectAs returns an array of the elements focused on by the given traversal or lens from a data structure and mapped by the given function to a non-undefined value.
For example:
L.collectAs(R.negate, ["xs", L.elems, "x"], {xs: [{x: 1}, {x: 2}]})
// [ -1, -2 ]
L.collectAs(toMaybe, traversal, maybeData) is equivalent to L.concatAs(toCollect, Collect, [traversal, toMaybe], maybeData) where Collect and toCollect are defined as follows:
const Collect = {empty: R.always([]), concat: R.concat}
const toCollect = x => x !== undefined ? [x] : []
So:
L.concatAs(toCollect,
Collect,
["xs", L.elems, "x", R.negate],
{xs: [{x: 1}, {x: 2}]})
// [ -1, -2 ]
The internal implementation of L.collectAs is optimized and faster than the above naïve implementation.
L.concat(monoid, traversal, maybeData) ~> value
L.concat({empty, concat}, t, s) performs a fold, using the given concat and empty operations, over the elements focused on by the given traversal or lens t from the given data structure s. The concat operation and the constant returned by empty() should form a monoid over the values focused on by t.
For example:
const Sum = {empty: () => 0, concat: (x, y) => x + y}
L.concat(Sum, L.elems, [1, 2, 3])
// 6
Note that L.concat is staged so that, after being given the first argument, L.concat(m), a computation step is performed.
L.concatAs((maybeValue, index) => value, monoid, traversal, maybeData) ~> value
L.concatAs(xMi2r, {empty, concat}, t, s) performs a map, using the given function xMi2r, and a fold, using the given concat and empty operations, over the elements focused on by the given traversal or lens t from the given data structure s. The concat operation and the constant returned by empty() should form a monoid over the values returned by xMi2r.
For example:
L.concatAs(x => x, Sum, L.elems, [1, 2, 3])
// 6
Note that L.concatAs is staged so that, after being given the first two arguments, L.concatAs(f, m), a computation step is performed.
L.count(traversal, maybeData) ~> number
L.count goes through all the elements focused on by the traversal and counts the number of non-undefined elements.
For example:
L.count([L.elems, "x"], [{x: 11}, {y: 12}])
// 1
L.countIf((maybeValue, index) => testable, traversal, maybeData) ~> number
L.countIf
goes through all the elements focused on by the traversal and counts
the number of elements for which the given predicate returns a truthy value.
For example:
L.countIf(L.isDefined("x"), [L.elems], [{x: 11}, {y: 12}])
// 1
L.foldl((value, maybeValue, index) => value, value, traversal, maybeData) ~> value
L.foldl
performs a fold from left over the elements focused on by the given
traversal.
For example:
L.foldl((x, y) => x + y, 0, L.elems, [1,2,3])
// 6
L.foldr((value, maybeValue, index) => value, value, traversal, maybeData) ~> value
L.foldr
performs a fold from right over the elements focused on by the given
traversal.
For example:
L.foldr((x, y) => x * y, 1, L.elems, [1,2,3])
// 6
L.isDefined(traversal, maybeData) ~> boolean
L.isDefined determines whether or not the given traversal focuses on any non-undefined element on the given data structure. When used with a lens, L.isDefined basically allows you to check whether the target of the lens exists or, in other words, whether the data structure has the targeted element. See also L.isEmpty.
For example:
L.isDefined("x", {y: 1})
// false
L.isEmpty(traversal, maybeData) ~> boolean
L.isEmpty determines whether or not the given traversal focuses on any elements, undefined or otherwise, on the given data structure. Note that when used with a lens, L.isEmpty always returns false, because lenses always have a single focus. See also L.isDefined.
For example:
L.isEmpty(flatten, [[],[[[],[]],[]]])
// true
L.join(string, traversal, maybeData) ~> string
L.join
creates a string by joining the optional elements targeted by the given
traversal with the given delimiter.
L.join(",", [L.elems, "x"], [{x: 1}, {y: 2}, {x: 3}])
// "1,3"
L.joinAs((maybeValue, index) => maybeString, string, traversal, maybeData) ~> string
L.joinAs
creates a string by converting the elements targeted by the given
traversal to optional strings with the given function and then joining those
strings with the given delimiter.
For example:
L.joinAs(JSON.stringify, ",", L.elems, [{x: 1}, {y: 2}])
// '{"x":1},{"y":2}'
L.maximum(traversal, maybeData) ~> maybeValue
L.maximum
computes a maximum of the optional elements targeted by the
traversal.
For example:
L.maximum(L.elems, [1,2,3])
// 3
Note that elements are ordered according to the >
operator.
L.maximumBy((maybeValue, index) => maybeKey, traversal, maybeData) ~> maybeValue
L.maximumBy
computes a maximum of the elements targeted by the traversal based
on the optional keys returned by the given function. Elements for which the
returned key is undefined
are skipped.
For example:
L.maximumBy(R.length, L.elems, ["first", "second", "--||--", "third"])
// "second"
Note that keys are ordered according to the >
operator.
L.minimum(traversal, maybeData) ~> maybeValue
L.minimum
computes a minimum of the optional elements targeted by the
traversal.
For example:
L.minimum(L.elems, [1,2,3])
// 1
Note that elements are ordered according to the <
operator.
L.minimumBy((maybeValue, index) => maybeKey, traversal, maybeData) ~> maybeValue
L.minimumBy
computes a minimum of the elements targeted by the traversal based
on the optional keys returned by the given function. Elements for which the
returned key is undefined
are skipped.
For example:
L.minimumBy(L.get("x"), L.elems, [{x: 1}, {x: -3}, {x: 2}])
// {x: -3}
Note that keys are ordered according to the <
operator.
L.none((maybeValue, index) => testable, traversal, maybeData) ~> boolean
L.none
determines whether none of the elements focused on by the given
traversal satisfy the given predicate.
For example:
L.none(x => x > 5,
flatten,
[[[1], 2], {y: 3}, [{l: 4, r: [5]}, {x: 6}]])
// false
See also: L.all, L.any, and L.selectAs.
L.or(traversal, maybeData) ~> boolean
L.or
determines whether any of the elements focused on by the given traversal
is truthy.
For example:
L.or(L.elems, [])
// false
Note that L.or is equivalent to L.any(x => x). See also: L.and.
L.product(traversal, maybeData) ~> number
L.product
computes the product of the optional numbers targeted by the
traversal.
For example:
L.product(L.elems, [1,2,3])
// 6
L.productAs((maybeValue, index) => number, traversal, maybeData) ~> number
L.productAs
computes the product of the numbers returned by the given function
for the elements targeted by the traversal.
For example:
L.productAs((x, i) => x + i, L.elems, [3,2,1])
// 27
L.select(traversal, maybeData) ~> maybeValue
L.select
goes lazily over the elements focused on by the given traversal and
returns the first non-undefined
element.
L.select([L.elems, "y"], [{x:1},{y:2},{z:3}])
// 2
Note that L.select is equivalent to L.selectAs(x => x).
L.selectAs((maybeValue, index) => maybeValue, traversal, maybeData) ~> maybeValue
L.selectAs
goes lazily over the elements focused on by the given traversal,
applying the given function to each element, and returns the first
non-undefined
value returned by the function.
L.selectAs(x => x > 3 ? -x : undefined, L.elems, [3,1,4,1,5])
// -4
L.selectAs
operates lazily. The user specified function is only applied to
elements until the first non-undefined
value is returned and after that
L.selectAs
returns without examining more elements.
Note that L.selectAs
can be used to implement many other operations over
traversals such as finding an element matching a predicate and checking whether
all/any elements match a predicate. For example, here is how you could
implement a for all predicate over traversals:
const all = (p, t, s) => !L.selectAs(x => p(x) ? undefined : true, t, s)
Now:
all(x => x < 9,
flatten,
[[[1], 2], {y: 3}, [{l: 4, r: [5]}, {x: 6}]])
// true
L.sum(traversal, maybeData) ~> number
L.sum
computes the sum of the optional numbers targeted by the traversal.
For example:
L.sum(L.elems, [1,2,3])
// 6
L.sumAs((maybeValue, index) => number, traversal, maybeData) ~> number
L.sumAs
computes the sum of the numbers returned by the given function for the
elements targeted by the traversal.
For example:
L.sumAs((x, i) => x + i, L.elems, [3,2,1])
// 9
Lenses always have a single focus which can be viewed directly. Put another way, a lens specifies a path to a single element in a data structure.
L.get(lens, maybeData) ~> maybeValue
L.get
returns the element focused on by a lens from a data
structure.
For example:
L.get("y", {x: 112, y: 101})
// 101
Note that L.get
does not work on traversals.
L.lens((maybeData, index) => maybeValue, (maybeValue, maybeData, index) => maybeData) ~> lens
L.lens
creates a new primitive lens. The first parameter is the getter and
the second parameter is the setter. The setter takes two parameters: the
first is the value written and the second is the data structure to write into.
One should think twice before introducing a new primitive lens: most of the combinators in this library have been introduced to reduce the need to write new primitive lenses. With that said, there are still valid reasons to create new primitive lenses. For example, here is a lens that we've used in production, written with the help of Moment.js, to bidirectionally convert a pair of start and end times to a duration:
const timesAsDuration = L.lens(
({start, end} = {}) => {
if (undefined === start)
return undefined
if (undefined === end)
return "Infinity"
return moment.duration(moment(end).diff(moment(start))).toJSON()
},
(duration, {start = moment().toJSON()} = {}) => {
if (undefined === duration || "Infinity" === duration) {
return {start}
} else {
return {
start,
end: moment(start).add(moment.duration(duration)).toJSON()
}
}
}
)
Now, for example:
L.get(timesAsDuration,
{start: "2016-12-07T09:39:02.451Z",
end: moment("2016-12-07T09:39:02.451Z").add(10, "hours").toISOString()})
// "PT10H"
L.set(timesAsDuration,
"PT10H",
{start: "2016-12-07T09:39:02.451Z",
end: "2016-12-07T09:39:02.451Z"})
// { end: '2016-12-07T19:39:02.451Z',
// start: '2016-12-07T09:39:02.451Z' }
When composed with L.pick, to flexibly pick the start and end times, the above can be adapted to work in a wide variety of cases. However, the above lens will never be added to this library, because it would require adding a dependency on Moment.js.
See the Interfacing with Immutable.js section for another example of using L.lens.
L.setter((maybeValue, maybeData, index) => maybeData) ~> lens
L.setter(set) is shorthand for L.lens(x => x, set).
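For example, one could define a setter for a property while leaving the getter as the identity; a hypothetical illustration (the property name x is made up):
const setX = L.setter((x, data) => ({...data, x}))
L.get(setX, {x: 1, y: 2})
// { x: 1, y: 2 }
L.set(setX, 3, {x: 1, y: 2})
// { x: 3, y: 2 }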
L.foldTraversalLens((traversal, maybeData) ~> maybeValue, traversal) ~> lens
L.foldTraversalLens
creates a lens from a fold and a traversal. To make
sense, the fold should compute or pick a representative from the elements
focused on by the traversal such that when all the elements are equal then so is
the representative.
For example:
L.get(L.foldTraversalLens(L.minimum, L.elems), [3,1,4])
// 1
L.set(L.foldTraversalLens(L.minimum, L.elems), 2, [3,1,4])
// [ 2, 2, 2 ]
See the Collection toggle section for a more interesting example.
L.augment({prop: object => value, ...props}) ~> lens
L.augment is given a template of functions to compute new properties. When not viewing or setting a defined object, the result is undefined. When viewing a defined object, the object is extended with the computed properties. When set with a defined object, the extended properties are removed.
For example:
L.modify(L.augment({y: r => r.x + 1}),
r => ({x: r.x + r.y, y: 2, z: r.x - r.y}),
{x: 1})
// { x: 3, z: -1 }
L.defaults(valueIn) ~> lens
L.defaults
is used to specify a default context or value for an element in
case it is missing. When set with the default value, the effect is to remove
the element. This can be useful for both making partial lenses with propagating
removal and for avoiding having to check for and provide default values
elsewhere.
For example:
L.get(["items", L.defaults([])], {})
// []
L.get(["items", L.defaults([])], {items: [1, 2, 3]})
// [ 1, 2, 3 ]
L.set(["items", L.defaults([])], [], {items: [1, 2, 3]})
// undefined
Note that L.defaults(valueIn) is equivalent to L.replace(undefined, valueIn).
L.define(value) ~> lens
L.define
is used to specify a value to act as both the default value and the
required value for an element.
L.get(["x", L.define(null)], {y: 10})
// null
L.set(["x", L.define(null)], undefined, {y: 10})
// { y: 10, x: null }
Note that L.define(value) is equivalent to [L.required(value), L.defaults(value)].
L.normalize((value, index) => maybeValue) ~> lens
L.normalize maps the value with the same given transform when viewed and set and implicitly maps undefined to undefined.
One use case for normalize is to make it easy to determine whether, after a change, the data has actually changed. By keeping the data normalized, a simple R.equals comparison will do.
Note that the difference between L.normalize and L.rewrite is that L.normalize applies the transform in both directions while L.rewrite only applies the transform when writing.
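For example, a normalizing lens could keep an array sorted both when read and when written; a sketch using Ramda's R.sortBy as in the tutorial above:
L.get(L.normalize(R.sortBy(R.identity)), [3, 1, 2])
// [ 1, 2, 3 ]
L.set([L.normalize(R.sortBy(R.identity)), L.append], 0, [3, 1, 2])
// [ 0, 1, 2, 3 ]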
L.required(valueOut) ~> lens
L.required
is used to specify that an element is not to be removed; in case it
is removed, the given value will be substituted instead.
For example:
L.remove(["items", 0], {items: [1]})
// undefined
L.remove([L.required({}), "items", 0], {items: [1]})
// {}
L.remove(["items", L.required([]), 0], {items: [1]})
// { items: [] }
Note that L.required(valueOut) is equivalent to L.replace(valueOut, undefined).
L.rewrite((valueOut, index) => maybeValueOut) ~> lens
L.rewrite maps the value with the given transform when set and implicitly maps undefined to undefined. One use case for rewrite is to re-establish data structure invariants after changes.
Note that the difference between L.normalize and L.rewrite is that L.normalize applies the transform in both directions while L.rewrite only applies the transform when writing.
See the BST as a lens section for a meaningful example.
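To contrast with L.normalize, here is a minimal sketch showing that the transform is applied only on write, not on read:
L.get(L.rewrite(x => x + 1), 1)
// 1
L.set(L.rewrite(x => x + 1), 1, 0)
// 2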
Objects that have a non-negative integer length and strings, which are not considered Object instances in JavaScript, are considered array-like objects by partial optics. See also L.seemsArrayLike.
When writing through an optic that operates on array-like objects, the result is always either undefined, in case the result would be empty, or a plain Array. For example:
L.set(1, "a", "LoLa")
// [ 'L', 'a', 'L', 'a' ]
It may seem like the result should be of the same type as the object being manipulated, but that is problematic, because the focus of a partial optic is always optional. Instead, when manipulating strings or array-like non-Array objects, L.rewrite can be used to convert the result to the desired type, if necessary. For example:
L.set([L.rewrite(R.join("")), 1], "a", "LoLa")
// 'LaLa'
Also, when manipulating array-like objects, partial lenses generally ignore everything but the length property and the integer properties from 0 to length-1.
L.append ~> lens
L.append is a write-only lens that can be used to append values to an array-like object. The view of L.append is always undefined.
For example:
L.get(L.append, ["x"])
// undefined
L.set(L.append, "x", undefined)
// [ 'x' ]
L.set(L.append, "x", ["z", "y"])
// [ 'z', 'y', 'x' ]
Note that L.append is equivalent to L.index(i) with the index i set to the length of the focused array, or 0 in case the focus is not a defined array.
L.filter((maybeValue, index) => testable) ~> lens
L.filter operates on array-like objects. When not viewing an array-like object, the result is undefined. When viewing an array-like object, only elements matching the given predicate will be returned. When set, the resulting array will be formed by concatenating the elements of the set array-like object and the elements of the complement of the filtered focus. If the resulting array would be empty, the whole result will be undefined.
For example:
L.set(L.filter(x => x <= "2"), "abcd", "3141592")
// [ 'a', 'b', 'c', 'd', '3', '4', '5', '9' ]
NOTE: If you are merely modifying a data structure, and don't need to limit
yourself to lenses, consider using the L.elems
traversal composed
with L.when
.
An alternative design for filter could implement a smarter algorithm to combine
arrays when set. For example, an algorithm based
on edit distance could be used to
maintain relative order of elements. While this would not be difficult to
implement, it doesn't seem to make sense, because in most cases use
of L.normalize
or L.rewrite
would be
preferable. Also, the L.elems
traversal composed
with L.when
will retain order of elements.
L.find((maybeValue, index) => testable) ~> lens
L.find
operates on array-like objects
like L.index
, but the index to be viewed is determined by finding
the first element from the focus that matches the given predicate. When no
matching element is found, the effect is the same as with L.append
.
L.remove(L.find(x => x <= 2), [3,1,4,1,5,9,2])
// [ 3, 4, 1, 5, 9, 2 ]
L.findHint((maybeValue, {hint: index}) => testable, {hint: index}) ~> lens
L.findHint
is much like L.find
and determines the index of
an array-like object to operate on by searching with the given
predicate. Unlike L.find
, L.findHint
is designed to operate
efficiently when used repeatedly on uniquely identifiable targets, such as
objects with unique id
s. To this end, L.findHint
is given an object with a
hint
property. The search is started from the closest existing index to the
hint
and then by increasing distance from that index. The hint
is updated
after each search and the hint
can also be mutated from the outside. The
hint
object is also passed to the predicate as the second argument. This
makes it possible to both practically eliminate the linear search and to
implement the predicate without allocating extra memory for it.
WARNING: L.findHint
is experimental and might be removed or changed before
the next major release.
For example:
L.modify([L.findHint(R.whereEq({id: 2}), {hint: 2}), "value"],
R.toUpper,
[{id: 3, value: "a"},
{id: 2, value: "b"},
{id: 1, value: "c"},
{id: 4, value: "d"},
{id: 5, value: "e"}])
// [{id: 3, value: "a"},
// {id: 2, value: "B"},
// {id: 1, value: "c"},
// {id: 4, value: "d"},
// {id: 5, value: "e"}]
L.findWith(...optics) ~> optic
L.findWith(...optics)
chooses an index from an array-like
object through which the given optic, [...optics]
, has a
non-undefined
view and then returns an optic that focuses on that.
For example:
L.get(L.findWith("x"), [{z: 6}, {x: 9}, {y: 6}])
// 9
L.set(L.findWith("x"), 3, [{z: 6}, {x: 9}, {y: 6}])
// [ { z: 6 }, { x: 3 }, { y: 6 } ]
L.index(elemIndex) ~> lens
or elemIndex
L.index(elemIndex)
or just elemIndex
focuses on the element at the specified index of an array-like object.
When the focus is not a defined element of an array-like object, the view is undefined. When set to undefined, the element is removed from the resulting array, shifting all higher indices down by one. If the result would be an empty array, the whole result will be undefined.
For example:
L.set(2, "z", ["x", "y", "c"])
// [ 'x', 'y', 'z' ]
NOTE: There is a gotcha related to removing elements from array-like
objects. Namely, when the last element is removed, the result is undefined
rather than an empty array. This is by design, because this allows the removal
to propagate upwards. It is not uncommon, however, to have cases where removing
the last element from an array-like object must not remove the array itself.
Consider the following examples without L.required([])
:
L.remove(0, ["a", "b"])
// [ 'b' ]
L.remove(0, ["b"])
// undefined
L.remove(["elems", 0], {elems: ["b"], some: "thing"})
// { some: 'thing' }
Then consider the same examples with L.required([])
:
L.remove([L.required([]), 0], ["a", "b"])
// [ 'b' ]
L.remove([L.required([]), 0], ["b"])
// []
L.remove(["elems", L.required([]), 0], {elems: ["b"], some: "thing"})
// { elems: [], some: 'thing' }
There is a related gotcha with L.required
. Consider the
following example:
L.remove(L.required([]), [])
// []
L.get(L.required([]), [])
// undefined
In other words, L.required
works in both directions. Thanks to
the handling of undefined
within partial lenses, this is often not a problem,
but sometimes you need the "default" value both ways. In that case you can
use L.define
.
L.last ~> lens
L.last
focuses on the last element of an array-like object or
works like L.append
in case no such element exists.
Focusing on an empty array or undefined
results in returning undefined
. For
example:
L.get(L.last, [1,2,3])
// 3
L.get(L.last, [])
// undefined
Setting value with L.last
sets the last element of the object or appends the
value if the focused object is empty or undefined
. For example:
L.set(L.last, 5, [1,2,3])
// [1,2,5]
L.set(L.last, 1, [])
// [1]
L.slice(maybeBegin, maybeEnd) ~> lens
L.slice
focuses on a specified range of elements of
an array-like object. The range is determined like with the
standard
slice
method
of arrays, basically
undefined
gives the defaults: 0 for the begin and length for the end. For example:
L.get(L.slice(1, -1), [1,2,3,4])
// [ 2, 3 ]
L.set(L.slice(-2, undefined), [0], [1,2,3,4])
// [ 1, 2, 0 ]
Anything that is an instanceof Object
is considered an object by partial
lenses.
When writing through an optic that operates on objects, the result is always
either undefined
, in case the result would be empty, or a plain Object
. For
example:
function Custom(gold, silver, bronze) {
this.gold = gold
this.silver = silver
this.bronze = bronze
}
L.set("silver", -2, new Custom(1,2,3))
// { gold: 1, silver: -2, bronze: 3 }
When manipulating objects whose constructor is not
Object
, L.rewrite
can be used to convert the result to the
desired type, if necessary:
L.set([L.rewrite(objectTo(Custom)), "silver"], -2, new Custom(1,2,3))
// Custom { gold: 1, silver: -2, bronze: 3 }
Partial lenses also generally guarantee that the creation order of keys is preserved (even though the library used to print out evaluation results from code snippets might not preserve the creation order). For example:
for (const k in L.set("silver", -2, new Custom(1,2,3)))
console.log(k)
// gold
// silver
// bronze
When creating new objects, partial lenses generally ignore everything but own string keys. In particular, properties from the prototype chain are not copied and neither are properties with symbol keys.
L.prop(propName) ~> lens
or propName
L.prop(propName)
or just propName
focuses on the specified object property.
When the focus is not a defined property of an Object, the view is undefined. When set to undefined, the property is removed from the result. If the result would be an empty object, the whole result will be undefined. When setting or removing properties, the order of keys is preserved.
For example:
L.get("y", {x: 1, y: 2, z: 3})
// 2
L.set("y", -2, {x: 1, y: 2, z: 3})
// { x: 1, y: -2, z: 3 }
When manipulating objects whose constructor is not
Object
, L.rewrite
can be used to convert the result to the
desired type, if necessary:
L.set([L.rewrite(objectTo(XYZ)), "z"], 3, new XYZ(3,1,4))
// XYZ { x: 3, y: 1, z: 3 }
L.props(...propNames) ~> lens
L.props
focuses on a subset of properties of an object, allowing one to treat
the subset of properties as a unit. The view of L.props
is undefined
when
none of the properties is defined. Otherwise the view is an object containing a
subset of the properties. Setting through L.props
updates the whole subset of
properties, which means that any missing properties are removed if they
existed previously. When set, any extra properties are ignored.
L.set(L.props("x", "y"), {x: 4}, {x: 1, y: 2, z: 3})
// { x: 4, z: 3 }
Note that L.props(k1, ..., kN)
is equivalent to L.pick({[k1]: k1, ..., [kN]: kN})
.
L.removable(...propNames) ~> lens
L.removable
creates a lens that, when written through, replaces the whole
result with undefined
if none of the given properties is defined in the
written object. L.removable
is designed for making removal propagate through
objects.
Contrast the following examples:
L.remove("x", {x: 1, y: 2})
// { y: 2 }
L.remove([L.removable("x"), "x"], {x: 1, y: 2})
// undefined
Note that L.removable(...ps)
is roughly equivalent
to
L.rewrite(y => y instanceof Object && !L.get(L.props(...ps), y) ? undefined : y)
.
Also note that, in a composition, L.removable
is likely preceded
by L.valueOr
(or L.defaults
) like in
the tutorial example. In such a pair, the preceding lens gives a
default value when reading through the lens, allowing one to use such a lens to
insert new objects. The following lens then specifies that removing the then
focused property (or properties) should remove the whole object. In cases where
the shape of the incoming object is known, L.defaults
can
replace such a pair.
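Here is a small sketch of such a pair; the property name x is just for illustration:
L.set([L.valueOr({}), L.removable("x"), "x"], 1, undefined)
// { x: 1 }
L.remove([L.valueOr({}), L.removable("x"), "x"], {x: 1, y: 2})
// undefined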
L.matches(/.../) ~> lens
L.matches
, when given a regular expression without
the
global
flag,
/.../
, is a partial lens over the match. When there is no match, or the
target is not a string, then L.matches
will be read-only. See
also L.matches
.
WARNING: L.matches
is experimental and might be removed or changed before
the next major release.
For example:
L.set(L.matches(/\.[^./]+$/),
".txt",
"/dir/file.ext")
// '/dir/file.txt'
Note that when writing through L.matches
and the result would be an empty
string, ""
, the result will be undefined
to support propagating removal.
L.valueOr(valueOut) ~> lens
L.valueOr
is an asymmetric lens used to specify a default value in case the
focus is undefined
or null
. When set, L.valueOr
behaves like the identity
lens.
For example:
L.get(L.valueOr(0), null)
// 0
L.set(L.valueOr(0), 0, 1)
// 0
L.remove(L.valueOr(0), 1)
// undefined
L.pick({prop: lens, ...props}) ~> lens
L.pick
creates a lens out of the given possibly nested object template of
lenses and allows one to pick apart a data structure and then put it back
together. When viewed, an object is created, whose properties are obtained by
viewing through the lenses of the template. When set with an object, the
properties of the object are set to the context via the lenses of the template.
undefined
is treated as the equivalent of empty or non-existent in both
directions.
For example, let's say we need to deal with data and schema in need of some semantic restructuring:
const sampleFlat = {px: 1, py: 2, vx: 1.0, vy: 0.0}
We can use L.pick
to create a lens to pick apart the data and put it back
together into a more meaningful structure:
const sanitize = L.pick({pos: {x: "px", y: "py"},
vel: {x: "vx", y: "vy"}})
Note that in the template object the lenses are relative to the root focus of
L.pick
.
We now have a better structured view of the data:
L.get(sanitize, sampleFlat)
// { pos: { x: 1, y: 2 }, vel: { x: 1, y: 0 } }
That works in both directions:
L.modify([sanitize, "pos", "x"], R.add(5), sampleFlat)
// { px: 6, py: 2, vx: 1, vy: 0 }
NOTE: In order for a lens created with L.pick
to work in a predictable
manner, the given lenses must operate on independent parts of the data
structure. As a trivial example, in L.pick({x: "same", y: "same"})
both of
the resulting object properties, x
and y
, address the same property of the
underlying object, so writing through the lens will give unpredictable results.
Note that, when set, L.pick
simply ignores any properties that the given
template doesn't mention. Also note that the underlying data structure need not
be an object.
L.replace(maybeValueIn, maybeValueOut) ~> lens
L.replace(maybeValueIn, maybeValueOut)
, when viewed, replaces the value
maybeValueIn
with maybeValueOut
and vice versa when set.
For example:
L.get(L.replace(1, 2), 1)
// 2
L.set(L.replace(1, 2), 2, 0)
// 1
The main use case for replace
is to handle optional and required properties
and elements. In most cases, rather than using replace
, you will make
selective use of defaults
, required
and define
.
Isomorphisms are lenses with a kind of inverse. The focus of an isomorphism is the whole data structure rather than a part of it.
More specifically, a lens, iso
, is an isomorphism if the following equations
hold for all x
and y
in the domain and range, respectively, of the lens:
L.set(iso, L.get(iso, x), undefined) = x
L.get(iso, L.set(iso, y, undefined)) = y
The above equations mean that x => L.get(iso, x)
and y => L.set(iso, y, undefined)
are inverses of each other.
That is the general idea. Strictly speaking it is not required that the two functions are precisely inverses of each other. It can be useful to have "isomorphisms" that, when written through, actually change the data structure. For that reason the name "adapter", rather than "isomorphism", is sometimes used for the concept.
L.getInverse(isomorphism, maybeData) ~> maybeData
L.getInverse
views through an isomorphism in the inverse direction.
For example:
const expect = (p, f) => x => p(x) ? f(x) : undefined
const offBy1 = L.iso(expect(R.is(Number), R.inc),
expect(R.is(Number), R.dec))
L.getInverse(offBy1, 1)
// 0
Note that L.getInverse(iso, data)
is equivalent
to L.set(iso, data, undefined)
.
Also note that, while L.getInverse
makes most sense when used with an
isomorphism, it is valid to use L.getInverse
with partial lenses in general.
Doing so essentially constructs a minimal data structure that contains the given
value. For example:
L.getInverse("meaning", 42)
// { meaning: 42 }
L.iso(maybeData => maybeValue, maybeValue => maybeData) ~> isomorphism
L.iso
creates a new primitive isomorphism from the given pair of functions.
Usually the given functions should be inverses of each other, but that isn't
strictly necessary. The functions should also be partial so that when the input
doesn't match their expectation, the output is mapped to undefined
.
For example:
const reverseString = L.iso(expect(R.is(String), R.reverse),
expect(R.is(String), R.reverse))
L.modify([L.uriComponent,
L.json(),
"bottle",
0,
reverseString,
L.rewrite(R.join("")),
0],
R.toUpper,
"%7B%22bottle%22%3A%5B%22egassem%22%5D%7D")
// "%7B%22bottle%22%3A%22egasseM%22%7D"
L.inverse(isomorphism) ~> isomorphism
L.inverse
returns the inverse of the given isomorphism. Note that this
operation only makes sense on isomorphisms.
For example:
L.get(L.inverse(offBy1), 1)
// 0
L.identity ~> isomorphism
L.identity
is the identity element of lens composition and also the identity
isomorphism. L.identity
can also be seen as specifying an empty path.
Indeed, in this library, when used as an optic, L.identity
is equivalent to
[]
. The following equations characterize L.identity
:
L.get(L.identity, x) = x
L.modify(L.identity, f, x) = f(x)
L.compose(L.identity, l) = l
L.compose(l, L.identity) = l
L.complement ~> isomorphism
L.complement
is an isomorphism that performs logical negation of any
non-undefined
value when either read or written through.
For example:
L.set([L.complement, L.log()],
"Could be anything truthy",
"Also converted to bool")
// get false
// set "Could be anything truthy"
// false
L.is(value) ~> isomorphism
L.is
reads the given value as true
and everything else as false
and writes
true
as the given value and everything else as undefined
.
See here for an example.
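As a minimal sketch of the behaviour described above:
L.get(L.is("flag"), "flag")
// true
L.get(L.is("flag"), "something else")
// false
L.set(L.is("flag"), true, undefined)
// 'flag'
L.set(L.is("flag"), false, "flag")
// undefined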
L.uri ~> isomorphism
L.uri
is an isomorphism based on the
standard
decodeURI
and
encodeURI
functions.
L.uriComponent ~> isomorphism
L.uriComponent
is an isomorphism based on the
standard
decodeURIComponent
and
encodeURIComponent
functions.
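For example, reading through L.uriComponent decodes and writing encodes, as in this small sketch:
L.get(L.uriComponent, "hello%20world")
// 'hello world'
L.getInverse(L.uriComponent, "hello world")
// 'hello%20world'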
L.json({reviver, replacer, space}) ~> isomorphism
L.json({reviver, replacer, space})
returns an isomorphism based on the
standard
JSON.parse
and
JSON.stringify
functions.
The optional reviver
is passed
to
JSON.parse
and
the optional replacer
and space
are passed
to
JSON.stringify
.
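For example, as a small sketch:
L.get(L.json(), '{"answer": 42}')
// { answer: 42 }
L.getInverse(L.json(), {answer: 42})
// '{"answer":42}'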
L.seemsArrayLike(anything) ~> boolean
L.seemsArrayLike
determines whether the given value is an instanceof Object
that has a non-negative integer length
property or a string, which are not
Objects in JavaScript. In this library, such values are
considered array-like objects that can be manipulated with
various optics.
Note that this function is intentionally loose, which is also intentionally apparent from its name. JavaScript includes many array-like values, including normal arrays, typed arrays, and strings. Unfortunately there seems to be no simple way to directly and precisely test for all of them. Testing explicitly for every standard variation would be costly and might not cover user-defined types. Fortunately, optics target specific paths inside data structures rather than completely arbitrary values, which means that even a loose test can be accurate enough.
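For example, following the definition above:
L.seemsArrayLike("abc")
// true
L.seemsArrayLike({length: 2, 0: "a", 1: "b"})
// true
L.seemsArrayLike({x: 1})
// false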
Note that if you are new to lenses, then you probably want to start with the tutorial.
A case that we have run into multiple times is where we have an array of constant strings that we wish to manipulate as if it was a collection of boolean flags:
const sampleFlags = ["id-19", "id-76"]
Here is a parameterized lens that does just that:
const flag = id => [L.normalize(R.sortBy(R.identity)),
L.find(R.equals(id)),
L.is(id)]
Now we can treat individual constants as boolean flags:
L.get(flag("id-69"), sampleFlags)
// false
L.get(flag("id-76"), sampleFlags)
// true
In both directions:
L.set(flag("id-69"), true, sampleFlags)
// ['id-19', 'id-69', 'id-76']
L.set(flag("id-76"), false, sampleFlags)
// ['id-19']
It is not atypical to have UIs where one selection has an effect on other
selections. For example, you could have a UI where you can specify maximum
and initial
values for some measure and the idea is that the initial
value
cannot be greater than the maximum
value. One way to deal with this
requirement is to implement it in the lenses that are used to access the
maximum
and initial
values. This way the UI components that allow the user to edit those values
can be dumb and need not know about the restrictions.
One way to build such a lens is to use a combination of L.props
(or, in more complex cases, L.pick
) to limit the set of properties
to deal with, and L.rewrite
to insert the desired restriction
logic. Here is how it could look for the maximum
:
const maximum =
[L.props("maximum", "initial"),
L.rewrite(props => {
const {maximum, initial} = props
if (maximum < initial)
return {maximum, initial: maximum}
else
return props
}),
"maximum"]
Now:
L.set(maximum,
5,
{maximum: 10, initial: 8, something: "else"})
// {maximum: 5, initial: 5, something: "else"}
A typical element of UIs that display a list of selectable items is a checkbox to select or unselect all items. For example, the TodoMVC spec includes such a checkbox. The state of a checkbox is a single boolean. How do we create a lens that transforms a collection of booleans into a single boolean?
The state of a todo list contains a boolean completed
flag per item:
const sampleTodos = [{completed: true}, {completed: false}, {completed: true}]
We can address those flags with a traversal:
const completedFlags = [L.elems, "completed"]
To compute a single boolean out of a traversal over booleans we can use
the L.and
fold and use that to define a lens parameterized over flag
traversals using L.foldTraversalLens
:
const selectAll = L.foldTraversalLens(L.and)
Now we can say, for example:
L.get(selectAll(completedFlags), sampleTodos)
// false
L.set(selectAll(completedFlags), true, sampleTodos)
// [{completed: true}, {completed: true}, {completed: true}]
As an exercise define unselectAll
using the L.or
fold. How does it
differ from selectAll
?
Binary search trees might initially seem to be outside the scope of definable lenses. However, given basic BST operations, one could easily wrap them as a primitive partial lens. But could we leverage lens combinators to build a BST lens more compositionally?
We can. The L.choose
combinator allows for dynamic construction
of lenses based on examining the data structure being manipulated.
Inside L.choose
we can write the ordinary BST logic to pick the
correct branch based on the key in the currently examined node and the key that
we are looking for. So, here is our first attempt at a BST lens:
const searchAttempt = key => L.lazy(rec => {
const smaller = ["smaller", rec]
const greater = ["greater", rec]
const found = L.defaults({key})
return L.choose(n => {
if (!n || key === n.key)
return found
return key < n.key ? smaller : greater
})
})
const valueOfAttempt = key => [searchAttempt(key), "value"]
Note that we also make use of the L.lazy
combinator to create a
recursive lens with a cyclic representation.
This actually works to a degree. We can use the valueOfAttempt
lens
constructor to build a binary tree. Here is a little helper to build a tree
from pairs:
const fromPairs =
R.reduce((t, [k, v]) => L.set(valueOfAttempt(k), v, t), undefined)
Now:
const sampleBST = fromPairs([[3, "g"], [2, "a"], [1, "m"], [4, "i"], [5, "c"]])
sampleBST
// { key: 3,
// value: 'g',
// smaller: { key: 2, value: 'a', smaller: { key: 1, value: 'm' } },
// greater: { key: 4, value: 'i', greater: { key: 5, value: 'c' } } }
However, the above searchAttempt
lens constructor does not maintain the BST
structure when values are being removed:
L.remove(valueOfAttempt(3), sampleBST)
// { key: 3,
// smaller: { key: 2, value: 'a', smaller: { key: 1, value: 'm' } },
// greater: { key: 4, value: 'i', greater: { key: 5, value: 'c' } } }
How do we fix this? We could check and transform the data structure to a BST
after changes. The L.rewrite
combinator can be used for that
purpose. Here is a naïve rewrite to fix a tree after value removal:
const naiveBST = L.rewrite(n => {
if (undefined !== n.value) return n
const s = n.smaller, g = n.greater
if (!s) return g
if (!g) return s
return L.set(search(s.key), s, g)
})
Here is a working search
lens and a valueOf
lens constructor:
const search = key => L.lazy(rec => {
const smaller = ["smaller", rec]
const greater = ["greater", rec]
const found = L.defaults({key})
return [naiveBST, L.choose(n => {
if (!n || key === n.key)
return found
return key < n.key ? smaller : greater
})]
})
const valueOf = key => [search(key), "value"]
Now we can also remove values from a binary tree:
L.remove(valueOf(3), sampleBST)
// { key: 4,
// value: 'i',
// greater: { key: 5, value: 'c' },
// smaller: { key: 2, value: 'a', smaller: { key: 1, value: 'm' } } }
As an exercise, you could improve the rewrite to better maintain balance.
Perhaps you might even enhance it to maintain a balance condition such
as AVL
or Red-Black. Another
worthy exercise would be to make it so that the empty binary tree is null
rather than undefined
.
What about traversals over BSTs? We can use
the L.branch
combinator to define an in-order traversal over the
values of a BST:
const values = L.lazy(rec => [
L.optional,
naiveBST,
L.branch({smaller: rec,
value: L.identity,
greater: rec})])
Given a binary tree sampleBST
we can now manipulate it as a whole. For
example:
L.join("-", values, sampleBST)
// 'm-a-g-i-c'
L.modify(values, R.toUpper, sampleBST)
// { key: 3,
// value: 'G',
// smaller: { key: 2, value: 'A', smaller: { key: 1, value: 'M' } },
// greater: { key: 4, value: 'I', greater: { key: 5, value: 'C' } } }
L.remove([values, L.when(x => x > "e")], sampleBST)
// { key: 5, value: 'c', smaller: { key: 2, value: 'a' } }
Immutable.js is a popular library providing immutable data structures. As argued in Lenses with Immutable.js it can be useful to be able to manipulate Immutable.js data structures using optics.
When interfacing external libraries with partial lenses one does need to consider whether and how to support partiality. Partial lenses allow one to insert new and remove existing elements rather than just view and update existing elements.
List indexing
Here is a primitive partial lens for indexing a List written using L.lens:
const getList = i => xs => Immutable.List.isList(xs) ? xs.get(i) : undefined
const setList = i => (x, xs) => {
if (!Immutable.List.isList(xs))
xs = Immutable.List()
if (x !== undefined)
return xs.set(i, x)
xs = xs.delete(i)
return xs.size ? xs : undefined
}
const idxList = i => L.lens(getList(i), setList(i))
Note how the above uses isList
to check the input. When viewing, in case the
input is not a List
, the proper result is undefined
. When updating the
proper way to handle a non-List
is to treat it as empty and also to replace a
resulting empty list with undefined
. Also, when updating, we treat
undefined
as a request to delete
rather than set
.
We can now view existing elements:
const sampleList = Immutable.List(["a", "l", "i", "s", "t"])
L.get(idxList(2), sampleList)
// 'i'
Update existing elements:
L.modify(idxList(1), R.toUpper, sampleList)
// List [ "a", "L", "i", "s", "t" ]
Remove existing elements:
L.remove(idxList(0), sampleList)
// List [ "l", "i", "s", "t" ]
And removing the last element propagates removal:
L.remove(["elems", idxList(0)],
{elems: Immutable.List(["x"]), look: "No elems!"})
// { look: 'No elems!' }
We can also create lists from non-lists:
L.set(idxList(0), "x", undefined)
// List [ "x" ]
And we can also append new elements:
L.set(idxList(5), "!", sampleList)
// List [ "a", "l", "i", "s", "t", "!" ]
Consider what happens when the index given to idxList
points further beyond
the last element. Both the L.index
lens and the above lens add
undefined
values, which is not ideal with partial lenses, because of the
special treatment of undefined
. In practice, however, it is not typical to
set
elements except to append just after the last element.
Fortunately we do not need Immutable.js data structures to provide a compatible
partial
traverse
function
to support traversals, because it is also possible to implement
traversals simply by providing suitable isomorphisms between Immutable.js data
structures and JSON. Here is a partial isomorphism between
List
and arrays:
const fromList = xs => Immutable.List.isList(xs) ? xs.toArray() : undefined
const toList = xs => R.is(Array, xs) && xs.length ? Immutable.List(xs) : undefined
const isoList = L.iso(fromList, toList)
So, now we can compose a traversal over List
as:
const seqList = [isoList, L.elems]
And all the usual operations work as one would expect, for example:
L.remove([seqList, L.when(c => c < "i")], sampleList)
// List [ 'l', 's', 't' ]
And:
L.joinAs(R.toUpper,
"",
[seqList, L.when(c => c <= "i")],
sampleList)
// 'AI'
L.choose
Consider the following example:
L.choose(x => Array.isArray(x) ? [L.elems, "data"] : "data")
A performance issue with the above is that each time it is used on an array, a
new composition, [L.elems, "data"]
, is allocated. Performance may be improved
by moving the allocation outside of L.choose
:
const onArray = [L.elems, "data"]
L.choose(x => Array.isArray(x) ? onArray : "data")
The distribution of this library includes
a
prebuilt and minified browser bundle.
However, this library is not designed to be primarily used via that bundle.
Rather, this library is bundled with Rollup, uses
/*#__PURE__*/
annotations to
help UglifyJS do better dead code
elimination, and uses process.env.NODE_ENV
to detect "production"
mode to
discard some warnings and error checks. This means that when using Rollup
with replace
and uglify plugins to build
browser bundles, the generated bundles will basically only include what you use
from this library.
For best results, increasing the number of compression passes may allow UglifyJS to eliminate more dead code. Here is a sample snippet from a Rollup config:
import replace from "rollup-plugin-replace"
import uglify from "rollup-plugin-uglify"
// ...
export default {
plugins: [
replace({
"process.env.NODE_ENV": JSON.stringify("production")
}),
// ...
uglify({
compress: {
passes: 3
}
})
]
}
Consider the following REPL session using Ramda:
R.set(R.lensPath(["x", "y"]), 1, {})
// { x: { y: 1 } }
R.set(R.compose(R.lensProp("x"), R.lensProp("y")), 1, {})
// TypeError: Cannot read property 'y' of undefined
R.view(R.lensPath(["x", "y"]), {})
// undefined
R.view(R.compose(R.lensProp("x"), R.lensProp("y")), {})
// TypeError: Cannot read property 'y' of undefined
R.set(R.lensPath(["x", "y"]), undefined, {x: {y: 1}})
// { x: { y: undefined } }
R.set(R.compose(R.lensProp("x"), R.lensProp("y")), undefined, {x: {y: 1}})
// { x: { y: undefined } }
One might assume that R.lensPath([p0, ...ps])
is equivalent to
R.compose(R.lensProp(p0), ...ps.map(R.lensProp))
, but that is not the case.
With partial lenses you can robustly compose a path lens from prop
lenses L.compose(L.prop(p0), ...ps.map(L.prop))
or just use the
shorthand notation [p0, ...ps]
. In JavaScript, missing (and
mismatching) data can be mapped to undefined
, which is what partial lenses
also do, because undefined
is not a valid JSON value.
When a part of a data structure is missing, an attempt to view it returns
undefined
. When a part is missing, setting it to a defined value inserts the
new part. Setting an existing part to undefined
removes it.
There are several lens and optics libraries for JavaScript. In this section I'd like to very briefly elaborate on a number of design choices made during the course of developing this library.
Making all optics partial allows optics to not only view and update existing elements, but also to insert, replace (as in replace with data of different type) and remove elements and to do so in a seamless and efficient way. In a library based on total lenses, one needs to e.g. explicitly compose lenses with prisms to deal with partiality. This not only makes the optic compositions more complex, but can also have a significant negative effect on performance.
The downside of implicit partiality is the potential to create incorrect optics that signal errors later than when using total optics.
JSON is the data-interchange format of choice today. By being able to effectively and efficiently manipulate JSON data structures directly, one can avoid using special internal representations of data and make things simpler (e.g. no need to convert from JSON to efficient immutable collections and back).
undefined
undefined
is a natural choice in JavaScript, especially when dealing with
JSON, to represent nothingness. Some libraries use null
, but that is arguably
a poor choice, because null
is a valid JSON value. Some libraries implement
special Maybe
types, but the benefits do not seem worth the trouble. First of
all, undefined
already exists in JavaScript and is not a valid JSON value.
Inventing a new value to represent nothingness doesn't seem to add much. OTOH,
wrapping values with Just
objects introduces a significant performance
overhead due to extra allocations. Operations with optics do not otherwise
necessarily require large numbers of allocations and can be made highly
efficient.
Not having an explicit Just
object means that dealing with values such as
Just Nothing
requires special consideration.
Aside from the brevity, allowing strings and non-negative integers to be directly used as optics allows one to avoid allocating closures for such optics. This can provide significant time and, more importantly, space savings in applications that create large numbers of lenses to address elements in data structures.
The downside of allowing such special values as optics is that the internal implementation needs to be careful to deal with them at any point a user given value needs to be interpreted as an optic.
Aside from the brevity, treating an array of optics as a composition allows the
library to be optimized to deal with simple paths highly efficiently and
eliminate the need for separate primitives
like assocPath
and dissocPath
for performance reasons.
Client code can also manipulate such simple paths as data.
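For example, a path really is just an array that client code can slice before using it as an optic; the names below are only for illustration:
const pathToY = ["x", 0, "y"]
L.get(pathToY, {x: [{y: 42}]})
// 42
L.get(pathToY.slice(0, -1), {x: [{y: 42}]})
// { y: 42 }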
One interesting consequence of partiality is that it becomes possible to invert isomorphisms without explicitly making it possible to extract the forward and backward functions from an isomorphism. A simple internal implementation based on functors and applicatives seems to be expressive enough for a wide variety of operations.
L.branch
By providing combinators for creating new traversals, lenses and isomorphisms,
client code need not depend on the internal implementation of optics. The
current version of this library exposes the internal implementation
via L.toFunction
, but it would not be unreasonable to not
provide such an operation. Only very few applications need to know the internal
representation of optics.
Indexing in partial lenses is unnested, very simple and based on the indices and keys of the underlying data structures. When indexing was added, it essentially introduced no performance degradation, but since then a few operations have been added that do require extra allocations to support indexing. It is also possible to compose optics so as to create nested indices or paths, but currently no combinator is directly provided for that.
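As a rough sketch of how such indices surface in user functions, here via L.collectAs over L.elems:
L.collectAs((value, index) => [index, value], L.elems, ["a", "b"])
// [ [ 0, 'a' ], [ 1, 'b' ] ]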
The algebraic structures used in partial lenses follow the Static Land specification rather than the Fantasy Land specification. Static Land does not require wrapping values in objects, which translates to a significant performance advantage throughout the library, because fewer allocations are required.
Concern for performance has been a part of the work on partial lenses for some time. The basic principles can be summarized in order of importance:
Here are a few benchmark results on partial lenses (as L
version 11.7.1) and
some roughly equivalent operations using Ramda (as R
version 0.23.0), Ramda Lens (as P
version 0.1.1), Flunc Optics (as O
version
0.0.2), Optika (as K
version 0.0.2),
and lodash.get (as _get
version
4.4.2). As always with benchmarks, you should take these numbers with a pinch
of salt and preferably try and measure your actual use cases!
22,340,825/s 1.00x L.get(L_findHint_id_5000, ids)
6,810,637/s 1.00x R.reduceRight(add, 0, xs100)
423,028/s 16.10x L.foldr(add, 0, L.elems, xs100)
4,105/s 1659.26x O.Fold.foldrOf(O.Traversal.traversed, addC, 0, xs100)
11,245/s 1.00x R.reduceRight(add, 0, xs100000)
55/s 203.84x L.foldr(add, 0, L.elems, xs100000)
0/s Infinityx O.Fold.foldrOf(O.Traversal.traversed, addC, 0, xs100000) -- STACK OVERFLOW
693,671/s 1.00x L.foldl(add, 0, L.elems, xs100)
210,310/s 3.30x R.reduce(add, 0, xs100)
2,846/s 243.70x O.Fold.foldlOf(O.Traversal.traversed, addC, 0, xs100)
3,408,795/s 1.00x L.sum(L.elems, xs100)
2,751,043/s 1.24x K.traversed().sumOf(xs100)
498,397/s 6.84x L.concat(Sum, L.elems, xs100)
189,138/s 18.02x xs100.reduce((a, b) => a + b, 0)
126,016/s 27.05x R.sum(xs100)
22,786/s 149.60x P.sumOf(P.traversed, xs100)
4,354/s 782.85x O.Fold.sumOf(O.Traversal.traversed, xs100)
566,456/s 1.00x L.maximum(L.elems, xs100)
3,268/s 173.35x O.Fold.maximumOf(O.Traversal.traversed, xs100)
137,786/s 1.00x L.sum([L.elems, L.elems, L.elems], xsss100)
134,226/s 1.03x L.concat(Sum, [L.elems, L.elems, L.elems], xsss100)
4,470/s 30.83x P.sumOf(R.compose(P.traversed, P.traversed, P.traversed), xsss100)
855/s 161.08x O.Fold.sumOf(R.compose(O.Traversal.traversed, O.Traversal.traversed, O.Traversal.traversed), xsss100)
3,063,115/s 1.00x K.traversed().arrayOf(xs100)
264,515/s 11.58x L.collect(L.elems, xs100)
28,117/s 108.94x xs100.map(I.id)
3,431/s 892.89x O.Fold.toListOf(O.Traversal.traversed, xs100)
111,515/s 1.00x L.collect([L.elems, L.elems, L.elems], xsss100)
27,156/s 4.11x K.traversed().traversed().traversed().arrayOf(xsss100)
26,489/s 4.21x (() => { let acc = []; xsss100.forEach(x0 => { x0.forEach(x1 => { acc = acc.concat(x1); })}); return acc; })()
9,809/s 11.37x R.chain(R.chain(R.identity), xsss100)
811/s 137.48x O.Fold.toListOf(R.compose(O.Traversal.traversed, O.Traversal.traversed, O.Traversal.traversed), xsss100)
65,860/s 1.00x R.flatten(xsss100)
32,578/s 2.02x L.collect(flatten, xsss100)
15,178,735/s 1.00x L.modify(L.elems, inc, xs)
1,908,401/s 7.95x R.map(inc, xs)
866,990/s 17.51x xs.map(inc)
419,944/s 36.14x P.over(P.traversed, inc, xs)
414,330/s 36.63x K.traversed().over(xs, inc)
386,661/s 39.26x O.Setter.over(O.Traversal.traversed, inc, xs)
421,617/s 1.00x L.modify(L.elems, inc, xs1000)
118,199/s 3.57x R.map(inc, xs1000)
2,821/s 149.48x xs1000.map(inc)
2,804/s 150.35x K.traversed().over(xs1000, inc)
381/s 1105.29x O.Setter.over(O.Traversal.traversed, inc, xs1000) -- QUADRATIC
364/s 1157.54x P.over(P.traversed, inc, xs1000) -- QUADRATIC
154,255/s 1.00x L.modify([L.elems, L.elems, L.elems], inc, xsss100)
10,075/s 15.31x R.map(R.map(R.map(inc)), xsss100)
8,021/s 19.23x xsss100.map(x0 => x0.map(x1 => x1.map(inc)))
7,930/s 19.45x K.traversed().traversed().traversed().over(xsss100, inc)
3,521/s 43.82x P.over(R.compose(P.traversed, P.traversed, P.traversed), inc, xsss100)
2,901/s 53.18x O.Setter.over(R.compose(O.Traversal.traversed, O.Traversal.traversed, O.Traversal.traversed), inc, xsss100)
35,926,652/s 1.00x L.get(1, xs)
26,450,788/s 1.36x _get(xs, 1)
3,911,048/s 9.19x R.nth(1, xs)
1,598,498/s 22.48x R.view(l_1, xs)
775,451/s 46.33x K.idx(1).get(xs)
55,427,847/s 1.00x L_get_1(xs)
20,448,553/s 2.71x L.get(1)(xs)
3,845,738/s 14.41x R_nth_1(xs)
2,713,357/s 20.43x R.nth(1)(xs)
23,951,414/s 1.00x L.set(1, 0, xs)
6,806,441/s 3.52x R.update(1, 0, xs)
5,872,301/s 4.08x (() => { let ys = xs.slice(); ys[1] = 0; return ys; })()
983,695/s 24.35x R.set(l_1, 0, xs)
820,935/s 29.18x xs.map((x, i) => i === 1 ? 0 : x)
536,599/s 44.64x K.idx(1).set(xs, 0)
39,659,146/s 1.00x L.get("y", xyz)
26,664,449/s 1.49x R.prop("y", xyz)
10,143,058/s 3.91x _get(xyz, "y")
2,549,710/s 15.55x R.view(l_y, xyz)
703,362/s 56.39x K.key("y").get(xyz)
68,949,308/s 1.00x L_get_y(xyz)
22,483,723/s 3.07x R_prop_y(xyz)
19,751,691/s 3.49x L.get("y")(xyz)
9,009,383/s 7.65x R.prop("y")(xyz)
11,569,677/s 1.00x R.assoc("y", 0, xyz)
11,093,933/s 1.04x L.set("y", 0, xyz)
1,344,722/s 8.60x R.set(l_y, 0, xyz)
557,467/s 20.75x K.key("y").set(xyz, 0)
15,262,027/s 1.00x L.get([0,"x",0,"y"], axay)
14,907,248/s 1.02x R.path([0,"x",0,"y"], axay)
12,702,608/s 1.20x _get(axay, [0,"x",0,"y"])
2,381,364/s 6.41x R.view(l_0x0y, axay)
486,335/s 31.38x R.view(l_0_x_0_y, axay)
208,374/s 73.24x K.idx(0).key("x").idx(0).key("y").get(axay)
3,976,228/s 1.00x L.set([0,"x",0,"y"], 0, axay)
963,482/s 4.13x R.assocPath([0,"x",0,"y"], 0, axay)
539,717/s 7.37x R.set(l_0x0y, 0, axay)
336,600/s 11.81x R.set(l_0_x_0_y, 0, axay)
166,457/s 23.89x K.idx(0).key("x").idx(0).key("y").set(axay, 0)
3,940,376/s 1.00x L.modify([0,"x",0,"y"], inc, axay)
608,097/s 6.48x R.over(l_0x0y, inc, axay)
354,775/s 11.11x R.over(l_0_x_0_y, inc, axay)
170,534/s 23.11x K.idx(0).key("x").idx(0).key("y").over(axay, inc)
24,865,101/s 1.00x L.remove(1, xs)
3,236,318/s 7.68x R.remove(1, 1, xs)
10,992,174/s 1.00x L.remove("y", xyz)
2,699,520/s 4.07x R.dissoc("y", xyz)
17,152,581/s 1.00x L.get(["x","y","z"], xyzn)
15,482,471/s 1.11x _get(xyzn, ["x", "y", "z"])
14,970,587/s 1.15x R.path(["x","y","z"], xyzn)
2,385,382/s 7.19x R.view(l_xyz, xyzn)
849,966/s 20.18x R.view(l_x_y_z, xyzn)
268,601/s 63.86x K.key("x").key("y").key("z").get(xyzn)
164,899/s 104.02x O.Getter.view(o_x_y_z, xyzn)
4,684,524/s 1.00x L.set(["x","y","z"], 0, xyzn)
1,901,101/s 2.46x R.assocPath(["x","y","z"], 0, xyzn)
855,120/s 5.48x R.set(l_xyz, 0, xyzn)
585,778/s 8.00x R.set(l_x_y_z, 0, xyzn)
225,883/s 20.74x K.key("x").key("y").key("z").set(xyzn, 0)
212,988/s 21.99x O.Setter.set(o_x_y_z, 0, xyzn)
1,040,171/s 1.00x R.find(x => x > 3, xs100)
586,345/s 1.77x L.selectAs(x => x > 3 ? x : undefined, L.elems, xs100)
2,768/s 375.80x O.Fold.findOf(O.Traversal.traversed, x => x > 3, xs100)
7,296,500/s 1.00x L.selectAs(x => x < 3 ? x : undefined, L.elems, xs100)
3,802,052/s 1.92x R.find(x => x < 3, xs100)
2,723/s 2679.70x O.Fold.findOf(O.Traversal.traversed, x => x < 3, xs100) -- NO SHORTCUT EVALUATION
3,934,177/s 1.00x L.remove(50, xs100)
1,828,554/s 2.15x R.remove(50, 1, xs100)
4,290,425/s 1.00x L.set(50, 2, xs100)
1,686,594/s 2.54x R.update(50, 2, xs100)
681,526/s 6.30x R.set(l_50, 2, xs100)
473,739/s 9.06x K.idx(50).set(xs100, 2)
Various operations on partial lenses have been optimized for common cases, but there is definitely a lot of room for improvement. The goal is to make partial lenses fast enough that performance isn't the reason why you might not want to use them.
See bench.js for details.
As said in the first sentence of this document, lenses are convenient for performing updates on individual elements of immutable data structures. Having abilities such as nesting, adapting, recursing and restructuring using lenses makes the notion of an individual element quite flexible and, even further, traversals make it possible to selectively target zero or more elements of non-trivial data structures in a single operation. It can be tempting to try to do everything with lenses, but that will likely only lead to misery. It is important to understand that lenses are just one of many functional abstractions for working with data structures and sometimes other approaches can lead to simpler or easier solutions. Zippers, for example, are, in some ways, less principled and can implement queries and transforms that are outside the scope of lenses and traversals.
One type of use case that we have run into multiple times, and that falls outside the sweet spot of lenses, is performing uniform transforms over whole data structures.
One approach to making such whole data structure spanning updates is to use a simple bottom-up transform. Here is a simple implementation for JSON based on ideas from the Uniplate library:
const descend = (w2w, w) => R.is(Object, w) ? R.map(w2w, w) : w
const substUp = (h2h, w) => descend(h2h, descend(w => substUp(h2h, w), w))
const transform = (w2w, w) => w2w(substUp(w2w, w))
transform(w2w, w)
basically just performs a single-pass bottom-up transform
using the given function w2w
over the given data structure w
. Suppose we
are given the following data:
const sampleBloated = {
just: "some",
extra: "crap",
that: [
"we",
{want: "to",
filter: ["out"],
including: {the: "following",
extra: true,
fields: 1}}]
}
We can now remove the extra
fields
like this:
transform(R.ifElse(R.allPass([R.is(Object), R.complement(R.is(Array))]),
L.remove(L.props("extra", "fields")),
R.identity),
sampleBloated)
// { just: 'some',
// that: [ 'we', { want: 'to',
// filter: ['out'],
// including: {the: 'following'} } ] }
Lenses are an old concept and there are dozens of academic papers on lenses and dozens of lens libraries for various languages. Below are just a few links—feel free to suggest more!
Contributions in the form of pull requests are welcome!
Before starting work on a major PR, it is a good idea to open an issue or maybe ask on gitter whether the contribution sounds like something that should be added to this library.
If you allow us to make changes to your PR, it can make the process smoother: Allowing changes to a pull request branch created from a fork. We also welcome starting the PR sooner, before it is ready to be merged, rather than later so we know what is going on and can help.
Aside from the code changes, a PR should also include tests, and documentation.
When implementing partial optics it is important to consider the behavior of the optics when the focus doesn't match the expectation of the optic and also whether the optic should propagate removal. Such behavior should also be tested.
It is best not to commit changes to generated files in PRs. Some of the files
in docs
, lib
and dist
directories are generated.
The prepare
script is the usual way to build after changes:
npm run prepare
It builds the dist
files and runs the lint rules and tests. You can also run
the scripts for those subtasks separately.
The tests in this library are written in an atypical manner.
First of all, the tests are written as strings that are eval
ed. This way one
doesn't need to invent names or write prose for tests.
There is also a special test that checks the arity of the exports. You'll notice it immediately if you add an export.
The test/types.js
file contains contract or type predicates
for the library primitives. Those are also used when running tests to check
that the implementation matches the contracts.
When you implement a new combinator, you will need to also add a type contract and a shadow implementation for the primitive.
The docs
folder contains the generated documentation. You can open the
file locally:
open docs/index.html
To actually build the docs (translate the markdown to html), you can run
npm run docs
or you can use the watch
npm run docs-watch
which builds the docs when you save README.md (you will need to manually refresh the browser).
browser).