migratortron
Clone data or complete sites from one Contensis project to any other!
This tool is available as a REST API, a CLI, or as a library for use in your own projects.
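The request payload drives everything below. As a rough illustration only, a payload might carry the source and target project connection details plus the option flags described later on this page (the property names here are assumptions inferred from this document, not a definitive schema):

```typescript
// Hypothetical payload sketch - property names are assumptions inferred from this page,
// not a documented schema for the migratortron API.
interface ProjectConnection {
  rootUrl: string;      // the Contensis instance, e.g. https://cms-example.cloud.contensis.com
  projectId: string;    // the project alias within that instance
  clientId: string;     // Management API credentials
  sharedSecret: string;
}

interface MigrationPayload {
  source: ProjectConnection;
  targets: ProjectConnection[];
  query?: unknown;           // an entry search to select what gets migrated
  transformGuids?: boolean;  // set false to keep the original entry guids
  assetHostname?: string;    // prepended to sys.uri when downloading asset files
  concurrency?: number;      // parallel "threads" used when committing entries (default 2)
  download?: boolean;        // continue into DownloadAssetContent
  commit?: boolean;          // continue into UploadAssetContent / CommitEntries
}
```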
Create a `ContensisRepository` connection to the source project and all target projects from data supplied in the payload. Build a `transformGuid` function with alias and project 'baked in' - required to reliably seed our deterministic guids throughout the process.
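As a rough sketch of the deterministic guid idea (illustrative, not the library's implementation), a `transformGuid` function could derive a stable, uuid-shaped id from the source guid with the target alias and project "baked in", so re-running a migration always maps the same source entry to the same target id:

```typescript
import { createHash } from 'crypto';

// Illustrative only: derive a repeatable, uuid-shaped id from the source guid
// plus the target alias and project captured when the function was built.
const buildTransformGuid = (alias: string, projectId: string) => (sourceGuid: string) => {
  const hex = createHash('sha256').update(`${alias}:${projectId}:${sourceGuid}`).digest('hex');
  return [hex.slice(0, 8), hex.slice(8, 12), hex.slice(12, 16), hex.slice(16, 20), hex.slice(20, 32)].join('-');
};

const transformGuid = buildTransformGuid('example-dev', 'website');
// The same source guid always produces the same target guid.
transformGuid('e798df96-1de3-4b08-a270-3787b902a580');
```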
`HydrateContensisRepositories` will hydrate each of these repositories with Projects, Content Types and Components from each Contensis instance, finding dependencies and examining relationships to build complete content models.
`GetEntries` will search for entries and load them into the source repository, while searching for the same entries in each target repository (transforming any guids supplied in the query so we can match entries previously created by the tool). Each found entry will be examined for any dependent entries or asset entries and their guids returned. These dependent entries will also be searched for in each repository by their guid to ensure we have all migration dependencies loaded into each repository.
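The dependency discovery can be pictured as a walk over each entry's field data; the helper below is a simplified sketch (not the library's code) that collects the `sys.id` of any linked entry or asset, including links nested inside arrays and components:

```typescript
// Illustrative only: recursively collect sys.id values from entry/asset links
// anywhere in an entry's field data (dependencies may also appear inside arrays).
const collectDependencyIds = (value: unknown, found = new Set<string>()): Set<string> => {
  if (Array.isArray(value)) {
    value.forEach((item) => collectDependencyIds(item, found));
  } else if (value && typeof value === 'object') {
    const sysId = (value as { sys?: { id?: unknown } }).sys?.id;
    if (typeof sysId === 'string') found.add(sysId);
    Object.values(value).forEach((item) => collectDependencyIds(item, found));
  }
  return found;
};
```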
`BuildEntries` will create a `MigrateEntry` in each target repository for each entry in the source repository. This `MigrateEntry` contains two entries, a `firstPassEntry` and a `finalEntry`.
Each entry is built by looping through all fields in a matched content type and transforming or censoring the field value, depending on the field type and whether we are building an entry ready for a first pass. The same applies to any component fields found, including nested components.
A `firstPassEntry` will have any found dependencies stripped out and a `finalEntry` will have only a few types of field tweaked to prevent errors when loading into Contensis. Each field of the built entry is examined for any asset or entry links, and any guid found is transformed using a prebuilt `transformGuid` function attached to each target repository. This built entry is compared to the source entry and a status is set; this can be `create`, `update` or `two-pass`. We can keep track of the entry by its `originalId` or transformed target `id`. `BuildAssetEntries` does a similar job when used with asset entries.
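To make that shape concrete, a `MigrateEntry` can be pictured roughly as below (an assumed model inferred from this description, not the library's actual types):

```typescript
// Assumed shape, inferred from the description above rather than the library's types.
type MigrateStatus = 'create' | 'update' | 'two-pass';

interface MigrateEntry {
  originalId: string;    // the source entry guid
  targetId: string;      // the transformed guid used in the target repository
  status: MigrateStatus; // set by comparing the built entry with the source
  firstPassEntry: Record<string, unknown>; // dependencies stripped out, created unpublished
  finalEntry: Record<string, unknown>;     // the complete entry with all links in place
}
```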
Language differences between source and target are handled by replacing the language keys inside each built content type or entry with the default language from the target project. So you can load content types and entries from a source project with the `en-GB` language into a target project with the `es-ES` language code, and expect everything to be created with the `es-ES` language code set. The same language replacements are made when comparing source and target for differences to determine if we need to make any updates to existing content types or entries.
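A minimal sketch of that language-key replacement (illustrative, not the library's code), re-keying each localised field value to the target project's default language:

```typescript
// Illustrative only: re-key localised field values to the target project's default language.
const replaceLanguageKeys = (
  fields: Record<string, Record<string, unknown>>,
  targetLanguage: string
) =>
  Object.fromEntries(
    Object.entries(fields).map(([fieldId, localised]) => [
      fieldId,
      Object.fromEntries(Object.entries(localised).map(([, value]) => [targetLanguage, value])),
    ])
  );

// { title: { 'en-GB': 'Hello' } } becomes { title: { 'es-ES': 'Hello' } }
replaceLanguageKeys({ title: { 'en-GB': 'Hello' } }, 'es-ES');
```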
Each field of every entry will be checked for dependencies and entries that do not already exist will be marked for a "two-pass" migration.
We will create the entry in an unpublished state with all dependency fields removed; this allows any potential dependencies of the entry to be created before we attempt to create the complete entry with all of these links in place.
After all entries marked for create or update have completed, we will make a second-pass over the entries marked for a "two-pass" migration, this time creating the final entry with all dependencies present. The entry will then be published.
If a `download` or `commit` flag has been provided the process will continue; otherwise the fetched repository data is mapped into a condensed response format and returned to the caller.
`DownloadAssetContent` will examine any asset entries found in the source repository and - by concatenating `assetHostname` from the payload with the `sys.uri` field - download any files to the local file system that have been assigned a `create` or `update` status in any of the target repositories.
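A rough sketch of that download step (the helper and paths are illustrative; the only detail taken from this page is that the URL is `assetHostname` concatenated with `sys.uri`):

```typescript
import { mkdir, writeFile } from 'fs/promises';
import { dirname, join } from 'path';

// Illustrative only: download one asset file to the local file system by
// concatenating the payload's assetHostname with the asset entry's sys.uri field.
const downloadAsset = async (assetHostname: string, sysUri: string, downloadRoot: string) => {
  const res = await fetch(`${assetHostname}${sysUri}`);
  if (!res.ok) throw new Error(`Failed to download ${sysUri}: ${res.status}`);
  const filePath = join(downloadRoot, sysUri);
  await mkdir(dirname(filePath), { recursive: true });
  await writeFile(filePath, Buffer.from(await res.arrayBuffer()));
};
```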
If a `commit` flag has been provided the process will continue; otherwise the fetched repository data is mapped into a condensed response format and returned to the caller.
`UploadAssetContent` will examine any asset entries found in the source repository that have been assigned a `create` or `update` status in any of the target repositories and upload the asset, creating or updating the asset entry at the same time.
`CommitEntries` will load each `MigrateEntry` in each target repository into a promise queue. The queue is then executed in parallel with two execution threads by default; this can be overridden by setting the `concurrency` option in the payload. Each entry will be loaded in order of its migrate status: creating any new entries first, updating existing entries second, and finally updating any stub entries created as part of a two-pass migration with all their dependencies. All entries are published except for entry stubs.
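A simplified sketch of a promise queue with a concurrency limit of two (illustrative only; the library may well use a queueing package rather than anything hand-rolled like this):

```typescript
// Illustrative only: run tasks with a fixed number of parallel "threads".
const runQueue = async <T>(tasks: (() => Promise<T>)[], concurrency = 2): Promise<T[]> => {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  const worker = async () => {
    while (next < tasks.length) {
      const index = next++;
      results[index] = await tasks[index]();
    }
  };
  await Promise.all(Array.from({ length: Math.min(concurrency, tasks.length) }, worker));
  return results;
};

// Entries would be queued in status order: create first, then update, then two-pass stubs.
```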
The process currently continues if any errors are encountered; often an error loading an asset or entry early on will cause further errors later, because we then attempt to create entries containing missing dependencies.
The Management API uses a fetch wrapper, `enterprise-fetch`, that employs its own timeout and retry mechanism. A retry policy is in place to time out any call after 60s and retry any failed call 3 times. We do not retry 404, 409 and 422 errors. We can also divert requests to a proxy such as Fiddler to debug the API requests made by the service.
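Expressed as configuration, that policy looks roughly like the sketch below (the option names are an assumption for illustration, not copied from the `enterprise-fetch` API):

```typescript
// Assumed shape for illustration only - check the enterprise-fetch docs for its real options.
const fetchPolicy = {
  timeout: 60_000,              // abort any Management API call after 60s
  retries: 3,                   // retry failed calls up to 3 times
  doNotRetry: [404, 409, 422],  // client errors that will never succeed on retry
};

// Requests can also be diverted through a local proxy (e.g. Fiddler on 127.0.0.1:8888)
// to inspect the Management API traffic made by the service.
const proxy = { host: '127.0.0.1', port: 8888 };
```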
** any dependency may also appear as an array in a repeatable form of the field.
"Failed to create the entry"
is often caused by the guid already existing somewhere in Contensis, usually a remnant of an old entry in one of the SQL tables. This will happen more often if you are deleting and re-loading the same entries again. If you are using transformGuids: false
it is possible that entry already exists in another project.
{
"message": "There are validation errors creating the entry",
"data": [
{
"field": "slug",
"message": "The entry slug 'simple-entry' already exists for the language 'en-GB'"
}
]
}
To fix this you normally need to expose the entry title slug in the source content type and update it in the source entry to be unique.
This feature allows us to copy the contents of one entry field to another. This is useful, for example, when a field is named incorrectly, or was originally specified as one field type but we would like to curate and present this content differently in future.
Copying field data directly from one field to another can only be done with the source and destination field types mentioned in the table below.
When we copy certain field types, a transformation is made to the data to make it compatible with the destination field type.
Copying a field will overwrite any data in the destination field; it will not preserve or respect any data that currently exists or has been manually entered.
Finer-grained control of the field data transformation (including field types not supported directly) can be achieved using a template.
source | destination | notes |
---|---|---|
string | string | |
string | stringArray | |
string | canvas | Content is surrounded within a paragraph block (template can alter the source value) |
string | richText | |
string | richTextArray | |
string | boolean | True if evaluates "truthy" (0, false or null would be false) |
stringArray | stringArray | |
stringArray | string | Multiples separated with newline |
stringArray | canvas | ^ |
stringArray | richText | ^ |
stringArray | richTextArray | |
richText | canvas | |
richText | richText | |
richText | richTextArray | |
richText | string | |
richText | stringArray | |
richTextArray | richTextArray | |
richTextArray | richText | Multiples separated with newline |
richTextArray | canvas | |
boolean | boolean | |
boolean | string | "Yes" or "No" |
boolean | stringArray | ^ |
boolean | integer | True = 1, false = 0 |
boolean | integerArray | ^ |
boolean | decimal | True = 1, false = 0 |
boolean | decimalArray | ^ |
integer | integer | |
integer | integerArray | |
integer | decimal | |
integer | decimalArray | |
integer | boolean | True if evaluates "truthy" (0, false or null would be false) |
decimal | decimal | |
decimal | decimalArray | |
decimal | integer | Truncate any decimal precision (e.g. 44.9 = 44) |
decimal | integerArray | ^ |
decimal | boolean | True if evaluates "truthy" (0, false or null would be false) |
dateTime | dateTime | |
dateTime | dateTimeArray | |
image | image | |
image | imageArray | |
imageArray | imageArray | |
imageArray | image | |
component | component | Source and destination component must contain the same fields |
component | componentArray | ^ |
component.<field type> | <field type> | Supports the field types mentioned above |
componentArray.<field type> | <field type> | ^ at the first position in the array |
<field type> | component.<field type> | Adds the field to existing component object or add new component with just this field |
<field type> | componentArray.<field type> | ^ at the first position in the array |
composer | <field type> | Not supported |
<field type> | composer | Not supported |
canvas | <field type> | Not supported |
Key: ^ = as above
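To put the table in context, a copy-field request might look something like this sketch (the property names are hypothetical, based on this page rather than a documented schema):

```typescript
// Hypothetical example - property names are illustrative, not a documented schema.
const copyFieldExample = {
  contentTypeId: 'blogPost',
  sourceField: 'summary',     // e.g. a richText field that was named or typed incorrectly
  destinationField: 'body',   // e.g. a canvas field we now want to present this content in
  // Optional LiquidJS template for finer-grained control (covered below)
  template: "{{ source_value | remove: '.aspx' }}",
};
```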
If your field type is not supported above, or you wish to modify the output value for the field, we can supply a LiquidJS template making use of the "tags" and "filters" available in LiquidJS to perform custom transformations on our entry field data.
The result after parsing this template will become the new value for the destination field for every entry. Templates allow us to make some very precise adjustments to the field value we will update.
A number of variables are available to use in the liquid template:

- `value` - the value of the source field in the entry
- `existing_value` - any existing value of the target field in the entry
- `target_value` - the value that has been prepared to go into the destination field
- `entry` - the entire entry object (if we need to reference another field in the entry)

These are simple examples of using and chaining LiquidJS filters:

- `"{{ value | capitalize }}"` will capitalise the first letter of the value
- `"{{ value | downcase }}"` will lowercase the entire value
- `"{{ value | downcase | capitalize }}"` will lowercase the entire value then capitalise the first letter
- `"{{ value >= 50 }}"` using logic based on a source field value we can set a boolean to true or false

Use of LiquidJS tags is also available for more complex scenarios.
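For context, a template like those above can be evaluated with the `liquidjs` package roughly as follows; the engine setup here is only a sketch and not necessarily how this tool applies templates internally:

```typescript
import { Liquid } from 'liquidjs';

// Illustrative only: render a template against the variables described above.
const engine = new Liquid();
const output = await engine.parseAndRender('{{ value | downcase | capitalize }}', {
  value: 'HELLO WORLD',
  entry: { title: 'HELLO WORLD' },
});
// output === 'Hello world'
```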
A special variable is available called `source_value` (which is the same as `value`) except this template is parsed and rendered prior to any field transformations taking place. This is useful if you wish to alter the source field value prior to any internal transformations.

Using `source_value` means the `target_value` and `value` variables are not available.
"<h1>{{ source_value }}</h1>"
allows us to surround our source_value
with some text before it is converted into the destination field type (e.g. canvas)"{{ source_value | remove: ".aspx" }}"
will remove any instance of .aspx
from our source valueBecause of the near infinite flexibility provided by Composer field configurations, in order to transform parts of, or the entire contents of a Composer field in an entry to another field type we can only do this by writing our own template to configure how each item in the Composer is to be "rendered" before adding the transformation result to our destination entry field.
If we have the following Composer content in JSON containing a number of different data types or "Composer items":
[
{
"type": "text",
"value": "This is my plain text"
},
{
"type": "markup",
"value": "<p>This is rich <em>text</em> with some <strong>styling</strong></p>"
},
{
"type": "quote",
"value": {
"source": "This is the source",
"text": "This is a quote"
}
},
{
"type": "number",
"value": 123456789
},
{
"type": "boolean",
"value": false
},
{
"type": "location",
"value": {
"lat": 51.584151,
"lon": -2.997664
}
},
{
"type": "list",
"value": [
"Plum",
"Orange",
"Banana"
]
},
{
"type": "iconWithText",
"value": {
"icon": {
"sys": {
"id": "51639de0-a1e4-4352-b166-17f86e3558bf"
}
},
"text": "This is my icon text"
}
},
{
"type": "asset",
"value": {
"sys": {
"id": "e798df96-1de3-4b08-a270-3787b902a580"
}
}
},
{
"type": "image",
"value": {
"altText": "A photo of Richard Saunders.",
"asset": {
"sys": {
"id": "bc6435eb-c2e3-4cef-801f-b6061f9cdad6"
}
}
}
}
]
We could supply a template to pull out specific item types into our destination field.
The example below will take the list field above and allow the content to be copied into any string type field (remove the comments if copy-pasting this example).
# use a "for" tag to iterate over our "value" variable (composer field)
{% for c_item in value %}
# use an "if" tag to match a composer item type of list in the composer array
{% if c_item.type == 'list' %}
# render any list from the composer field, use a "join" filter to convert the value array to a string
{{ c_item.value | join: ', ' }}
# close the "if" tag
{% endif %}
# close the "for" tag
{% endfor %}
A shorthand example similar to the above using only LiquidJS filters: take the `value` (a composer item array), filter just the composer item types of 'list', map just the 'value', take the first found 'list' and concatenate the values into a comma-separated string.
{{ value | where: 'type', 'list' | map: 'value' | first | join: ', ' }}
So a composer field containing this JSON
[{
"type": "list",
"value": [
"Plum",
"Orange",
"Banana"
]
}]
becomes Plum, Orange, Banana
Or we can render the same list field data ready to copy into a Rich text or Canvas field; we are free to decorate any value with the required markup so it is presented and transformed correctly.
{% for c_item in source_value %}
{% if c_item.type == 'list' %}
<ul>
{% for l_item in c_item.value %}
<li>{{l_item}}</li>
{% endfor %}
</ul>
{% endif %}
{% endfor %}
To transform the above Composer content into a Canvas field, we would need to "render" each item in the Composer that we require in the Canvas field as a very simple HTML representation, and this becomes the value we pass to the HTML parser that in turn renders the JSON that allows us to store the Canvas content in Contensis.
The same kind of theory can be applied to any source field we wish to convert to Canvas content.
We must use the `source_value` variable in the template instead of the `value` variable, as the template needs to alter the source value and be applied before the process transforms the value into Canvas.
If the source field (or composer item value) is already a rich text field containing existing markup, we don't need to do any special rendering before this is parsed and converted to Canvas content
{% for c_item in source_value %}
{% if c_item.type == 'image' %}
<img src='https://imageapi.site.com/{{ c_item.value.asset.sys.id }}' alt='{{ c_item.value.altText }}'/>
{% elsif c_item.type == 'quote' %}
<p>{{ c_item.value.source }}</p>
<blockquote>{{ c_item.value.text }}</blockquote>
{% elsif c_item.type == 'markup' %}{{ c_item.value }}
{% else %}<p>{{ c_item.value | join: '</p><p>' }}</p>
{% endif %}
{% endfor %}
We can utilise a LiquidJS template to concatenate multiple field values together and copy to a destination field
Here we will copy the value of the source field to the destination field but also append any existing value if it exists
{{ value }}{% if existing_value %} - {{ existing_value }}{% endif %}
Or we can refer to other fields in the entry using the `entry` variable
{{ entry.text }}{% if entry.heading %} - {{ entry.heading }}{% endif %}
- `npm install`
- `npm run build`
- `npm test`

- `npm start` will build the project and start the api
- `npm run proxy` for debugging network calls - this will do the same as `npm start` except http calls will be passed through a local proxy
- `npm run mock` for debugging mocked tests - same as `npm start` except network calls are disabled and all network responses will be served from recorded mock data
- `npm run record` for recording mocks for tests - same as `npm start` except network calls are captured and saved into the `./nock/recorded/` folder
- `npm run debug` for development with hot-reloading - runs `tsc` in watch mode and starts the api with `nodemon`

`npm test` will run all `mocha` test scripts named `./**/*.spec.js`
NetConnectNotAllowedError: Nock: Disallowed net connect for "localhost:3001...
This error means the underlying network request could not be found in the list of mock requests. This usually means the code has changed in a way that has affected the network calls that are made to the Management API. If it was an intentional change, the failing tests must be "recorded" again by using the `npm run record` script and then making the same call to the API, which will record all the network requests made during the API call and save them (overwriting the existing mock data in the `./nock/recorded` folder).
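For reference, recording with `nock` looks roughly like the sketch below (illustrative; the project's own record script may wire this up differently):

```typescript
import { writeFileSync } from 'fs';
import nock from 'nock';

// Illustrative only: capture real network calls as nock fixtures.
nock.recorder.rec({ dont_print: true, output_objects: true });

// ... exercise the API so the Management API requests are made ...

const fixtures = nock.recorder.play();
writeFileSync('./nock/recorded/example.json', JSON.stringify(fixtures, null, 2));
```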
If the changes made did not warrant a knock-on change to the Management API calls then a bug might have been introduced. You can change your code and rebuild the project each time until you can make all tests pass with the existing mock data (or network calls): `npm run build`, `npm test`.
Debug individual tests or specs by either adding `describe.only(` to the failing test's `describe(` function or by changing the `npm test` script in `package.json` to run the failing test(s) only. When this test eventually passes, revert this `package.json` change and run all tests again.
Attach a NodeJS debugger to the tests as they run by adding `mocha --inspect-brk=9229` to the `mocha ...` part of the `test` script in `package.json`. Each time you run `npm test`, NodeJS will wait for a debugger instance to become attached before continuing; once a debugger is attached the process will begin, hitting any breakpoints set in code.
You can only do this with a "clean" working copy of the project (i.e. there are no uncommitted or modified git-tracked files in the project)
- `npm version {minor|patch|v0.0.0}` - updates the version in `package.json`
- `npm publish`