migratortron
Clone data or complete sites from one Contensis project to any other!
This tool is available as a REST API, CLI, or as a library for use in your own projects.
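The payload referred to throughout this document carries the connection details for the source and target projects along with the option flags described below. A minimal sketch of its shape - the connection field names here are assumptions, only the option flags are taken from this document:

```typescript
// Illustrative payload shape; the ProjectConnection fields are assumptions,
// the option flags are those described in this document.
interface MigrationPayload {
  source: ProjectConnection;
  targets: ProjectConnection[];
  assetHostname?: string;   // used by DownloadAssetContent to build asset URLs
  concurrency?: number;     // parallel commit "threads", defaults to 2
  transformGuids?: boolean; // set false to keep the source guids
  download?: boolean;       // continue as far as downloading asset content
  commit?: boolean;         // actually write entries into the target project(s)
}

// Assumed shape for connecting to a Contensis instance
interface ProjectConnection {
  rootUrl: string;
  projectId: string;
  clientId: string;
  sharedSecret: string;
}
```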
Create a `ContensisRepository` connection to the source project and all target projects from data supplied in the payload. Build a `transformGuid` function with alias and project 'baked in' - required to reliably seed our deterministic guids throughout the process.
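A minimal sketch of how such a deterministic `transformGuid` function could be built, assuming a hash-based scheme (the real implementation may differ): the alias and project are captured in a closure, so the same source guid always maps to the same target guid.

```typescript
import { createHash } from 'crypto';

// Hypothetical sketch: derive a stable target guid from a source guid
// plus the target alias and project ('baked in' via closure).
const createTransformGuid =
  (alias: string, projectId: string) =>
  (sourceGuid: string): string => {
    // Hashing the inputs makes the output deterministic for re-runs
    const hash = createHash('sha1')
      .update(`${alias}:${projectId}:${sourceGuid.toLowerCase()}`)
      .digest('hex');
    // Format the first 32 hex chars as a uuid-shaped string
    return [
      hash.slice(0, 8),
      hash.slice(8, 12),
      hash.slice(12, 16),
      hash.slice(16, 20),
      hash.slice(20, 32),
    ].join('-');
  };

// The same source guid always yields the same target guid
const transformGuid = createTransformGuid('example-alias', 'website');
transformGuid('9a8b7c6d-1e2f-3a4b-5c6d-7e8f9a0b1c2d'); // stable output
```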
`HydrateContensisRepositories` will hydrate each of these repositories with Projects, Content Types and Components from each Contensis instance, finding dependencies and examining relationships to build complete content models.
`GetEntries` will search for entries and load them into the source repository, while searching for the same entries in each target repository (transforming any guids supplied in the query so we can match entries previously created by the tool). Each found entry will be examined for any dependent entries or asset entries and their guids collected. These dependent entries will also be searched for in each repository by their guid to ensure we have all migration dependencies loaded into each repository.
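A sketch of that dependency walk, assuming a `getByIds` fetcher supplied by the repository; the link-scanning logic here is a simplification of what the library actually does:

```typescript
type Entry = { sys: { id: string } } & Record<string, unknown>;

// Scan field values for entry/asset links (objects carrying a sys.id);
// a link may also appear as an array in a repeatable form of the field.
const findLinkedIds = (value: unknown, ids: string[] = []): string[] => {
  if (Array.isArray(value)) value.forEach((v) => findLinkedIds(v, ids));
  else if (value && typeof value === 'object') {
    const maybeLink = value as { sys?: { id?: string } };
    if (maybeLink.sys?.id) ids.push(maybeLink.sys.id);
    else Object.values(value).forEach((v) => findLinkedIds(v, ids));
  }
  return ids;
};

// Load entries plus every dependency they reference, breadth-first,
// until no new guids are discovered.
const loadWithDependencies = async (
  initial: Entry[],
  getByIds: (ids: string[]) => Promise<Entry[]>
): Promise<Map<string, Entry>> => {
  const loaded = new Map<string, Entry>();
  let batch = initial;
  while (batch.length) {
    for (const entry of batch) loaded.set(entry.sys.id, entry);
    // Exclude each entry's own sys object from the scan
    const wanted = [
      ...new Set(batch.flatMap(({ sys, ...fields }) => findLinkedIds(fields))),
    ].filter((id) => !loaded.has(id));
    batch = wanted.length ? await getByIds(wanted) : [];
  }
  return loaded;
};
```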
`BuildEntries` will create a `MigrateEntry` in each target repository for each entry in the source repository. This `MigrateEntry` contains two entries, a `firstPassEntry` and a `finalEntry`.
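A rough shape for this structure; the real `MigrateEntry` type is internal to migratortron and may differ:

```typescript
type Entry = Record<string, unknown>;

// Rough sketch only, assembled from the fields described in this document
interface MigrateEntry {
  firstPassEntry: Entry; // dependencies stripped, created unpublished first
  finalEntry: Entry;     // the complete entry, with only a few fields tweaked
  status: 'create' | 'update' | 'two-pass'; // set after comparison (see below)
  originalId: string;    // the entry's id in the source project
  id: string;            // the transformed id in the target project
}
```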
Each entry is built by looping through all fields in a matched content type and transforming or censoring the field value, depending on the field type and whether we are building an entry ready for a first pass. The same is true for any component fields found, and also for nested components.
A `firstPassEntry` will have any found dependencies stripped out, and a `finalEntry` will have only a few types of field tweaked to prevent errors when loading into Contensis. Each field of the built entry is examined for any asset or entry links, and any guid found is transformed using a prebuilt `transformGuid` function attached to each target repository. This built entry is compared to the source entry and a status is set; this can be `create`, `update` or `two-pass`. We can keep track of the entry by its `originalId` or its transformed target `id`. `BuildAssetEntries` does a similar job when used with asset entries.
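As a simplified sketch, the status for a built entry in one target repository could be derived like this (the library's actual comparison is richer, accounting for the language replacement and field tweaks described below):

```typescript
// Simplified status derivation for a built entry in one target repository
const getStatus = (
  hasUnmetDependencies: boolean,
  existingTargetEntry: object | undefined
): 'create' | 'update' | 'two-pass' => {
  // Dependencies that do not exist yet force a stub-first migration
  if (hasUnmetDependencies) return 'two-pass';
  // No matching entry found in the target repository means a plain create
  return existingTargetEntry ? 'update' : 'create';
};
```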
Language differences between source and target are handled by replacing the language keys inside each built content type or entry with the default language from the target project. So you can load content types and entries from a source project with the `en-GB` language into a target project with the `es-ES` language code, and expect everything to be created with the `es-ES` language code set. The same language replacements are made when comparing source and target for differences, to determine if we need to make any updates to existing content types or entries.
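A sketch of that language-key replacement, assuming entries hold localised field values keyed by language code (e.g. `{ title: { 'en-GB': 'Hello' } }`):

```typescript
type LocalisedFields = Record<string, Record<string, unknown>>;

// Re-key every localised field value to the target project's default language
const replaceLanguageKeys = (
  fields: LocalisedFields,
  sourceLanguage: string, // e.g. 'en-GB'
  targetLanguage: string  // e.g. 'es-ES'
): LocalisedFields =>
  Object.fromEntries(
    Object.entries(fields).map(([fieldId, values]) => [
      fieldId,
      // Fall back to the first value present if the source key differs
      { [targetLanguage]: values[sourceLanguage] ?? Object.values(values)[0] },
    ])
  );

// replaceLanguageKeys({ title: { 'en-GB': 'Hello' } }, 'en-GB', 'es-ES')
// => { title: { 'es-ES': 'Hello' } }
```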
Each field of every entry will be checked for dependencies, and entries that do not already exist will be marked for a "two-pass" migration.
We will create the entry in an unpublished state with all dependency fields removed; this allows any potential dependencies of the entry to be created before we attempt to create the complete entry with all of these links in place.
After all entries marked for create or update have completed, we will make a second pass over the entries marked for a "two-pass" migration, this time creating the final entry with all dependencies present. The entry will then be published.
If a `download` or `commit` flag has been provided the process will continue; otherwise the fetched repository data is mapped into a condensed response format and returned to the caller.
`DownloadAssetContent` will examine any asset entries found in the source repository and - by concatenating `assetHostname` from the payload with the `sys.uri` field - download any files to the local file system that have been assigned a `create` or `update` status in any of the target repositories.
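A sketch of that download step using Node 18+'s global `fetch`; the hostname and paths here are illustrative:

```typescript
import { mkdir, writeFile } from 'fs/promises';
import { dirname, join } from 'path';

const downloadAsset = async (
  assetHostname: string, // from the payload, e.g. 'https://cms.example.com'
  sysUri: string,        // the asset entry's sys.uri field, e.g. '/image-library/photo.jpg'
  outputDir: string
): Promise<string> => {
  // assetHostname concatenated with sys.uri gives the source file URL
  const url = `${assetHostname}${sysUri}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`GET ${url} failed with ${res.status}`);
  // Mirror the asset's uri path under the local output folder
  const filePath = join(outputDir, sysUri);
  await mkdir(dirname(filePath), { recursive: true });
  await writeFile(filePath, Buffer.from(await res.arrayBuffer()));
  return filePath;
};
```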
If a `commit` flag has been provided the process will continue; otherwise the fetched repository data is mapped into a condensed response format and returned to the caller.
`UploadAssetContent` will examine any asset entries found in the source repository that have been assigned a `create` or `update` status in any of the target repositories, and upload the asset, creating or updating the asset entry at the same time.
`CommitEntries` will load each `MigrateEntry` in each target repository into a promise queue. The queue is then executed in parallel with two execution threads by default; this can be overridden by setting the concurrency option in the payload. Each entry will be loaded in order of its migrate status: creating any new entries first, updating existing entries second, and finally updating any stub entries created as part of a two-pass migration with all their dependencies. All entries are published except for entry stubs.
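A sketch of that queue using the `p-limit` package for the concurrency cap (whether migratortron uses p-limit internally is an assumption). Statuses are processed in phases so creates complete before updates, and the two-pass finals run last:

```typescript
import pLimit from 'p-limit';

type MigrateStatus = 'create' | 'update' | 'two-pass';

const commitEntries = async (
  entries: { status: MigrateStatus; commit: () => Promise<void> }[],
  concurrency = 2 // two execution threads by default, per the payload option
): Promise<void> => {
  const limit = pLimit(concurrency);
  // Phase by migrate status: creates first, then updates, then the second
  // pass that completes (and publishes) the two-pass stubs.
  for (const status of ['create', 'update', 'two-pass'] as MigrateStatus[]) {
    const batch = entries.filter((e) => e.status === status);
    await Promise.all(batch.map((e) => limit(() => e.commit())));
  }
};
```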
The process currently continues if any errors are encountered; often an error loading an asset or entry early on will cause further errors later, as we then attempt to create entries containing missing dependencies.
The Management API uses a fetch wrapper, `enterprise-fetch`, that employs its own timeout and retry mechanism. A retry policy is in place to time out any call after 60s and retry any failed call 3 times. We do not retry 404, 409 and 422 errors. We can also divert requests to a proxy such as Fiddler to debug the API requests made by the service.
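That policy, expressed as plain values and a retry predicate; this mirrors the behaviour described above, not enterprise-fetch's actual configuration API:

```typescript
// The retry policy described above, as a sketch
const retryPolicy = {
  timeout: 60_000, // time out any call after 60s
  retries: 3,      // retry any failed call 3 times
  // Client errors that will never succeed on retry are not retried
  doNotRetry: [404, 409, 422],
};

const shouldRetry = (status: number, attempt: number): boolean =>
  attempt < retryPolicy.retries &&
  status >= 400 &&
  !retryPolicy.doNotRetry.includes(status);
```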
** Any dependency may also appear as an array in a repeatable form of the field.
"Failed to create the entry"
is often caused by the guid already existing somewhere in Contensis, usually a remnant of an old entry in one of the SQL tables. This will happen more often if you are deleting and re-loading the same entries again. If you are using transformGuids: false
it is possible that entry already exists in another project.
```json
{
  "message": "There are validation errors creating the entry",
  "data": [
    {
      "field": "slug",
      "message": "The entry slug 'simple-entry' already exists for the language 'en-GB'"
    }
  ]
}
```
To fix this you normally need to expose the entry title slug in the source content type and update it in the source entry to be unique.
This feature allows us to copy the contents of one entry field to another. This is useful, for example, when a field is named incorrectly, or was originally specified as one field type but we would like to curate and present this content differently in future.
The documentation for this is in the Contensis CLI docs.
```sh
npm install
npm run build
npm test
```
- `npm start` will build the project and start the API
- `npm run proxy` for debugging network calls - this will do the same as `npm start` except http calls will be passed through a local proxy
- `npm run mock` for debugging mocked tests - same as `npm start` except network calls are disabled and all network responses will be served from recorded mock data
- `npm run record` for recording mocks for tests - same as `npm start` except network calls are captured and saved into the `./nock/recorded/` folder
- `npm run debug` for development with hot-reloading - runs `tsc` in watch mode and starts the API with `nodemon`
`npm test` will run all `mocha` test scripts named `./**/*.spec.js`.
```
NetConnectNotAllowedError: Nock: Disallowed net connect for "localhost:3001...
```
This error means the underlying network request could not be found in the list of mock requests. This usually means the code has changed in a way that has affected the network calls made to the Management API. If it was an intentional change, the failing tests must be "recorded" again by using the `npm run record` script and then making the same call to the API, which will record all the network requests made during the API call and save them (overwriting the existing mock data in the `./nock/recorded` folder).
If the changes made did not warrant a knock-on change to the Management API calls then a bug might have been introduced. You can change your code and rebuild the project each time until you can make all tests pass with the existing mock data (or network calls): `npm run build`, `npm test`.
Debug individual tests or specs by either adding `describe.only(` to the failing test's `describe(` function, or by changing the `npm test` script in `package.json` to run only the failing test(s); when this test eventually passes, revert this `package.json` change and run all tests again.
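For example, in a mocha spec (`describe`/`it` are mocha globals, typed via `@types/mocha`; the spec names here are illustrative):

```typescript
// Focus the runner on this single spec while debugging;
// remember to remove .only before running the full suite again.
describe.only('GetEntries', () => {
  it('loads dependent entries into each repository', async () => {
    // ...the failing assertions under investigation
  });
});
```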
Attach a NodeJS debugger to the tests as they run by adding `--inspect-brk=9229` to the `mocha ...` part of the `test` script in `package.json`. Each time you run `npm test`, NodeJS will wait for a debugger instance to become attached before continuing; once a debugger is attached the process will begin, hitting any breakpoints set in code.
You can only do this with a "clean" working copy of the project (i.e. there are no uncommitted or modified git-tracked files in the project):

- `npm version {minor|patch|v0.0.0}` bumps the version in `package.json` (and commits and tags the change)
- `npm publish` publishes the new version to the npm registry