etcopydata
SFDX plugin to populate your scratch org and/or developer sandbox with data extracted from multiple related sObjects.
Install the plugin with:
sfdx plugins:install etcopydata
You'll be prompted that this plugin, like any other, is not officially code-signed by Salesforce. If that prompt is bothersome, you can whitelist the plugin.
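As a sketch of the allowlist approach (assuming the Salesforce CLI's usual Unix config location; adjust the path for your platform):
# Assumption: no allowlist file exists yet; this creates one that
# lets the unsigned etcopydata plugin install without the prompt.
echo '["etcopydata"]' > "$HOME/.config/sfdx/unsignedPluginAllowList.json"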
Alternatively, to install from source:
git clone https://github.com/eltoroit/ETCopyData.git
cd ETCopyData
npm install --production
sfdx plugins:link .
This plugin is highly configurable via a JSON file named ETCopyData.json, located in the current folder where you run the plugin. If the file does not exist, the plugin creates it before erroring out; this gives you a bare-bones file to modify.
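One way to get that skeleton, as a sketch, is to run any command once before the file exists; the run errors out but leaves a bare ETCopyData.json behind for you to edit:
# Errors out by design on the first run, but writes a skeleton
# ETCopyData.json into the config folder (here, the current directory).
sfdx ETCopyData:export -c .
A complete sample ETCopyData.json looks like this: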
{
  "orgSource": "dhOrg",
  "orgDestination": "soTest",
  "sObjectsData": [
    {
      "name": "Account",
      "ignoreFields": "OwnerId",
      "externalIdField": "LegacyId__c",
      "twoPassReferenceFields": "Field1__c,Field2__c",
      "where": "Industry = 'Technology'",
      "orderBy": "Name"
    }
  ],
  "sObjectsMetadata": [
    {
      "name": "User",
      "matchBy": "Email",
      "fieldsToExport": "FirstName,LastName,Email,Id",
      "where": null,
      "orderBy": null
    }
  ],
  "rootFolder": "./ETCopyData",
  "includeAllCustom": true,
  "customObjectsToIgnore": null,
  "stopOnErrors": true,
  "ignoreFields": "OwnerId, CreatedBy, CreatedDate, CurrencyIsoCode",
  "copyToProduction": false,
  "twoPassReferenceFields": "LinkedA__c,LinkedB__c,LinkedC__c",
  "deleteDestination": true,
  "useBulkAPI": true,
  "bulkPollingTimeout": 1800000
}
Field | Default | Data Type | Description |
---|---|---|---|
orgSource | null | String | SFDX alias given to the org (production, sandbox or scratch org) that has the data you want to export. |
orgDestination | null | String | SFDX alias given to the org (production, sandbox or scratch org) that receives the data that you import. |
sObjectsData | [] | sObjectsData[] | List of custom or standard sObjects that the data will be exported from and imported into. |
sObjectsMetadata | [] | sObjectsMetadata[] | Metadata sObjects that will be used for importing your data. |
rootFolder | null | String | Folder used to store the exported data and from which the data will be imported. |
includeAllCustom | false | Boolean | True if you want all custom sObjects, false if you only want the ones listed in the sObjectsData section. |
customObjectsToIgnore | null | String | If you have a large list of custom sObjects and want to import most of them, it may be easier to include all custom sObjects and exclude a few of them. |
stopOnErrors | true | Boolean | True to stop when errors occur while deleting or importing data; false to report the errors but continue execution. |
ignoreFields | null | String | List of fields to ignore for every sObject, each separated with a comma. Example: "Field1__c, Field2__c, Field3__c" |
copyToProduction | false | Boolean | True to allow loading data into a production org; false to load only into sandboxes and scratch orgs. |
twoPassReferenceFields | null | String | List of fields that need to be updated in a second pass. |
deleteDestination | false | Boolean | True to delete the existing records in the destination org before loading the new records. |
useBulkAPI | false | Boolean | True to use the Bulk API; false to use the REST API. |
bulkPollingTimeout | 1800000 | Integer | Timeout in milliseconds for Bulk API operations. |
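Putting the defaults to work, a minimal configuration might look like this (a sketch; the aliases mySourceOrg and myDestOrg are placeholders):
{
  "orgSource": "mySourceOrg",
  "orgDestination": "myDestOrg",
  "rootFolder": "./ETCopyData",
  "sObjectsData": [
    { "name": "Account" }
  ],
  "sObjectsMetadata": [
    { "name": "User", "matchBy": "Email", "fieldsToExport": "FirstName,LastName,Email,Id" }
  ]
}
With everything else at its defaults, this exports only Account data, does not delete destination records first, and uses the REST API.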
You must provide at least the name of the sObject:
{
  "name": "Account"
}
A fuller entry looks like this:
{
  "name": "Location__c",
  "ignoreFields": "OwnerId, IgnoreField__c",
  "externalIdField": "External_Id_Field__c",
  "twoPassReferenceFields": "LinkedA__c,LinkedB__c,LinkedC__c",
  "where": "State__c = 'Texas'",
  "orderBy": "City__c"
}
This is the structure for each sObject:
Field | Default | Data Type | Description |
---|---|---|---|
name | N/A | String | Required field. SObject API name rather than the label, which means that custom sObjects end with __c. |
ignoreFields | null | String[] | List of fields to ignore for this sObject; this list is combined with the global ignoreFields field. |
externalIdField | null | String | API name of the external ID field to use when an upsert operation is desired. |
twoPassReferenceFields | null | String[] | For imports, lists the fields that must be set in a separate update because they reference an sObject that has not been loaded yet. |
where | null | String | Restricts which records are exported. |
orderBy | null | String | For exports, determines the order of the records that are exported. |
For metadata sObjects, you must provide at least the name and the matchBy field:
{
  "name": "User",
  "matchBy": "Email"
}
A fuller entry looks like this:
{
  "name": "User",
  "matchBy": "Email",
  "fieldsToExport": "FirstName,LastName,Email,Id",
  "where": null,
  "orderBy": "LastName"
}
This is the structure for each metadata sObject:
Field | Default | Data Type | Description |
---|---|---|---|
name | N/A | String | Required field. SObject API name rather than the label. |
matchBy | N/A | String | Required field. Determines how a record in the source org is matched to its equivalent record in the destination org. |
fieldsToExport | N/A | String[] | List of fields that will be exported for each metadata sObject. |
where | null | String | Restricts which records are exported. |
orderBy | null | String | For exports, determines the order for the metadata records that are exported. |
ETCopyData fully supports importing references between SObjects, both Lookup and master/detail relationships.
ETCopyData automatically determines an import order based on the Lookup and master/detail relationships that are exported and not flagged as twoPassReferenceFields, sorting the list of sObjects so that every sObject is imported after the sObjects it references.
ETCopyData imports the data for the sObjects in that order, keeping track of the mapping between Ids in the source set and their equivalent Ids in the target system. When importing a reference field, it can then immediately set the correct Id in the target system.
If your data model is tree-like, no additional configuration is needed to automatically import all references. If your data model contains cyclic references or self-references, additional configuration using the twoPassReferenceFields setting is required. An example cyclic reference is sObject A having a lookup field to sObject B while sObject B has a lookup field to sObject A. An example self-reference is sObject A having a lookup field to sObject A.
If your data model contains one of these types of references, you will get the following error during import:
Deadlock determining import order, most likely caused by circular or self reference, configure those fields as twoPassReferenceFields
Configuring twoPassReferenceFields is a manual process. In general, if you have two SObjects that reference each other through a single Lookup relationship in each SObject, you only need to flag one of those fields as a twoPassReferenceField.
As an example, assume you have two sObjects that reference each other through a lookup field on each side: A__c has a lookup to B__c, and B__c has a lookup field RefA__c to A__c.
If your dataset contains 1000 A__c records and 10 B__c records, the optimal configuration is to flag B__c.RefA__c as a twoPassReferenceField. On import, ETCopyData first loads the B__c records while skipping RefA__c, then loads the A__c records with their B__c references resolved immediately, and finally runs a second pass that updates the 10 B__c records to set RefA__c. Flagging the field on B__c rather than on A__c means the second pass touches only 10 records instead of 1000.
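In the configuration, that corresponds to sObjectsData entries along these lines (a sketch; A__c, B__c, and RefA__c are the names from the example above):
"sObjectsData": [
  { "name": "A__c" },
  { "name": "B__c", "twoPassReferenceFields": "RefA__c" }
]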
Since the idea of this tool is to copy data between orgs, it can also load data into production. Because that can be very dangerous, copying data to a production org is guarded by security protections, including the copyToProduction setting, which must be explicitly set to true (it defaults to false).
Two practical notes: the default bulkPollingTimeout of 1800000 milliseconds corresponds to 30 minutes, and for large data sets you can set NODE_OPTIONS to --max-old-space-size=8192 to reserve 8 GB of memory.
These are the available commands:
sfdx ETCopyData:compare [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
sfdx ETCopyData:delete [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
sfdx ETCopyData:export [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
sfdx ETCopyData:full [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
sfdx ETCopyData:import [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
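Before the detailed help for each command, here is a sketch of a typical sequence; the org aliases come from the sample configuration, and -c points at the folder holding ETCopyData.json:
# Optional: reserve 8 GB of memory for large data sets.
export NODE_OPTIONS=--max-old-space-size=8192
# 1. Check that both orgs agree on the sObjects' metadata.
sfdx ETCopyData:compare -c . -s dhOrg -d soTest
# 2. Export the configured data from the source org into rootFolder.
sfdx ETCopyData:export -c . -s dhOrg
# 3. Import the exported data into the destination org.
sfdx ETCopyData:import -c . -d soTest
Alternatively, ETCopyData:full performs all of these steps in a single run.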
sfdx ETCopyData:compare [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
Checks the source and destination orgs for any differences in the sObjects' metadata; this helps determine what data can be properly exported/imported.
USAGE
$ sfdx ETCopyData:compare [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel
trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
FLAGS
-c, --configfolder=PATH Root folder to find the
configuration file
-d, --orgdestination=(alias|username) SFDX alias or username for the
DESTINATION org
-s, --orgsource=(alias|username) SFDX alias or username for the
SOURCE org
--json format output as json
--loglevel=(trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL) [default: warn] logging level for
this command invocation
DESCRIPTION
Checks the source and destination orgs for any differences in the sObjects' metadata; this helps determine what data
can be properly exported/imported.
See code: src/commands/ETCopyData/compare.ts
sfdx ETCopyData:delete [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
Deletes data from the destination org, preparing for the new data that will be uploaded. Note: deleting optionally happens before loading, but if errors occur, this operation can be retried by itself.
USAGE
$ sfdx ETCopyData:delete [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel
trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
FLAGS
-c, --configfolder=PATH Root folder to find the
configuration file
-d, --orgdestination=(alias|username) SFDX alias or username for the
DESTINATION org
-s, --orgsource=(alias|username) SFDX alias or username for the
SOURCE org
--json format output as json
--loglevel=(trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL) [default: warn] logging level for
this command invocation
DESCRIPTION
Deletes data from the destination org, preparing for the new data that will be uploaded. Note: deleting optionally
happens before loading, but if errors occur, this operation can be retried by itself.
See code: src/commands/ETCopyData/delete.ts
sfdx ETCopyData:export [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
Exports the data from the source org, and saves it in the destination folder so that it can be imported at a later time.
USAGE
$ sfdx ETCopyData:export [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel
trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
FLAGS
-c, --configfolder=PATH Root folder to find the
configuration file
-d, --orgdestination=(alias|username) SFDX alias or username for the
DESTINATION org
-s, --orgsource=(alias|username) SFDX alias or username for the
SOURCE org
--json format output as json
--loglevel=(trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL) [default: warn] logging level for
this command invocation
DESCRIPTION
Exports the data from the source org, and saves it in the destination folder so that it can be imported at a later
time.
See code: src/commands/ETCopyData/export.ts
sfdx ETCopyData:full [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
Performs all the steps, including comparing schemas, exporting data from the source, optionally deleting data from the destination, and importing the data to the destination org. This may help you when setting up a new process.
USAGE
$ sfdx ETCopyData:full [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel
trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
FLAGS
-c, --configfolder=PATH Root folder to find the
configuration file
-d, --orgdestination=(alias|username) SFDX alias or username for the
DESTINATION org
-s, --orgsource=(alias|username) SFDX alias or username for the
SOURCE org
--json format output as json
--loglevel=(trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL) [default: warn] logging level for
this command invocation
DESCRIPTION
Performs all the steps, including comparing schemas, exporting data from the source, optionally deleting data from the
destination, and importing the data to the destination org. This may help you when setting up a new process.
See code: src/commands/ETCopyData/full.ts
sfdx ETCopyData:import [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
Imports data into the destination org. You can control whether the data in the destination sObjects should be removed before loading a new data set. The data load happens in a specific order (children first, parents last), which has been determined by checking the schema in the destination org.
USAGE
$ sfdx ETCopyData:import [-c <string>] [-d <string>] [-s <string>] [--json] [--loglevel
trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL]
FLAGS
-c, --configfolder=PATH Root folder to find the
configuration file
-d, --orgdestination=(alias|username) SFDX alias or username for the
DESTINATION org
-s, --orgsource=(alias|username) SFDX alias or username for the
SOURCE org
--json format output as json
--loglevel=(trace|debug|info|warn|error|fatal|TRACE|DEBUG|INFO|WARN|ERROR|FATAL) [default: warn] logging level for
this command invocation
DESCRIPTION
Imports data into the destination org. You can control whether the data in the destination sObjects should be removed
before loading a new data set. The data load happens in a specific order (children first, parents last), which has
been determined by checking the schema in the destination org.
See code: src/commands/ETCopyData/import.ts