@cumulus/common
Changelog
[v1.5.0] - 2018-04-23
- In the `example` folder, `ecs.restartTasksOnDeploy` is set to `true` (CUMULUS-512)
- `granuleIdExtraction` is no longer a property
- `process` is now an optional property
- `provider_path` is no longer a property
- Added a `delete` method to the `@common/CollectionConfigStore` class
- The `example` integration-tests populate providers and collections to the database
- The `example` workflow messages are populated from workflow templates in S3, provider and collection information in the database, and input payloads. Input templates are removed.
- Added the `https` protocol to the provider schema
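The new `delete` method rounds out the `CollectionConfigStore` interface alongside storing and fetching configs. A minimal sketch of the expected call pattern, using a hypothetical in-memory stand-in (the real class persists configs to S3, and the method names and key format here are assumptions):

```javascript
// Hypothetical in-memory stand-in for @common/CollectionConfigStore,
// illustrating a put/get/delete interface. The real class is S3-backed;
// the "name___version" key format here is an assumption for illustration.
class InMemoryCollectionConfigStore {
  constructor() {
    this.configs = new Map();
  }

  // Store a collection config under a composite name/version key
  async put(name, version, config) {
    this.configs.set(`${name}___${version}`, config);
  }

  async get(name, version) {
    const config = this.configs.get(`${name}___${version}`);
    if (config === undefined) {
      throw new Error(`Config not found: ${name}.${version}`);
    }
    return config;
  }

  // New in v1.5.0: remove a stored collection config
  async delete(name, version) {
    this.configs.delete(`${name}___${version}`);
  }
}

async function demo() {
  const store = new InMemoryCollectionConfigStore();
  await store.put('MOD09GQ', '006', { process: 'modis' });
  console.log((await store.get('MOD09GQ', '006')).process); // modis
  await store.delete('MOD09GQ', '006');
  console.log(store.configs.size); // 0
}

demo();
```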
[v1.4.0] - 2018-04-09
- Removed the `findTmpTestDataDirectory()` function from `@cumulus/common/test-utils`
[v1.3.0] - 2018-03-29
- Fixed the `CustomBootstrap` lambda function. Resolves failed deployments where the `CustomBootstrap` lambda function was failing with the error `Process exited before completing request`. This was causing deployments to stall, fail to update, and fail to roll back. This error is thrown when the lambda function tries to use more memory than it is allotted.
- Removed the `cumulus` folder altogether
- Moved `cumulus/tasks` to the `tasks` folder at the root level
- Moved tasks that are not Cumulus Message Adapter (CMA) compliant to `tasks/.not_CMA_compliant`
- `@cumulus/integration-tests`: Added support for testing the output of an ECS activity as well as a Lambda function.
[v1.2.0] - 2018-03-20
- `@cumulus/api`: `kinesis-consumer.js` uses `sf-scheduler.js#schedule` instead of placing a message directly on the `startSF` SQS queue. This is a fix for CUMULUS-359, because `sf-scheduler.js#schedule` looks up the provider and collection data in DynamoDB and adds it to the `meta` object of the enqueued message payload.
- `@cumulus/api`: `kinesis-consumer.js` catches and logs errors instead of doing an error callback. Before this change, `kinesis-consumer` was failing to process new records when an existing record caused an error, because it would call back with an error and stop processing additional records. It keeps trying to process the record causing the error because its "position" in the stream is unchanged. Catching and logging the errors is part 1 of the fix. Proposed part 2 is to enqueue the error and the message on a "dead-letter" queue so they can be processed later (CUMULUS-413).
- Fixed an `aws.cloudwatchevents()` typo in `packages/ingest/aws.js`. This typo was the root cause of the error `Error: Could not process scheduled_ingest, Error: : aws.cloudwatchevents is not a constructor` seen when trying to update a rule.
- `@cumulus/ingest/aws`: Removed `queueWorkflowMessage`, which is no longer being used by `@cumulus/api`'s `kinesis-consumer.js`.
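The per-record catch-and-log pattern described above can be sketched as follows. Both `processRecord` and the record shape are hypothetical stand-ins for the real kinesis-consumer internals:

```javascript
// Sketch of catching and logging per-record errors instead of failing the
// whole batch. processRecord and the record shape are hypothetical; the real
// handler in @cumulus/api's kinesis-consumer.js does more work per record.
async function processRecord(record) {
  const data = JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString());
  if (!data.collection) throw new Error('Record is missing a collection');
  return data;
}

async function handleRecords(records) {
  const results = [];
  for (const record of records) {
    try {
      results.push(await processRecord(record));
    } catch (err) {
      // Logging instead of calling back with an error keeps the remaining
      // records in the batch flowing; a dead-letter queue (CUMULUS-413) would
      // be the place to park the failed record for later reprocessing.
      console.error(`Failed to process record: ${err.message}`);
    }
  }
  return results;
}
```

Because the consumer's position in the stream only advances when the handler succeeds, erroring out of the batch leaves it stuck retrying the same bad record; catching and logging the error lets processing move past it.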
[v1.1.4] - 2018-03-15
- Added `useList` to parse-pdr [CUMULUS-404]
[v1.1.3] - 2018-03-14
[v1.1.2] - 2018-03-14
- A `yarn e2e` command is available for end-to-end testing
- `@cumulus/deployment` deploys DynamoDB streams for the Collections, Providers and Rules tables, as well as a new lambda function called `dbIndexer`. The `dbIndexer` lambda has an event source mapping which listens to each of the DynamoDB streams. The `dbIndexer` lambda receives events referencing operations on the DynamoDB table and updates the Elasticsearch cluster accordingly.
- `@cumulus/api` endpoints for collections, providers and rules only query DynamoDB, with the exception of LIST endpoints and the collections' GET endpoint.
- Split up the `kes.override.js` of `@cumulus/deployment` into multiple modules and moved it to a new location
- Changed `getLambdaOutput` to return the entire lambda output. Previously `getLambdaOutput` returned only the payload.
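The `getLambdaOutput` change means callers now see the full Cumulus message rather than just its `payload` field. A rough before/after sketch, with a made-up invocation result (the helper bodies here are illustrative, not the real test-utils implementation):

```javascript
// Sketch of the getLambdaOutput behavior change. The invocation result below
// has a typical Cumulus message shape; the function bodies are illustrative.
const lambdaResult = {
  cumulus_meta: { execution_name: 'demo-execution' },
  meta: { provider: { id: 'demo-provider' } },
  payload: { granules: [{ granuleId: 'granule-1' }] }
};

// Old behavior: only the payload portion was returned
function getLambdaOutputOld(result) {
  return result.payload;
}

// New behavior (v1.1.2): the entire lambda output is returned, so callers
// can also inspect cumulus_meta and meta
function getLambdaOutput(result) {
  return result;
}

console.log(Object.keys(getLambdaOutputOld(lambdaResult))); // [ 'granules' ]
console.log(Object.keys(getLambdaOutput(lambdaResult)));
```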
[v1.1.1] - 2018-03-08
[v1.1.0] - 2018-03-05
- Added a `jlog` function to `common/test-utils` to aid in test debugging
- Added the `useList` flag [CUMULUS-334] by @kkelly51
- The `queue-pdrs` task now uses the `cumulus-message-adapter-js` library
- Updated the `queue-pdrs` JSON schemas
- The `queue-granules` task now uses the `cumulus-message-adapter-js` library
- Updated the `queue-granules` JSON schemas
- Removed the `getSfnExecutionByName` function from `common/aws`
- Removed the `getGranuleStatus` function from `common/aws`
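Adopting `cumulus-message-adapter-js` means a task exports a plain business function and lets the adapter unwrap and rewrap the Cumulus message envelope around it. A simplified, self-contained sketch of that flow; the `runTask` function below is a toy stand-in, not the real library's API:

```javascript
// Toy stand-in for the cumulus-message-adapter-js pattern: the adapter pulls
// the task's input out of the full Cumulus message, runs the business
// function, and writes the result back into the message payload.
// runTask is NOT the real library API; it only illustrates the flow.
function runTask(taskFunction, cumulusMessage) {
  const taskInput = { input: cumulusMessage.payload, config: {} };
  const taskOutput = taskFunction(taskInput);
  return Object.assign({}, cumulusMessage, { payload: taskOutput });
}

// A queue-granules-style task only sees its own input, not the whole message
function countGranules(event) {
  return { granuleCount: event.input.granules.length };
}

const message = {
  cumulus_meta: { execution_name: 'demo' },
  meta: {},
  payload: { granules: [{ granuleId: 'g-1' }, { granuleId: 'g-2' }] }
};

const result = runTask(countGranules, message);
console.log(result.payload.granuleCount); // 2
console.log(result.cumulus_meta.execution_name); // demo
```

The design benefit is that tasks like `queue-pdrs` and `queue-granules` stay ignorant of the envelope: the adapter owns the message contract, so schema changes land in one place.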