
program_leader_portal_py
Allowing our program leads to interact with our participants.
Install the pre-commit hook:
cd program_lead_portal_py
# if you do not have pre-commit already
brew install pre-commit
pre-commit install --install-hooks
Refer to the comments at the top of .pre-commit-config.yaml to learn how to use and update the hooks.
Moving a code change from your computer to production requires two steps: building a Docker image and then deploying that image.
The first step is to branch off of master, make your changes, commit, push, and then open a PR back into master.
Note: We are no longer branching off of dev!
Once the tests pass, your code is up to date, and you have your +2, merge that branch back into master.

That will tell Circle to create a Docker image, run tests, and let you know when it's ready to deploy.
You'll need to install the protobufs library to generate the Python definition files from the .proto files in the www/serializers directory. To install the protobufs binary, follow the instructions at https://github.com/protocolbuffers/protobuf#protocol-compiler-installation. In particular, note that it is not enough to download and extract the .tar.gz or .zip file; you will also need to make the project and install it, as described at https://github.com/protocolbuffers/protobuf/blob/master/src/README.md. To install the protoc executable, once you've extracted the download, cd into the extracted directory and execute:
$ ./configure
$ make
$ make check
$ sudo make install
$ sudo ldconfig
The Python tutorial can be found at https://developers.google.com/protocol-buffers/docs/pythontutorial.
Once you've updated a .proto file, you'll need to generate the Python object definitions from them. E.g.:
$ protoc -I=www/serializers/protobufs --python_out=www/serializers/protobufs www/serializers/protobufs/celery_tasks.proto
This command reads input from the path given by the -I flag and writes the generated Python files to the location indicated by the --python_out flag.
For some reason, you also need to pass the .proto file itself as the final argument in the command.
The instructions in the tutorial don't explain why this is necessary, but they may be updated in the future.
Deploys also happen in two steps. First you deploy to stage, which gives you an opportunity to check stage and/or circulate changes. After that you are ready to deploy to production. Both are done through the Circle UI:

A deploy will ask our Docker image repo for the corresponding image to release and tell Aptible to deploy it. We no longer have to rebuild the image at each step.
Clone the docker_dev repo and let Docker do the work:
cd docker_dev/docker
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
./start_apps.sh plp
dssh docker_plp_1
cd /plp
# start specific service(s) you want
honcho -f Procfile.dev start <service>  # service is web, beat, or worker
# or you can start all services
honcho -f Procfile.dev start
Do not, under any circumstances, store sensitive configuration variables (e.g. database URL, passwords, API keys, account IDs, etc) in config files.
Use environment variables instead.
Production has a superset of the env vars you need on dev.
To get the current list:
aptible config --app ketothrive-plp-prod | cut -d'=' -f1
Do not use the production values for your dev environment!
| Variable | Value | Credstash name |
|---|---|---|
| APP_ENVIRONMENT | dev | |
| APP_TYPE | program_leader | |
| AUTHY_API_KEY | | plp.dev.AUTHY_API_KEY |
| AUTHY_API_PREFIX | | plp.dev.AUTHY_API_PREFIX |
| AWS_ACCESS_KEY | | plp.dev.AWS_ACCESS_KEY |
| AWS_SECRET_KEY | | plp.dev.AWS_SECRET_KEY |
| CELERY_DEFAULT_QUEUE | | plp.dev.CELERY_DEFAULT_QUEUE |
| DATABASE_URL | | plp.dev.DATABASE_URL |
| MESSAGE_BROKER_URL | | plp.dev.MESSAGE_BROKER_URL |
| NPM_TOKEN | | plp.dev.NPM_TOKEN |
| REDIS_URL | | plp.dev.REDIS_URL |
| SALESFORCE_API_KEY | | plp.dev.SALESFORCE_API_KEY |
| SENDGRID_API_KEY | | plp.dev.SENDGRID_API_KEY |
| TWILIO_ACCOUNT_SID | | plp.dev.TWILIO_ACCOUNT_SID |
| TWILIO_AUTH_TOKEN | | plp.dev.TWILIO_AUTH_TOKEN |
| TWILIO_NOTIFY_SERVICE_ID | | plp.dev.TWILIO_NOTIFY_SERVICE_ID |
| TWILIO_PHONE_NUMBER | | plp.dev.TWILIO_PHONE_NUMBER |
| service__identity_service | | plp.dev.service__identity_service |
| service__labs | | plp.dev.service__labs |
Prod also has these; you may ignore them:
DISABLE_WEAK_CIPHER_SUITES
FORCE_SSL
PRIVATE_RSA_KEY
PUBLIC_RSA_KEY
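Since all configuration comes from environment variables, a fail-fast loader makes missing settings obvious at startup. This is a minimal sketch of that pattern; load_config and REQUIRED_VARS are illustrative names, not part of the PLP codebase:

```python
import os

# A subset of the variables from the table above, for illustration.
REQUIRED_VARS = ["APP_ENVIRONMENT", "APP_TYPE", "DATABASE_URL"]

def load_config(env=os.environ):
    """Return a dict of required settings, failing fast on anything missing."""
    missing = [name for name in REQUIRED_VARS if name not in env]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED_VARS}
```

Failing at startup beats discovering a missing key mid-request; the error names every absent variable at once.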
Frontend development and testing should be done in your local development environment, outside of the docker container. It is recommended to use Node Version Manager to make sure you are running the correct version of Node and NPM.
export NPM_TOKEN=$(credstash get plp.dev.NPM_TOKEN)
Install Node Version Manager https://github.com/creationix/nvm#installation
Run nvm install in this directory. This will install the version of Node described in the .nvmrc file and update your local node command to point to that version of Node.
If you activate a different version of Node at some point, run nvm use in this directory to switch back to the correct version for Spark development.
To start the dev server (watch mode for frontend development):
npm i # Install packages if they are missing or out of date
npm run start-dev-server
To build assets with the production build (useful if you want to run the app without developing the frontend):
npm i
npm run build-assets
If you want to just get things up and running, you can run:
make frontend-server
If for some reason your front-end changes do not take effect, restart flask and refresh the page.
Most tests in the server code base use the Pytest framework.
py.test .
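If you need a template to start from, a Pytest test is just a plain function whose name starts with test_ containing bare assertions. The function under test here is illustrative, not part of the codebase:

```python
# test_example.py -- minimal Pytest-style test file.
# slugify is a made-up helper used only to demonstrate the shape of a test.
def slugify(name):
    return name.strip().lower().replace(" ", "-")

def test_slugify_collapses_case_and_spaces():
    assert slugify("  Program Lead ") == "program-lead"
```

Pytest discovers any `test_*.py` file and runs every `test_*` function in it; no class or registration boilerplate is needed.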
We use Python Behave for testing our APIs. These tests can be found in the features/ directory.
You'll need to bring up all services required for PLP and make sure nothing is running at the PLP
port (2900) before running the Behave tests.
Starting the app with ./start_apps.sh plp should bring up all services required for Behave tests.
If you are trying to run Behave tests on baremetal (not in a Docker container), just pass the -l
flag to start_apps.sh (e.g., ./start_apps.sh plp -cl) and it will bring up all services except
the PLP container.
behave
If the Behave process gets stuck for some reason, it's most likely the Flask server that's stuck.
Kill the run_server.py process and Behave should continue.
Run ps aux | grep runserver to find the pid, then kill <pid>.
If you kill Behave directly, you might end up with the test database still in your local Postgres
instance.
If some database cruft is left over, you can list all databases from the psql terminal with \l
and drop each one using drop database <database name>;.
Postgres will prevent you from deleting a database while there is still an open connection to it, in
this case from the runserver.py process.
You can select specific directories or files or tests (by line number of the scenario) when
executing behave:
behave features/login
behave features/login/identity_token.feature
behave features/login/identity_token.feature:45
If you want to run a single test or specific tests, you can tag it/them with a @wip tag
(or any other tag) and then run only the tests with that tag:
behave --tags=wip
You can also pass the --stop flag to behave to have it stop on the first failure.
We use the @flaky tag to mark flaky tests. These tests are run separately in CircleCI.
Doing so allows us to run fewer tests when one of the flaky tests fails.
The flaky tests job also prints the full test output in Circle rather than just the dot-per-test
format we use in the backend_feature_test job.
The job also prints out the explanations for all flaky tests found in
features/flaky_test_explanations.txt at the end of the run.
Furthermore, a failure in the flaky tests job will not cause the overall build to fail.
Adding the @flaky tags should be seen as a last resort.
Please try to diagnose the flaky test or bring it within acceptable success rates before adding the
flaky tag.
Doing so will also reduce the frequency with which the flaky tests job fails.
If you must add the @flaky tag to a test, please also add an explanation in the
features/flaky_test_explanations.txt file.
As mentioned above, frontend tests should be run in your local development environment.
npm i
npm test
To run Jest tests in watch mode (helpful for development):
npm run test:watch
To run the end-to-end Puppeteer tests that exercise the frontend and
backend together, first make sure that the frontend assets are
available. This means either having a Webpack dev server running, or
running the production build with npm run build-assets.
The tests can then be run using the NPM script.
npm run test:end-to-end
An extra NPM script is provided to run the tests in debug mode.
npm run test:end-to-end-debug
This disables headless mode for the Chromium instance and enables
setting breakpoints in .steps.js files using
jestPuppeteer.debug(). See the jest-puppeteer
reference
for more information.
For full documentation on design as well as authoring and debugging end-to-end tests, refer to the End-to-End Testing Guide.
We use feature flags to allow only some users to experience certain features, or to retain the ability to turn something off for everybody. They are created and modified in the feature flag section of the admin app.
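As a rough illustration of how such a flag gate works (the actual storage and API live in the admin app and are not shown here; FLAG_STORE and is_enabled are hypothetical names):

```python
# Hypothetical in-memory flag store; the real flags are managed in the admin app.
FLAG_STORE = {
    "new_dashboard": {"enabled_user_ids": {101, 202}},   # per-user rollout
    "legacy_export": {"kill_switch": True},              # off for everybody
}

def is_enabled(flag_name, user_id):
    """Return True only if the flag exists, is not killed, and targets this user."""
    flag = FLAG_STORE.get(flag_name, {})
    if flag.get("kill_switch"):
        return False
    return user_id in flag.get("enabled_user_ids", set())
```

Unknown flags default to off, which is the safe failure mode when code ships before the flag is created.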
Start all other necessary containers, except the PLP container.
You can do so by using the -l flag with start_apps.sh:
$ ./start_apps.sh plp -cl
Then use the export_vars function provided in the docker_dev functions to export all environment variables for PLP:
$ export_vars plp
This will add all necessary variables to your shell, after modifying them to redirect all traffic to the local ports to which the containers' ports are forwarded.
You should now be able to install all requirements on your baremetal machine, launch the server, and run tests.
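The redirection export_vars performs can be pictured as rewriting each service URL so its host points at the forwarded localhost port. A hypothetical sketch, with made-up PORT_FORWARDS mappings:

```python
from urllib.parse import urlparse, urlunparse

# Hypothetical mapping of container hostnames to forwarded localhost ports.
PORT_FORWARDS = {"postgres": 5432, "redis": 6379}

def redirect_to_localhost(url):
    """Rewrite a service URL's host to localhost, keeping credentials and path."""
    parts = urlparse(url)
    port = PORT_FORWARDS.get(parts.hostname, parts.port)
    netloc = f"localhost:{port}"
    if parts.username:
        netloc = f"{parts.username}:{parts.password}@{netloc}"
    return urlunparse(parts._replace(netloc=netloc))
```

This is only a conceptual model of the rewrite; the real export_vars function lives in docker_dev and may work differently.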
Same as populating below, except you would run the scripts/search/set_up_user_index.py script
instead.
This script will set up the user profile index and make sure tags are mapped properly.
This script will create the index if the index does not already exist.
If the index does already exist, this script will change the mappings so tags are properly handled.
SSH to the production (or staging, if you're trying to update staging) container and run the
scripts/search/populate_user_index.py script:
$ aptible login
$ aptible ssh --app <prod: ketothrive-plp-prod, stage: virta-plp-stage>
root@<aptible container>:/app# python scripts/search/populate_user_index.py
Same as populating, except you would run the scripts/search/update_user_index.py script instead.
This script also supports a dry-run option that will just display the stats (the number of matching,
missing and differing documents) and quit without pushing any changes to the Elasticsearch instance.
You can pass --dry-run, --dryrun, dry-run or dryrun to the script to do a dry run.
You can pass --print-details, --printdetails, print-details or printdetails to the script to
print extra details (missing, differing, and extra keys) on differing documents.
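The tolerant flag spellings can be handled with a simple normalization pass. A sketch of what such parsing might look like (the real script's argument handling may differ):

```python
# Each option accepts several spellings, with or without dashes.
DRY_RUN_SPELLINGS = {"--dry-run", "--dryrun", "dry-run", "dryrun"}
DETAIL_SPELLINGS = {"--print-details", "--printdetails", "print-details", "printdetails"}

def parse_flags(argv):
    """Map any accepted spelling onto the two boolean options."""
    return {
        "dry_run": any(arg in DRY_RUN_SPELLINGS for arg in argv),
        "print_details": any(arg in DETAIL_SPELLINGS for arg in argv),
    }
```

Accepting every spelling up front avoids the frustration of a long index scan aborting over a missing dash.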
Same as populating the Elasticsearch User Index above, except you would run the
scripts/messages/requeue_dropped_messages.py script instead.
This script takes a start_time and an end_time to determine the window in which to look for dropped
text messages in the database.
It will then look through the database for messages that were meant to be sent between the start and
end times that were dropped and requeue them for delivery.
You can also pass the --dry-run or --print-details flags to do a dry run and print the details
of the operation.
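Conceptually, the script filters for messages whose scheduled send time falls inside the window but which were never delivered. A sketch under assumed field names (scheduled_at, delivered), not the script's actual database query:

```python
from datetime import datetime

def find_dropped(messages, start_time, end_time):
    """Return messages scheduled in [start_time, end_time) that were never delivered."""
    return [
        m for m in messages
        if start_time <= m["scheduled_at"] < end_time and not m["delivered"]
    ]
```

The half-open window means consecutive runs with adjacent windows never double-count a message on the boundary.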
The cli/ module exposes an alternative entry point for running Pytest.
run test
If you get errors hinting to remove __pycache__ dirs, or you have stale .pyc files, you can remove those as follows:
run clean