
General Connector Classes for Google Products
Current Wrappers Available
Wrappers In the Pipeline
First we will need to get our own Google project set up so we can get our credentials. If you haven't done this before, you can do so in the [Google API Console](https://console.cloud.google.com/apis/dashboard).
After you have your project set up, OAuth configured, and the optional service account created (only needed for Google BigQuery connections), you are good to install this package.
Make sure to download your OAuth credentials and save them to your working directory as `client_secret.json`.
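Before running anything, you can sanity-check that the credentials file is where the step above says to put it. This is just a standard-library sketch, not part of googlewrapper itself:

```python
from pathlib import Path

# The step above says to save your OAuth credentials in the
# working directory under this exact filename.
creds = Path("client_secret.json")
if not creds.exists():
    raise FileNotFoundError(
        "Download your OAuth credentials from the Google API Console "
        "and save them here as 'client_secret.json'."
    )
print(f"Found credentials at {creds.resolve()}")
```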
```
pip install googlewrapper
```

OR

```
python -m pip install googlewrapper
```
For each project it is recommended to create a virtualenv. Here is a [simple guide on virtual environments](https://github.com/jaceiverson/googlewrapper/blob/master/documentation/VirtualEnv.md).
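For example, the standard `venv` workflow on macOS/Linux looks like this (on Windows, activate with `venv\Scripts\activate` instead):

```
python -m venv venv
source venv/bin/activate
pip install googlewrapper
```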
Example: take a list of URLs from Sheets, grab Search Console data, and import it into BigQuery.
```python
from googlewrapper import GoogleSearchConsole, GoogleSheets, GoogleBigQuery
import datetime as dt

# init our objects
sheets = GoogleSheets(YOUR_URL_HERE)
gsc = GoogleSearchConsole()
gbq = GoogleBigQuery()

# get the urls we want to pull
# remember that sheet1 is the default
sites = sheets.get_column(1)

'''
This one is a bit more technical:
we pull the "Branded Words" column straight from Sheets,
then convert it to a dictionary for use in our GSC object.
Make sure that your url column is the index of your df.
This happens by default if the urls are in the first
column in Google Sheets.
'''
branded_list = sheets.df()['Branded Words'].to_dict()

# assign those sites to GSC
gsc.set_sites(sites)
# assign other GSC variables
gsc.set_date(dt.date(2021, 1, 1))
gsc.set_dims(['page', 'date', 'query'])

# get our data
gsc_data = gsc.get_data()

# print the total clicks/impressions and avg position
# for each site we just pulled, then send it to BigQuery
for site in gsc_data:
    print(
        f"{site}'s Data\n"
        f"Clicks: {gsc_data[site]['Clicks'].sum()}\n"
        f"Impressions: {gsc_data[site]['Impressions'].sum()}\n"
        f"Avg Position: {gsc_data[site]['Position'].mean()}\n\n"
    )
    # send our data into our GBQ tables for storage:
    # the dataset name is our url, the table name is 'gsc'
    gbq.set_dataset(site)
    gbq.set_table('gsc')
    # send the data to GBQ
    gbq.send(gsc_data[site])
```
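Judging from the loop above, `gsc.get_data()` returns a dictionary mapping each site URL to a pandas DataFrame with `Clicks`, `Impressions`, and `Position` columns. If you'd rather analyze everything in one frame, a plain-pandas sketch (nothing googlewrapper-specific, and assuming that dict-of-DataFrames shape) could look like:

```python
import pandas as pd

# Stack the per-site DataFrames into one frame; the dict keys
# (site URLs) become the outer level of the index.
combined = pd.concat(gsc_data)

# Total clicks and impressions per site across all pages/dates/queries.
print(combined.groupby(level=0)[["Clicks", "Impressions"]].sum())
```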
I'd love to hear your feedback and suggestions. If you see something you'd like to fix, feel free to clone the repo and make a pull request, OR you can submit an issue/feature request on GitHub.
If you found this library useful, I'd appreciate a coffee. Thanks.