
Scrapy middleware to handle JavaScript pages using Selenium.
$ pip install scrapy-selenium
You should use Python >= 3.6. You will also need one of the Selenium-compatible browsers and its driver executable (e.g. Firefox with geckodriver).
Add the browser to use, the path to the driver executable, and the arguments to pass to the executable to the Scrapy settings:
from shutil import which

SELENIUM_DRIVER_NAME = 'firefox'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('geckodriver')
SELENIUM_DRIVER_ARGUMENTS = ['-headless']  # '--headless' if using chrome instead of firefox
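For reference, here is a minimal sketch of the equivalent Chrome configuration, assuming chromedriver is installed and on your PATH:

from shutil import which

SELENIUM_DRIVER_NAME = 'chrome'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('chromedriver')
SELENIUM_DRIVER_ARGUMENTS = ['--headless']  # Chrome uses the double-dash flag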
Optionally, set the path to the browser executable:
SELENIUM_BROWSER_EXECUTABLE_PATH = which('firefox')
Add the SeleniumMiddleware to the downloader middlewares:
DOWNLOADER_MIDDLEWARES = {
    'scrapy_selenium.SeleniumMiddleware': 800
}
Use the scrapy_selenium.SeleniumRequest instead of the Scrapy built-in Request, like below:
from scrapy_selenium import SeleniumRequest
yield SeleniumRequest(url, self.parse_result)
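For context, here is a minimal sketch of a complete spider using SeleniumRequest; the spider name and start URL are illustrative:

import scrapy
from scrapy_selenium import SeleniumRequest

class ExampleSpider(scrapy.Spider):
    name = 'example'  # hypothetical spider name

    def start_requests(self):
        # illustrative URL; any JavaScript-heavy page works the same way
        yield SeleniumRequest(url='https://example.com', callback=self.parse_result)

    def parse_result(self, response):
        # the response body is the HTML rendered by the Selenium driver
        self.logger.info('Loaded %s', response.url)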
The request will be handled by Selenium, and the request will have an additional meta key named driver, containing the Selenium driver that processed the request.
def parse_result(self, response):
    print(response.request.meta['driver'].title)
For more information about the available driver methods and attributes, refer to the Selenium Python documentation.
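As an illustration, the driver handle exposes the standard Selenium WebDriver API, so attributes such as current_url and page_source are available in the callback:

def parse_result(self, response):
    driver = response.request.meta['driver']
    # current_url and page_source are standard Selenium WebDriver attributes
    print(driver.current_url)
    print(len(driver.page_source))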
The selector response attribute works as usual (but contains the HTML processed by the Selenium driver).
def parse_result(self, response):
    # extract the page title from the Selenium-rendered HTML
    print(response.selector.xpath('//title/text()'))
The scrapy_selenium.SeleniumRequest accepts 4 additional arguments:
wait_time / wait_until
When used, Selenium will perform an explicit wait before returning the response to the spider.
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    wait_time=10,
    wait_until=EC.element_to_be_clickable((By.ID, 'someid'))
)
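Any of the standard expected conditions from selenium.webdriver.support can be passed the same way; for example, a sketch waiting for an element to be present (the selector is illustrative):

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    wait_time=10,
    # presence_of_element_located is a standard Selenium expected condition
    wait_until=EC.presence_of_element_located((By.CSS_SELECTOR, 'div.content'))
)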
screenshot
When used, Selenium will take a screenshot of the page, and the binary data of the captured .png will be added to the response meta:
yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    screenshot=True
)
def parse_result(self, response):
    with open('image.png', 'wb') as image_file:
        image_file.write(response.meta['screenshot'])
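If the spider captures several pages, one option, sketched here with an assumed naming scheme, is to derive the filename from the response URL:

import hashlib

def parse_result(self, response):
    # hash the URL so concurrent screenshots do not overwrite each other
    name = hashlib.sha1(response.url.encode()).hexdigest()
    with open(f'{name}.png', 'wb') as image_file:
        image_file.write(response.meta['screenshot'])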
script
When used, Selenium will execute custom JavaScript code.
yield SeleniumRequest(
    url,
    self.parse_result,
    script='window.scrollTo(0, document.body.scrollHeight);',
)
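Another illustrative use of script; the JavaScript and the element id are assumptions about a hypothetical page, not part of this package's API:

yield SeleniumRequest(
    url=url,
    callback=self.parse_result,
    # remove a hypothetical overlay before the rendered HTML is captured
    script="var el = document.getElementById('overlay'); if (el) el.remove();",
)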