I wrote this program to scrape sitemaps, and the links those sitemaps contain, across multiple servers. To save time on repeated use it is packaged for installation with pip.
Follow the installation instructions below. The docstrings have detailed explanations for use.
This program uses Python 3.8. Install the package with pip and use as needed:

pip install samssimplescraper==0.1.3
The package has two modules.

sitemapscraper is used to scrape sitemaps, and can also scrape further levels of sub-sitemaps. Its methods return lists of the scraped links, which can then be used to scrape the pages you want.

scraper is used to scrape the list returned by sitemapscraper, or a user-made list of links. There is also a method that returns a status check of how many links have been scraped of the total.

If you are unsure where a site's sitemap lives, see: https://writemaps.com/blog/how-to-find-your-sitemap/
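For background on the tag argument used in the examples below: in the standard sitemaps.org protocol every URL sits inside a `<loc>` element, which is why `tag='loc'` appears throughout. A minimal, package-independent sketch of pulling `<loc>` values out of a sitemap index with only the standard library (the sitemap content here is illustrative):

```python
import xml.etree.ElementTree as ET

# A minimal sitemap index as defined by the sitemaps.org protocol.
SITEMAP_XML = """<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap><loc>https://www.example.com/sitemap-posts.xml</loc></sitemap>
  <sitemap><loc>https://www.example.com/sitemap-pages.xml</loc></sitemap>
</sitemapindex>"""

def extract_loc_links(xml_text):
    """Return the text of every <loc> element, ignoring the XML namespace."""
    root = ET.fromstring(xml_text)
    return [el.text.strip() for el in root.iter() if el.tag.endswith('loc')]

links = extract_loc_links(SITEMAP_XML)
print(links)
```

This is roughly what any sitemap scraper has to do under the hood; the package wraps this kind of extraction behind its LinksRetriever methods.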
from samssimplescraper import LinksRetriever
# instantiate LinksRetriever with the sitemap you wish to scrape
links_retriever = LinksRetriever(url='https://www.example.com/sitemap_index.xml')
# get a list of the links using the .get_sitemap_links method; a tag filter can also be passed
mainpage_links = links_retriever.get_sitemap_links(tag='loc')
# if the sitemap has more layers, use .get_next_links to follow the links on those pages
final_links = links_retriever.get_next_links(links=mainpage_links, tag='loc')
Note: If you are not going to continue scraping in the same script then be sure to save your list using pickle:
import pickle
# the data folder is automatically created when LinksRetriever is instantiated
with open('./data/pickled_lists/sitemap_links_list.pkl', 'wb') as fp:
    pickle.dump(final_links, fp)
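In a later script the pickled list can be loaded back before scraping. A round-trip sketch using only the standard library (the placeholder links are illustrative; the path matches the example above):

```python
import os
import pickle

os.makedirs('./data/pickled_lists', exist_ok=True)

# Placeholder list standing in for the links returned by LinksRetriever.
final_links = ['https://www.example.com/page-1', 'https://www.example.com/page-2']

# Save the list, as in the example above ...
with open('./data/pickled_lists/sitemap_links_list.pkl', 'wb') as fp:
    pickle.dump(final_links, fp)

# ... then load it back in a later session before scraping.
with open('./data/pickled_lists/sitemap_links_list.pkl', 'rb') as fp:
    loaded_links = pickle.load(fp)

print(loaded_links == final_links)  # the round trip preserves the list
```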
Next, scrape the HTML from the list of links the LinksRetriever module has produced for you. The files will be saved in the data/scraped_html folder.

from samssimplescraper import Scraper
# pass the list of links, plus the root_url (used for naming the saved files)
Scraper.get_html(link_list=final_links, root_url='https://www.example.com/')
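The status-check method mentioned above reports how many links have been scraped of the total. As a rough do-it-yourself equivalent (a standalone sketch, not the package's API), you can count the files in the data/scraped_html folder against the length of your link list:

```python
import os

def scrape_progress(link_list, html_dir='./data/scraped_html'):
    """Return (scraped, total) by counting saved HTML files against the link list."""
    done = len(os.listdir(html_dir)) if os.path.isdir(html_dir) else 0
    return done, len(link_list)

# Example with a hypothetical three-link list and no files scraped yet.
done, total = scrape_progress(['a', 'b', 'c'], html_dir='./no_such_dir_xyz')
print(f'{done}/{total} links scraped')
```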
See the open issues for a full list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
git checkout -b feature/AmazingFeature
git commit -m 'Add some AmazingFeature'
git push origin feature/AmazingFeature

Distributed under the MIT License. See LICENSE.txt for more information.
Samuel Adams McGuire - samuelmcguire@engineer.com
Pypi Link: https://pypi.org/project/samssimplescraper/0.1.3/
Linkedin: LinkedIn
Project Link: https://github.com/SamuelAdamsMcGuire/simplescraper
FAQs
A tool to help scrape sitemaps and the links they contain.
We found that samssimplescraper demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.