Often I just want to back up a single page from a website. Until now I only had half-working solutions, but today I found one using wget that works really well, so I decided to document it here. That way I won’t have to search for it again, and you, dear readers, can benefit from it too ☺
Update 2020: You can also use the copyweb script from pyFreenet. Install it with

pip3 install --user pyFreenet3

and then run

copyweb -d TARGET_FOLDER URL
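For example, to save a single article into a folder called backup (the folder name and URL here are placeholders of my own choosing, not part of copyweb):

copyweb -d backup https://example.com/some-article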
Since I have not yet done that for the others, I am using wget to catch up for the ones that still exist. I suspect you will want to do the same, so that the recipes you paid your GEZ fees for cannot be taken away from you.

So here is my wget call:

wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories --span-hosts --adjust-extension --no-check-certificate -e robots=off -U 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.4) Gecko/20070802 SeaMonkey/1.1.4' [URL]
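Since that call packs in a lot of options, here is the same command wrapped in a small script with each option annotated. The script name and argument handling are my own sketch; the option descriptions follow the wget manual:

#!/bin/sh
# backup-page.sh: wrapper around the wget call above.
# The script name and argument handling are my own sketch;
# the option descriptions follow the wget manual.
set -eu
url="$1"

# --no-parent: never ascend to the parent directory
# --timestamping: skip files that are no newer than the local copy
# --convert-links: rewrite links so the saved page works offline
# --no-directories, --no-host-directories: put all files in one folder
# --page-requisites: also fetch images, CSS and other embedded files
# --span-hosts: allow requisites hosted on other domains (e.g. a CDN)
# --adjust-extension: append .html/.css where the suffix is missing
# --no-check-certificate: ignore TLS certificate errors (less secure)
# -e robots=off: ignore robots.txt for this download
# -U: send a browser-like User-Agent so the server serves the normal page
wget --no-parent --timestamping --convert-links --page-requisites \
     --no-directories --no-host-directories --span-hosts \
     --adjust-extension --no-check-certificate -e robots=off \
     -U 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.4) Gecko/20070802 SeaMonkey/1.1.4' \
     "$url"

Call it as sh backup-page.sh https://example.com/page. Note that --no-check-certificate and robots=off trade security and politeness for robustness; drop them if the site works without them.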