FAQs



Thank you for taking the time to read these frequently asked questions. I hope they will help you better understand the purpose of the site.

Why did you create this website when The Wayback Machine, Archive.is, and others already exist?

I did this because I just wanted to see if ChatGPT could output a PHP script that I could use to archive a webpage, and sure enough, ChatGPT gave me a pretty sweet PHP script. While the result may not be 100% perfect due to the limitations of my VPS, I have to say I am pretty < expletive> proud.
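
For anyone curious, the general idea behind such a script looks something like the sketch below. This is not the actual code this site runs, just a minimal illustration: fetch a page over HTTP with cURL and write the returned HTML to disk under a random page ID. The URL, directory, and variable names are all hypothetical.

  <?php
  // Minimal sketch of the idea (not the actual script this site runs):
  // fetch a page over HTTP and write the returned HTML to a local file.

  $url     = 'https://example.com/';   // page to archive (hypothetical)
  $saveDir = __DIR__ . '/saved';       // hypothetical storage directory

  $ch = curl_init($url);
  curl_setopt_array($ch, [
      CURLOPT_RETURNTRANSFER => true,  // return the body instead of printing it
      CURLOPT_FOLLOWLOCATION => true,  // follow redirects
      CURLOPT_TIMEOUT        => 30,
      CURLOPT_USERAGENT      => 'Mozilla/5.0 (archive sketch)',
  ]);
  $html   = curl_exec($ch);
  $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
  curl_close($ch);

  if ($html === false || $status >= 400) {
      exit("Could not fetch $url (HTTP $status)\n");
  }

  if (!is_dir($saveDir)) {
      mkdir($saveDir, 0755, true);
  }

  // Give the snapshot a random page ID and store it as plain .html.
  $pageId = bin2hex(random_bytes(8));
  file_put_contents("$saveDir/$pageId.html", $html);
  echo "Saved as $pageId.html\n";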

What can be saved/archived and what can't?

Generally, any basic HTML page, including pages generated by PHP, JSP, ASP, etc., can be saved, but any page that relies heavily on JavaScript (.js), sits behind Cloudflare, or requires authentication will unfortunately either save incompletely or not save at all.

How come some domain names like reddit.com, USPS.com, etc. are blocked? Who blocked whom?

Unlike some of the other archiving sites, which don't seem to restrict (a.k.a. ban) certain domain names from being archived, I've decided to block some domain names from being archived for one of the following reasons:

  • Reddit is blocked because they have restricted pretty much every outside service from getting any information from their site, and there is no point in archiving a page that basically says 403 Forbidden. The domain name is blocked to prevent a bunch of pages that basically say 403 Forbidden from being saved.
  • usps.com is blocked because parts of their site, like tracking.usps.com, for some reason do not save properly: the output you see is a blank page even though the majority of the source code has been saved. Since the pages don't display properly, I have decided to just block the domain entirely; perhaps in the future this will be lifted.
  • The Daily Mail (mailonline.co.uk) and other sites like it have been blocked in order to prevent certain pages from being saved.

Does this site respect/honor robots.txt or noarchive meta tags?

No, it does not, because this site requires a user to manually copy and paste a URL and press a button in order for the page to be saved. This is just like someone visiting a webpage and taking a screenshot or printing the page to PDF.

Other archiving sites no longer respect/honor robots.txt or noarchive meta tags either, probably for the very same reason.

Can I delete a saved/archived page?

No, the public does not have the ability to remove (a.k.a. delete) any pages that have been saved/archived. However, if you need to have a page removed, it is extremely important that you follow these steps to ensure that either (a) all the pages you wish to have removed are actually found and removed, or (b) a whole collection of pages isn't removed when you only want a specific page removed (a.k.a. deleted).

Let us assume that the page in question is nic.br.gy/accounts/demo and you want it removed from the archived collection.

If you want everything at nic.br.gy/accounts/demo removed, use this format: https://saved.br.gy/view?url=< domain name >/accounts/demo. This lets us know that you want everything under that path removed. Also include a brief description or synopsis of why you need it removed.

If you want a specific page removed, but not the entire collection, then please use this format: https://saved.br.gy/saved/< random page ID>. If you need more than one removed (for example pages 1, 5, and 10), please list the URLs, separated by commas or one per line.

Also include a brief description or synopsis of why you want these removed.

Please allow 48 hours for everything to be removed. If more than 48 hours have passed, refresh the page by pressing Ctrl + F5, as computers will sometimes keep a locally cached copy of the page.

How come some domains have been blocked, yet I can still look them up?

If a domain name is blocked, it means we will no longer accept any new pages from that domain into our system; however, any pages that are already in the system will continue to be accessible.
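
To make that behavior concrete, here is a minimal sketch (assumed, not the site's actual code) of how a domain blocklist could work: the check runs only when a new save is submitted, so pages archived before a domain was blocked stay viewable. The list contents and function names are illustrative.

  <?php
  // Assumed sketch of a domain blocklist check that applies to new saves only.
  // Viewing already-archived pages never touches this check.

  $blockedDomains = ['reddit.com', 'usps.com', 'mailonline.co.uk'];  // examples from this FAQ

  function isBlocked(string $url, array $blockedDomains): bool {
      $host = strtolower((string) parse_url($url, PHP_URL_HOST));
      foreach ($blockedDomains as $domain) {
          // Match the domain itself and any subdomain (e.g. tracking.usps.com).
          if ($host === $domain || str_ends_with($host, '.' . $domain)) {
              return true;
          }
      }
      return false;
  }

  // Run only when the "save a page" form is submitted.
  if (isBlocked($_POST['url'] ?? '', $blockedDomains)) {
      exit('This domain is blocked from new saves.');
  }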

What is the difference between "look up a page" and "advanced search," and how do I use them?

Look up a page: This lets you enter a domain name like abc.tld and see how many times abc.tld has been saved, including any pages under it like abc.tld/abc1, abc.tld/abc2, etc.

Advanced search: This requires you to know the exact URL of the specific page. It will not produce a list unless that specific page has been saved more than once; it will only show that specific page and nothing else.

For example, let's say you have a small website and you decided to archive your entire site. You will want to use "look up a page" by entering your domain name, like mysite.tld, and it will pull up all the pages on your site that have been saved, unlike the advanced search, where you need to know the exact URL, like mysite.tld/about_me/index.html.

Can I keep track of which pages I have archived?

Unfortunately, at this time there isn't a way to keep track of which pages you have archived. The best solution is to copy the URLs into Notepad and save the file to your computer, or email the list of URLs to yourself.

How long do you plan on running this archiving site?

I don't have any definite plans to take the site down immediately; however, that could change if the archive starts to fill up the server. A complete snapshot of some webpages can consume at least 10 MB, and with multiple pages that can quickly add up to gigabytes. Unfortunately, I only have a limited amount of space, so I need to use it responsibly.
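
As a rough illustration of how it adds up (assuming the ~10 MB-per-snapshot figure above): 100 snapshots is about 1 GB, and 10,000 snapshots is about 100 GB, which would exhaust most small VPS disks.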

Are there any security vulnerabilities when running this archiving site?

Any time you allow the public to upload anything to a server there is always some risk, but all pages are saved as .html, which reduces the risk a little, since HTML pages can't execute PHP scripts. However, HTML pages can still run JavaScript, so if bad (a.k.a. harmful) code is in the saved page, it can still be used to cause harm.

To reduce the chance of bad code being executed, I try to run everything through PHP's htmlspecialchars() function, which converts certain characters into HTML entities so they are shown as plain text rather than interpreted by the browser, making it more difficult for someone to execute bad code.
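
As a quick illustration (PHP's htmlspecialchars() is assumed here, since the site is PHP-based), escaping turns a script tag into harmless text:

  <?php
  // Illustrative only: escaping defangs a script tag so the browser
  // displays it as text instead of executing it.

  $saved = '<script>alert("bad code")</script>';

  echo htmlspecialchars($saved, ENT_QUOTES, 'UTF-8');
  // Output: &lt;script&gt;alert(&quot;bad code&quot;)&lt;/script&gt;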

This risk is not specific to saved.br.gy; it applies anywhere the public can save something to a server that other people can then access. I would imagine that each respective site has its own safeguards in place.




