Now, this method works with almost any site! Even if the site doesn't run WordPress, that doesn't mean you can't get the file without signing up or handing over your email.

Have you heard of the robots.txt file? If not, it's a small file that tells search engine spiders/crawlers which pages or directories you don't want indexed.

Here's what you do:

1) Get your target domain (the domain/site hosting the file you want).
2) Type the following into the address bar:

Code:
targetdomain.com/robots.txt

Just replace targetdomain.com with the actual domain. Using yesterday's example, the URL would simply be that target domain followed by /robots.txt.

When you open that URL, you'll see something like:

Code:
User-agent: *
Disallow: /RandomBS

The line "Disallow: /RandomBS" means they don't want any spiders/crawlers to crawl or index the subdirectory "RandomBS". So you might wonder: why wouldn't someone want spiders going to that subdirectory? Probably because there's something there!

Now all you have to do is type into the address bar:

Code:
targetdomain.com/RandomBS

and, most of the time, you'll land right on the download page. Voila! And you thought magic didn't exist. This method works on most sites.
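If you want to automate the steps above instead of doing them by hand, here is a minimal Python sketch. It parses a robots.txt body, pulls out every Disallow: path, and builds the full URLs to try. The `targetdomain.com` domain and the sample robots.txt content are placeholders, not a real site.

```python
# Minimal sketch: extract Disallow: paths from a robots.txt body.
# These paths are the candidate "hidden" directories to visit directly.
from urllib.parse import urljoin


def parse_disallows(robots_txt: str) -> list:
    """Return the paths listed under Disallow: directives."""
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:  # a bare "Disallow:" means "allow everything", skip it
                paths.append(path)
    return paths


# Hypothetical robots.txt body, like the example shown above:
sample = """User-agent: *
Disallow: /RandomBS
"""

for path in parse_disallows(sample):
    # Build the full URL to paste into the address bar (placeholder domain).
    print(urljoin("https://targetdomain.com", path))
```

In practice you would fetch the body first (e.g. with `urllib.request.urlopen("https://targetdomain.com/robots.txt")`) and feed it to `parse_disallows`; the parsing step works the same either way.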