Voat is really being crippled by the death of so many image links. In addition to the demise of sli.mg and the unreliability of other hosts, it feels like imgur is cracking down and removing all images that were submitted exclusively to voat.
The problem is that we just can't trust the next image host that pops up to be the one that'll actually last this time. We've been burned too many times for that. That means that no matter what else we do, some of us have to write scraping bots to make backups of all images linked on Voat. But how to redirect the users from the old image URLs to the new ones? I think there are 3 potential solutions:
- Make a change to Voat itself so that it automatically redirects an image host that's down to a fallback host (or shares the load between multiple fallback hosts).
- Create a special redirect plugin for the most popular browsers that will automatically redirect an image host that's been designated down to a fallback host.
- [Most insane solution] Create a peer-to-peer network that Voat users will run, keeping some fraction of all image files on each user's computer, with a browser extension that automatically loads the image corresponding to a URL from this p2p network whenever it's not accessible from the server.
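To make solution #1 concrete, here's a rough sketch of what the redirect logic could look like. The dead-host list and fallback host below are placeholders, and it assumes the mirror serves files under the same path as the original:

```python
# Rough sketch of solution 1: rewrite a link whose host is known-dead to a
# fallback mirror. DEAD_HOSTS and FALLBACK_HOST are placeholders, and this
# assumes the mirror keeps the original path layout.
from urllib.parse import urlsplit, urlunsplit

DEAD_HOSTS = {"sli.mg", "i.sli.mg"}      # hosts we've marked as down (assumed)
FALLBACK_HOST = "mirror.example.com"     # hypothetical backup host

def rewrite_if_dead(url: str) -> str:
    """Return a fallback URL if the original host is on the dead list."""
    parts = urlsplit(url)
    if parts.netloc.lower() in DEAD_HOSTS:
        parts = parts._replace(netloc=FALLBACK_HOST)
        return urlunsplit(parts)
    return url
```

Load-sharing between multiple fallbacks would just mean picking from a list of mirrors instead of one fixed host.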
I'm happy to start coding, but I want to hear from other people first. Do you think one of these ideas is best? Do you have your own? What do you think people would be most likely to actually use?
mbenbernard ago
The destiny of pretty much any service is to eventually disappear, so yes, it's a dead-end. However, solution #1 would allow us to avoid such problems in the future. It's a great solution, actually!
wesofx ago
I think #1 is pretty smart. If you have Voat (or just a comment bot) automatically detect an image link and upload the image to a list of alternate image hosts, there are copies on other hosts if one goes down. EDIT: with bots, it's not so long-term, because comments eventually become outdated once all the hosts available at the time the comment was made are down. Otherwise the bot has to update old comments. An official Voat feature could be much more robust about staying updated.
Genr8r ago
I am a huge ipfs fan. This still probably requires a change to voat, as references to an ipfs hash still need a domain on the front end. The nice thing is the hash works with any domain set up as a node. My fav is http://ipfs.pics.
I have been doing a little hack that still allows for voat's preview feature. Check it out...
https://ipfs.io/ipfs/Qmduj4VjKMMWRLwxcrHugxCTWD1cZ66iiKTeW2JGRZtY9u#.png
Fun fact. I originally found the above image at https://ipfs.pics. However as I write this post that site appears to be down for the first time in my experience. So I found http://www.badoit.me/ which appears to be running the same open source code as ipfs.pics. I popped in the hash after the domain and badoit.me redirected to ipfs.io and is displaying the image off of what I can only assume is the ipfs p2p network. The "permanent" web in action!
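That gateway-swapping trick can be captured in a few lines: given a hash, build candidate URLs for each known gateway and try them in order. The gateway list here is just the hosts mentioned above, and their URL formats are assumed to match ipfs.io's:

```python
# Build candidate URLs for one IPFS hash across several gateways, keeping the
# "#.png" suffix so Voat's preview treats the link as an image. The gateway
# list is illustrative; any host running an IPFS gateway would work, and not
# all of these may use the same /ipfs/<hash> path format.
def gateway_urls(ipfs_hash: str, ext: str = "png") -> list:
    gateways = ["ipfs.io", "ipfs.pics", "www.badoit.me"]
    return [f"https://{g}/ipfs/{ipfs_hash}#.{ext}" for g in gateways]
```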
Genr8r ago
Actually if voat included an ipfs node as part of the site you could write a small script (or htaccess) to automatically swap out any external image host for https://voat.co/ipfs/your_image_hash#.jpg. It would make voat the coolest image host online. Images would only be unavailable on voat if the entire site were down. I can't imagine that ever happening. 😏
I will check my own bits at the door thank you very much.
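The swap itself is basically one regex pass over the comment text. A tiny sketch of that idea, where the backup-hash lookup table (and the example hash in it) is entirely hypothetical:

```python
import re

# Hypothetical lookup from original image URL to the IPFS hash of its backup;
# in practice this would come from whatever the scraper/pinner recorded.
BACKUPS = {
    "https://i.imgur.com/abc123.jpg": "QmFakeHashForIllustrationOnly",
}

URL_RE = re.compile(r"https?://\S+\.(?:jpg|jpeg|png|gif)", re.IGNORECASE)

def rewrite_comment(text: str) -> str:
    """Swap known external image links for Voat-gateway IPFS links."""
    def swap(m):
        h = BACKUPS.get(m.group(0))
        if h is None:
            return m.group(0)                      # no backup known, leave it
        ext = m.group(0).rsplit(".", 1)[1]
        return f"https://voat.co/ipfs/{h}#.{ext}"  # keep the preview trick
    return URL_RE.sub(swap, text)
```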
Lord_of_the_rats ago
im pretty sure that the scraping bot can be a shell script using curl/wget, sed, awk, etc. does anyone want me to try doing that, or did someone else already finish it?
user6675 ago
If you're a regex wizard, that may be doable. Perhaps consider using Python instead? It'd probably involve less hair-pulling and be more maintainable by others.
In either case, please try it. Scraping the popular image hosts like Imgoat gallery individually would be one approach, or you can try to scrape Voat directly and back up every image that has at least one upgoat.
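Either way, the core of the scraper is just "find image links, fetch, store". A minimal sketch of the link-extraction half in Python; the regex is a rough heuristic, not tuned against real Voat or Imgoat markup:

```python
import re

# Minimal sketch of the link-extraction half of a scraper. The regex is a
# rough heuristic: any URL ending in a common image extension, stopping at
# whitespace, quotes, or angle brackets.
IMG_RE = re.compile(r"https?://[^\s\"'<>]+\.(?:jpg|jpeg|png|gif)", re.IGNORECASE)

def extract_image_links(html: str) -> list:
    """Return the unique image URLs found in a page, sorted."""
    return sorted(set(IMG_RE.findall(html)))
```

Downloading each extracted link is then one urllib.request.urlretrieve (or curl) call per URL.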
user6675 ago
IPFS is great! I'd never heard of it until just now. One person can run a regular scraper script against imgoat (for example), and the rest of the peers will pull and replicate the images over time. Then a generic browser extension can be created that will attempt to pull dead links from IPFS. I bet their dev team is already working on something like that.
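For the extension side, the first step is just recognizing an IPFS hash inside a URL. CIDv0 hashes are 46-character base58 strings starting with "Qm", so a quick check might look like this (it only covers v0 hashes, not the newer CIDv1 formats):

```python
import re

# Heuristic check for a CIDv0 IPFS hash: "Qm" plus 44 base58 characters
# (base58 omits 0, O, I and l). Newer CIDv1 formats are not covered.
CIDV0_RE = re.compile(r"^Qm[1-9A-HJ-NP-Za-km-z]{44}$")

def looks_like_cidv0(s: str) -> bool:
    return bool(CIDV0_RE.fullmatch(s))
```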
JustFeelsGoodMan ago
Use .htaccess files server-side to rewrite URLs, perhaps? It's what I would do, but I'm a nerd like that.
Reddit_traitor ago
i think the p2p may be the best idea.