Every day, new services are born and others have to shut down. Nothing lasts forever, and websites certainly don't.
So, here is the one-liner for Gists:
page=0; while let page++; wget -q -O - "https://api.github.com/users/$USER/gists?page=$page&per_page=100" | grep -o 'git://.*\.git'; do :; done | while read git_url; do git clone "$git_url"; done
and for public repos:
page=0; while let page++; wget -q -O - "https://api.github.com/users/$USER/repos?page=$page&per_page=100" | grep -o 'git://.*\.git'; do :; done | while read git_url; do git clone "$git_url"; done
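In case the looping trick is unclear, here is the repo one-liner written out long-form. The exit status of grep, the last command in the while condition, keeps the loop going as long as a page still contains git:// clone URLs, and everything the condition prints is piped into the cloning loop:

page=0
while
    let page++    # advance to the next page on every pass
    # grep is the last command in the condition list, so the loop
    # runs until a page yields no more git:// clone URLs
    wget -q -O - "https://api.github.com/users/$USER/repos?page=$page&per_page=100" \
        | grep -o 'git://.*\.git'
do :; done | while read git_url; do
    git clone "$git_url"
done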
You may need to edit $USER to match your username on GitHub. This is only meant for a one-time run and doesn't have any error handling. You should run it in a directory created specifically for storing the repos. It won't update anything if you add new repos afterwards, but you can add a condition check to update a repo that is already present in the filesystem, as sketched below.
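A minimal sketch of that check, assuming each repo sits in the current directory under the name git clone gives it by default; swap the final cloning loop for this one:

while read git_url; do
    dir=$(basename "$git_url" .git)   # git://github.com/you/foo.git -> foo
    if [ -d "$dir/.git" ]; then
        (cd "$dir" && git pull)       # already cloned: update instead
    else
        git clone "$git_url"
    fi
done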
There is no need for me or anyone else to use it yet, since GitHub is alive and probably won't go out of business or shut down anytime soon.
I had this thought because another service I used when I was still on Twitter has closed, after being merged into a bigger company.
Six or seven years ago, I lost data on a hard disk. Since then, I have tried not to store data only on local disks. I am lazy and never want to do backups. It's not that it's hard; the script is easy to write. I just don't like keeping the backup hard drive plugged in when the system only needs it once in a while, while a backup is in progress.
Certainly, you can pay for so-called cloud storage, or just remote backup storage. That doesn't appeal to me: someday those services will be gone too, and making a backup of a backup feels like, well, WTH was I doing in the first place?
Anyway, backing up public stuff on GitHub is easy.
Updated on 2012-03-15: If you also use Bitbucket, here is the one-liner.
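A sketch of it, assuming the Bitbucket 1.0 REST API of the time, that all the repositories are git rather than Mercurial, and that each clone URL follows the https://bitbucket.org/$USER/$slug.git pattern:

wget -q -O - "https://api.bitbucket.org/1.0/users/$USER" | grep -o '"slug": *"[^"]*"' | sed 's/.*"\(.*\)"/\1/' | while read slug; do git clone "https://bitbucket.org/$USER/$slug.git"; done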