For years, I've been accumulating an archive of files at home on an external drive connected to a Mac mini. These are files that I don't expect to need in the future, but at the same time wouldn't really be happy about losing: snapshots of the filesystems of computers I've retired, some source media, and so on. All in all, the drive contains some 300 GB of data. Since hard drives are prone to dying, especially in a hot climate like southern Spain, I've been using the excellent Arq from Haystack Software to keep an offsite copy of these files in Amazon Glacier, costing me about $3 per month.

Then a few weeks ago, I got to thinking that I'd probably be fine storing these files exclusively in Amazon Glacier and doing away with the local hard drive altogether. The risk I'd be exposed to is the probability that Glacier loses a file, multiplied by the probability that I'd actually need that particular file.

So with that in mind, I formulated a plan to make it happen: using my other server, a fat-piped Mac mini hosted with one of the Mac mini colocators, I would use Arq to restore my 300 GB of data there, and then re-upload it all using Transmit to an Amazon S3 bucket configured with a lifecycle rule to immediately transfer it all into Glacier.

(You might be wondering why I didn't just leave Arq's data in place and ditch the source drive. First, as far as I can tell, Arq is a tool to back up data, not move it; it seems to prefer that the local data stick around. Second, Arq encrypts data before uploading to Glacier, and so I'd forever be dependent on Arq to restore it. You might also be wondering why I didn't just upload the local data to Amazon S3. That'd be because I live in southern Spain, where we have awful ADSL, and I didn't want to spend another seven months waiting for the upload to happen.)

Except that, as we all know, plans never work out as planned.

The first step was getting Arq's data restored to my internet-hosted Mac mini. So I fired up Arq on that machine, clicked on my Glacier backup, and entered my passcode to start the restore process. I knew that access to Glacier files is a delayed process, but I didn't realize that the amount you pay to restore files depends on how fast you want to download them, something known as the "peak retrieval rate". Naturally, I slid the bandwidth slider to the max and hit go, literally a fraction of a second before seeing the peak retrieval rate calculation appear. So I accidentally ended up paying something like $150 to restore my data.

## Uploading to Amazon S3 with Transmit

Arq is amazingly reliable software, and after a few hours it had successfully restored all 300 GB of my data. At that point, I was ready to move it all back into a newly created Amazon S3 bucket. Naturally, I fired up my tried-and-true, wonderfully intuitive file transfer software, Transmit.