item.227762
Riwam (macrumors 65816)
I don't know if this is the right place to ask, but I hope so.
I have a DiskWarrior 4.4 DVD .cdr file. In previous versions of OS X I could boot a Mac from a DVD, but current Macs do not seem to allow it (I now have Mavericks on a new Mac Pro). To be able to check the new Mac Pro's main drive from outside with DiskWarrior (in case I one day need to), I tried, unsuccessfully, to make a bootable USB stick from that DiskWarrior file. I googled and found on blog.lewan.com step-by-step instructions for converting any ISO file into an .img (not needed in my case, since a .cdr is already an image) and for transferring the resulting file to the USB stick using hdiutil and diskutil in Terminal. The idea was that by pressing the Alt key at boot I could choose to boot from the stick and run DiskWarrior. So I thought. However, I have not found any way to make it work. Neither copying the .cdr file directly to the USB stick, nor converting the .cdr (which is already an image!) into a .dmg, nor any other approach (for instance, using Restore in Disk Utility) gave me a bootable USB stick with DiskWarrior on it. When I press the Alt key at boot, I see only the small EFI volume created on the USB stick, not the DiskWarrior part of it. Any help will be greatly appreciated!
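For reference, the Terminal procedure that kind of blog post describes amounts to roughly the following sketch. The image path and the /dev/disk2 identifier are placeholders for illustration, not the poster's actual setup; run diskutil list first and double-check the device number, since dd will overwrite whatever disk it is pointed at. (Note that a raw-written DVD image still may not show up as bootable on newer Macs, which matches the poster's experience; for DiskWarrior 5, Alsoft's own Recovery Maker is the supported route.)

```shell
# Convert the .cdr master image to a raw read/write image (UDRW).
# hdiutil appends ".dmg" to the output name it creates.
hdiutil convert "$HOME/Desktop/DiskWarrior.cdr" -format UDRW \
    -o "$HOME/Desktop/DiskWarrior.img"

# Find the USB stick's device identifier -- double-check this!
diskutil list

# Unmount the stick's volumes (do not eject), then write the image raw.
diskutil unmountDisk /dev/disk2
sudo dd if="$HOME/Desktop/DiskWarrior.img.dmg" of=/dev/rdisk2 bs=1m

diskutil eject /dev/disk2
```

Writing to the raw device node (/dev/rdisk2 rather than /dev/disk2) is noticeably faster, since it bypasses the buffer cache.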
Barry Levine
Tried (and eventually purchased) Casper from www.fssdev.com. Pretty amazing software. I cloned the internal hard drive (Windows 10) to a USB-connected external, then swapped in the external hard drive. It booted perfectly. I'm using it on my HP desktop and Dell Vostro notebook. It's basically Carbon Copy Cloner for Windows. It works as advertised, and once they finish the final update for Windows 10, it will create not only bootable clones but USB-bootable clones as well. (It does USB-bootable Win7 clones now.)
item.227752
MacInTouch Reader
Ric, today only
Simply a separate shout-out for a well-featured PC backup program, offered for free today via BitsDuJour.com: EaseUS Todo Backup Home.
http://www.bitsdujour.com/software/easeus-todo-backup-home#comments118842
May. 5, 2016
item.227802
M Young
Another upvote for Casper for backing up Windows machines. I never liked Acronis True Image, and Ghost is an old and clunky program. Casper was the closest thing to SuperDuper I found.
item.227811
Rick Cricow
I want to thank everyone who has suggested a program. I'll be digging into them and trying to figure out which is best for us.
I just wish all of our computers could be Macs!
item.227817
MacInTouch Reader
Question for Barry Levine, who posted item number 227762:
Thanks for the Win 10 info for Casper. Their site says nothing (that I can find) about Win 10. Where did you find that info, or did you contact them directly? (The possibility of which just entered my mind as I typed this!)
Thanks!
May. 6, 2016
item.227857
Barry Levine
A MacInTouch reader asked me:
'Thanks for the Win 10 info for Casper. Their site says nothing (that I can find) about Win 10. Where did you find that info, or did you contact them directly (the possibility of which just entered my mind as I type this!)?'
Yes; I emailed them about Windows 10 compatibility, and they emailed back saying that the current version (Casper 8) worked fine in Windows 10 with the exception of the USB booting issue. An internal Windows 10 drive cloned to an external USB HD would clone properly (and boot when installed internally) but would not boot via USB. They hope to restore this function in the upcoming update/upgrade.
May. 13, 2016
item.228138
Simon Pocock
For health reasons, I haven't visited regularly for some time and have rather lost the plot. What is the recommendation on backup software these days? I have used Time Machine since buying a new 15" MacBook Pro last summer (on Yosemite, for printer-driver reasons), and it backs up too frequently in the short term but does not (so far as I can see) allow me to weed out the older backups while keeping, say, one backup each month in perpetuity. That is what I would like to do. I have a 2TB drive for the backups, but they are being deleted oldest-first and don't go back more than 4 or 5 months.
item.228177
Barry Trudell
Re:Backup
I use Carbon Copy Cloner. Read the following link/documentation to see if it can meet all your needs.
Schedule can be set as you require.
As always, note that one backup is not really a backup.. you need at least two, with one kept 'offsite'.
Drives fail.
You may want to consider alternate media for 'permanent/never will change again' files (photos, books, etc.) that you want to keep 'perpetually', and adapt to newer methods as technology evolves over the years. Technology is always changing.. will we have DVDs/CDs in 20 years? (Remember the floppy disk?)
item.228178
Bob Weinstein
Take a look at Arq. It may very well meet your needs, including thinning out the older backups.
item.228189
MacInTouch Reader
Consider, perhaps, QRecall 2.0. I have also been using SuperDuper, the application created by Shirt Pocket, for years, and it has never let me down. It may be seen more as a 'full cloning' app, but it can be used either as a standard 'backup' strategy or as a fully bootable clone-backup maker. Good company, good product, and responsive to troubleshooting and technical inquiries.
I also started using Carbon Copy Cloner last year, and it, too, is a fine and responsive product. Both companies have been around quite a while, and that gives me confidence in using their apps; others may differ and prefer the 'new kids on the block'.
item.228210
David Charlap
Simon Pocock wrote:
'What is the recommendation on backup software these days. I have used Time Machine since buying a new MacBook Pro 15' last summer (on Yosemite for printer-driver reasons) and it backs up too frequently in the short term but does not (so far as I can see) allow me to weed out the older backups whilst keeping, say, one backup each month in perpetuity. That is what I would like to do. I have a 2TB drive for the backups but they are being deleted oldest-first and don't go back more than 4 or 5 months.'
I don't know about 'the' recommendation, but I'll tell you what my recommendation is.
I recommend you use two or three backup strategies. No single strategy, IMO, can cover all your requirements.
Time Machine is great for making its hourly snapshots. It helps a lot when you need to recover from your own mistakes (e.g. overwriting files, accidentally deleting stuff, etc.) But it is useless as a long-term archive, because it purges old backups when the target volume fills up. It's also not bootable, so a whole-system recovery can take a long time (9 hours, the last time I had to do it.)
On my system, I've got about 300 GB of files, most of which don't change a lot. My 3TB backup volume is not full yet and has (mostly) monthly backups going back to when I put the drive into service, about 3 years ago. If your 2TB drive is purging backups after 4-5 months, then you clearly have more data and/or are changing files a lot more. If you use virtual machines, that can easily cause this to happen (since a VM's entire file system exists as one very large file to the host OS.)
In addition to Time Machine, I recommend using a disk cloning package to make periodic fully bootable backups. I use Carbon Copy Cloner, along with two 2TB drives. When I decide I want to make backups (usually every 2-3 weeks), I clone my full system to one of these drives, alternating between them. So if there's a catastrophic failure, I can boot from my backup and not have to wait for a full system restore. These drives are kept off-line (powered off) when I'm not actively using them, reducing the chances that malware may attack them.
Finally, I recommend making archival backups before major changes (e.g. major OS upgrades). I use optical media for this. Back when my total usage was small enough, I would make a full system backup to 2-3 dual-layer DVDs. Today, I don't bother with a full backup, but I do back up everything I consider important (all home directories and software I install to /usr/local, but not the OS itself). So far, I haven't needed these backups, but I have them in case I ever do.
Many people here also like to use an on-line backup system. I don't personally use one, but I recognize their value. Their downside is that they can be slow - especially when making your first initial backup or when trying to do a full-system restore. Also, you must trust the vendor's security model. So do your homework before selecting a product. I would consider them as a useful tool for recovering individual files/projects and as a last-resort if everything else fails, but not a replacement for having local full-system backups.
If your data is critical (and we all have some critical data), you should make sure you keep some backups (whether full-system or just the critical data) off-site. A network/cloud backup system is one solution. Another is to store some of your backups in a separate building (e.g. a safe deposit box). This will protect you if your building is destroyed (fire, tornado, earthquake, hurricane, flood, etc.) Keeping at least one backup in another city, while inconvenient, will protect you against a large-scale disaster, should one occur. If you've got friends or relatives in another location that you visit periodically (e.g., I visit my parents, 200 miles away, once every 2-3 months), you may be able to bring some backups with you and leave them there for storage.
If you let us know your specific requirements (how much data, how critical is it, how much history do you need, etc.) we may be able to provide more concrete suggestions.
item.228230
Tracy Valleau
ChronoSync is probably the most versatile backup software. I replaced Time Machine with it several years ago and have never looked back.
item.228238
Doug Canning
Yes, QRecall is an excellent app! I have been using it for 6 months; version 2.0 is very robust and very reliable. I highly recommend it. I'm not affiliated with the company, just a satisfied user.
item.228255
MacInTouch Reader
'Take a look at Arq. It may very well meet your needs, including thinning out the older backups.'
Funny, I'm seriously thinking about dumping Arq. Just a couple of days ago it deleted all my archived backups -- keeping just the most recent version (which I guess is better than nothing), even though I've been paying Amazon monthly, for years, to store all my old archived backups on AWS-S3. And this is the second time this has happened in the last few years. It has nothing to do with hitting my chosen budget allocation, by the way; I'm nowhere near the limit I set. Also, for reasons I won't bother going into, I'm almost sure the problem is with Arq, not with Amazon's cloud.
Actually, perhaps the archives are still on AWS, but Arq can't see them. I don't know. I do know that googling revealed some other customers who've had similar experiences. My message to the developer (through his built-in Help contact box) went unanswered.
Time Machine, Arq, and QRecall all seem similar to me in terms of their core metaphor. So I looked around, and tried to figure out whether QRecall can do network backups to S3 or other cloud storage. However the QRecall website doesn't mention anything about network backups, and I couldn't find any contact link at all -- maybe if I filled out their account-registration form there would be one, but I shouldn't have to do that to get an answer to a simple pre-sale question.
Or maybe I should buy both ExpanDrive (to mount S3 like a Finder volume) and QRecall (to backup into the S3 pseudo-volume)? But ExpanDrive, and similar programs, seem to get very mixed reviews from customers in terms of reliability, freezes, etc.
Or maybe my version of 'offsite backups' in the future will be limited to my safe deposit box at the bank. But I'm very undisciplined about swapping the two backup drives (one home, one in the safe deposit box) as often as I should. Sometimes months go by.. which is ridiculous, I know.
Opinions?
item.228260
Mark Miller
Some may think it overkill, but I still use Retrospect after all these years. These days it's generally geared to backing up multiple computers to a central location, but its power is what makes it so adaptable for backup scheduling. I can create any type of backup routine I want and also use it to create history backups every year. There were some bad years, but it has come back to be a stable product.
I don't know if you need this, but it also provides client software, which I have installed on my daughter's laptop; it automatically checks in when she uses her laptop and backs her up without any manual oversight. Yes, you do have to pay for a minimum number of clients (1-5, if I remember), so it may not be for you, but this is my recommendation.
May. 16, 2016
item.228283
Michael Blake
A 'MacInTouch reader' asked a lot of questions about backups, and I can't answer them all, but I'll offer my thoughts. I run all sorts of apps and systems for backing up and archiving - not because I'm paranoid, but because I do it for fun now that I'm retired and have more time to play!
Arq: just this week I lost an entire backup archive. It wasn't very old, but no amount of massaging could get the old records to reappear.
QRecall is for disk-to-disk backup; it does a good job of being efficient, but trying to find the one version of the file you're looking to restore can be downright confusing. It's nice and fast and secure, but be ready to spend some time finding a file - and that's assuming you started with all the correct rules, etc., for your backup strategy, which can be confusing too.
I run an iMac, a MacBook and two Minis. I take clones every night using Carbon Copy Cloner (one across the network), plus another clone of each machine using ChronoSync and ChronoAgent across the network to one of the Minis. ChronoSync is also used throughout the day to do plain user-data backups.
Online backups are to BackBlaze - it's cheap, simple and just works.
Other backups to various destinations also take place via other apps including Arq, QRecall, Time Machine etc.
My takeaway: I've only ever 'lost' backup data using apps that back up to their own archive formats - i.e., something that's not directly readable by the Finder. Best success with Time Machine, ChronoSync, CCC, SuperDuper - anything that's native.
BackBlaze isn't native until restored, but never had a problem; if you want more security, use your own key with it.
Worst experiences with Arq, QRecall, iDrive (no de-duplication, no block-level backups, just whole files) and just about anything that uses proprietary archives.
Don't even consider using ExpanDrive to mount an online cloud and backup to it, it'll fall over; ED is really meant for normal, slower file access.
Suggested strategy: Use CCC to do a nightly clone; use ChronoSync to backup user data to an external hard drive, keeping as many versions of files you need; use BackBlaze to backup your system drive and the external hard drive holding the ChronoSync'd data (that way you have an online copy of your backup data and it won't have to be re-uploaded because BackBlaze de-duplicates). Also, use Time Machine to an external drive - yes it screws up sometimes, but it's free and normally works.
Hope it helps, I probably haven't answered your questions, but just shout!
item.228303
MacInTouch Reader
For 228138, Simon Pocock: I don't use Time Machine, because I don't trust Apple solutions - this coming from the perspective of a former AppleCare rep who had to fix their 'easy' solutions when they failed.
I use Carbon Copy Cloner two ways..
1) I make exact copies of the source to the destination, including deleting anything not on the source. This is so that if the source drive dies I have an immediate, weeded-out replacement ready to go - in this case three of them, all the same physical size as the source. In case of a failure I would just crack the case of one of them and do a drive transplant on my MacBook Pro. In CCC I call this task 'Identical' so it appears in the drop-down menu that way; you can name a task at the top of the prefs pane.
2) I have an external drive with much more capacity than the source drive; for that one I just dump everything in and don't delete anything. It's a mess. That setting in CCC (that I made) is called Attic Mode. Once in a while I need some old file or app or something has bitrotted and there it is. Every few years I buy a new physical drive for that purpose. It has saved my bacon a few times.
If I was going to use Time Machine, it would only be after the two methods above, but I'm not sure why I would bother.
As far as Retrospect, I lost a lot of data (full backup on optical discs) due to greedy upgrades and I will never trust it again. The backups were saved in their proprietary format and could not be recovered by me. I much prefer clones because I do not need the backup software to access my backups, just plug in the drive and look for what I need.
item.228321
Rich Hartman
Michael Blake [item 228283] and MacInTouch Reader (228255) both reported problems with Arq and Amazon; at least one of these problems (lost backups) was very recent.
Arq recently released a major update (from v 4.15 to 5.x); I wonder if the above reporters would mind sharing the Arq version they had trouble with?
I have been using Arq for cloud storage (DropBox, Google, Amazon, OneDrive) for about a year with no show-stopping problems; I like the encryption methodology (encryption happens locally; no keys sent to the cloud destinations) and have in general been very happy with the product. The developer responds promptly to problems and has on at least two occasions prepared special interim releases for me to solve a minor problem I was having. I have not yet upgraded to Arq 5 and am hoping the above reports reflect glitches in the new Arq release.
May. 17, 2016
item.228332
Bill Schwartz
I'm also a very happy Arq user and have experienced only top-notch, friendly support from the developer himself.
As I've mentioned here, among its other virtues, Arq passed the Steve Gibson TNO 'Trust No One' test when he studied its underlying technology. Arq is cryptographically sound. Neither the developer nor the cloud service provider nor anyone else but the owner of the data has any way to decrypt the data (assuming a strong password of course), even in the face of a court order, national security letter, FBI harassment, etc.
I would not, however, rely on Arq, or any other single technology, alone for backup. For my particular needs, I use Arq cloud backup + Time Machine backups to two hard drives that rotate in and out of a bank safety deposit box + nightly SuperDuper clones to those same hard drives.
Some might need more, some less, than this to ensure access to their data consistent with their needs.
item.228337
Michael Blake
Rich asked which version of Arq had lost my archive - it was v5.0.11, I'm now running v5.1 with no problems. I should emphasize that the problem was with just one of my Amazon Cloud Drive archives; nine other ACD and more local archives are working fine - but 1 out of 10 is unnerving.
item.228348
Bill Ste
I have been using QRecall at work for two years now and have found it very secure. I have several backup files locally, and several others on Dropbox for off-site. I also back up to a portable hard drive for off-site.
I have found it very reliable. I just had to look for a small CAD file from Sept 2014: I moved the date bar to the date I thought the file existed, and there it was. You can also search, and it will show you all versions.
The developer is very responsive, and gets downright aggressive even if there is something on your local LAN that is causing the problem!
May. 19, 2016
item.228392
Bo Clawson
Michael Blake had a corrupted archive file and we all look for answers when that happens.
How many people here have had bad USB, Firewire, Thunderbolt connection/cables or ports fail in the last few years?
I have had two. Some recent articles from tech engineering companies found a number of bad Thunderbolt cables coming out of Asia.
I make it a rule not to touch my Mac or its cables when they are executing clones and backups, as I've had connectors move and cause failed backups.
May. 20, 2016
item.228492
David Charlap
Bo Clawson wrote:
'How many people here have had bad USB, Firewire, Thunderbolt connection/cables or ports fail in the last few years? ..'
I don't know about 'in the last few years', but I've had FireWire cables go bad, and when they do, the results can be maddening - your devices dismount or produce other kinds of errors, and it seems like complete voodoo when you try to fix it. Until someone suggests you try a new cable, and then everything magically works again.
In my case, it was one of those fashionable cables with transparent insulation (so you can see the braided shield around the conductors and inside the connector). I noticed that there was a distinct green color near the connectors. Clearly, the insulation was not completely sealed, and the copper started corroding. When it corroded enough to impact the cable's performance, everything attached through it started acting flaky.
Sadly, most cables have opaque insulation and connectors, so you can't see any visible signs of corrosion. Equally sadly, cables with transparent insulation aren't common and tend to cost more (since they're typically sold to the case-modding crowd, who are willing to pay extra for style.) But if you have a choice, I would recommend them, precisely so you can see signs of corrosion and replace a cable before you start experiencing actual failures.
item.228499
MacInTouch Reader
Regarding all the discussion here about issues with backing up .sparsebundle files, or using that as a backup format or container..
I'd appreciate any clarification about (1) the difference between sparse bundles and sparse images; and (2) any backup differences between the two.
I have an encrypted .sparseimage file, which auto-mounts at boot time using a password it retrieves from my keychain (the same as my main login password, which I have to enter manually every time I boot).
The sparseimage normally shows up in the Finder as a 'volume'. The filesize is currently 3.25 GB but slowly grows over time. Typically I exclude the mounted 'volume' from my backup programs (since I assume that its contents would be backed up in unencrypted form), but instead tell them to back up the .sparseimage file itself.
As far as I can tell, disk cloners (like CCC) handle this properly, but to this day I still can't figure out what Arq is doing with that file. It takes a long time uploading it to AWS-S3.. but since the contents of that sparseimage frequently undergo slight changes (many times a day), is Arq uploading the whole thing as one big blob, or just the changes, or 'bands,' or whatever? I can't tell.
Any information about the backup implications of sparse images, sparse bundles, or other strange Mac-ish package formats would be most welcome.
[Backing up any disk image that is mounted seems like a very bad idea, since the disk image could presumably change during the course of the backup, leaving the consistency/integrity of the backup file in question. (Actually, this applies to any open file during backup - but a disk image file is a collection of many more files, so it may be a bigger deal.) -Ric Ford]
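For anyone setting up this kind of arrangement, here is a sketch of the pieces involved. The image path, volume name, and size are hypothetical examples, not the poster's actual setup:

```shell
# Create a 1 GB encrypted sparse image; it occupies disk space only as
# data is added to it.
hdiutil create -type SPARSE -encryption AES-128 -size 1g -fs HFS+J \
    -volname Secure "$HOME/Secure.sparseimage"

# Mount it (prompts for the passphrase unless it's stored in the keychain).
hdiutil attach "$HOME/Secure.sparseimage"

# Keep the decrypted, mounted volume out of Time Machine; the
# .sparseimage file itself gets backed up instead.
sudo tmutil addexclusion -p /Volumes/Secure

# Per Ric's note: detach before a backup runs, so the image file is
# quiescent on disk while it's being copied.
hdiutil detach /Volumes/Secure
```

Converting the image to a sparse bundle (hdiutil convert with -format UDSB, if I have the format code right) would also let band-aware backup tools copy only the changed bands instead of the whole file.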
item.228531
Luis Sequeira
To the MacInTouch reader (228499) who was asking about the difference between a sparseimage and a sparsebundle:
They are both treated by the system similarly, as disk images, and both have the ability to grow over time: the 'sparse' here means that even though they have the potential to grow very large (say, 100 GB), they don't reserve that space beforehand; they grow as more content is added.
There is however a significant difference:
- a sparseimage is a single file
- a sparsebundle is a folder usually containing many small files
That is why you are seeing a backup of a sparseimage take a long time, even for a small change: the whole file must be uploaded/copied in full. That does not play well with Time Machine or any other backup method; the sparsebundle is far more flexible in this regard.
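The on-disk difference is easy to see from Terminal. A quick sketch (names and sizes here are arbitrary examples):

```shell
# Create one of each, unencrypted, with a 100 MB maximum size.
hdiutil create -type SPARSE -size 100m -fs HFS+J -volname ImgDemo demo.sparseimage
hdiutil create -type SPARSEBUNDLE -size 100m -fs HFS+J -volname BundleDemo demo.sparsebundle

# The sparse image is one opaque file...
ls -lh demo.sparseimage

# ...while the sparse bundle is a directory whose bands/ folder holds
# the 8 MB chunks that band-aware backup tools can copy individually.
ls demo.sparsebundle/bands
```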
item.228535
MacInTouch Reader
There are two kinds of cables: inexpensive ones that you need to replace often, and well-made cables that cost a little more.
I have found that by buying cables from Monoprice.com, I have far fewer issues with cables and mysterious problems. They even sell FireWire and Lightning cables, along with Thunderbolt-to-video adapters. If it is a cable to connect anything to anything, they sell it. I always check their site first before looking anywhere else. I believe they manufacture their products themselves or have them made to their specifications. I have no association other than being a very happy customer.
item.228536
Matt Schultz
228492
David Charlap wrote:
'Sadly, most cables have opaque insulation and connectors, so you can't see any visible signs of corrosion. Equally sadly, cables with transparent insulation aren't common and tend to cost more (since they're typically sold to the case-modding crowd, who are willing to pay extra for style.)'
It may be stylish, but it's more expensive to mold cable-assembly handles and strain reliefs and to extrude cable jackets using clear granulated compounds instead of industry-standard colors (black, gray, white, etc.). Clear thermoplastic compounds show every internal blotch, air pocket, tooling bleed or extruder hurl - i.e., flaws not seen when using black PVC, polypropylene or polyurethane. Hence a higher scrap factor, which means increased cost. Plus, the cost of purchasing clear compounds for injection molding or extruding is higher.
Further, clear thermoplastics are not as robust and degrade faster when exposed to ultraviolet light. They become brittle and are subject to quick work by their environment. This is probably why the cable jacket and molded handle at the inline connector on David's cable broke down and allowed the copper corrosion.
item.228543
Tim Barton
Found this explanation online:
A sparse disk image is an automatically expanding disk image. In other words, you can create a 50-gigabyte sparse disk image, yet only put 5 megs inside it. The disk image will only take up five megs of space on your hard disk, but will be capable of storing up to 50 gigs of data, should you choose to add it. Note that it auto-expands but does not auto-contract. In other words, if you delete files from the image, you will not regain any free space on your hard disk (although you will on the image). Disk Utility can be used to 'shrink' a sparse image, reclaiming any unused space on the image.
A sparse bundle is essentially the same thing; the only difference is that while a sparse image is one giant file on your disk, a sparse bundle is actually lots of small files (8 megabytes each). They work and look the same way, but you can right-click on a sparse bundle, select 'show package contents' and see the individual 8 meg 'bands'.
The sparse bundle was introduced with OS X 10.5 in order to better support Time Machine (especially with FileVault, where the entire home directory is a sparse bundle). Previously, a backup programme would see the image as one file, and if any changes had been made to it, it would have to recopy the entire image. With a sparse bundle, it can copy only the bands that have changed since the last backup, so backups are much quicker. It is also likely to decrease the chance of data loss, as you could conceivably restore parts of a damaged image.
Basically, where you would otherwise use a sparse image, use a sparse bundle under 10.5. Only use a sparse image if you need backwards compatibility with earlier versions of Mac OS X.
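The band mechanism described above can be simulated with ordinary Unix tools, no hdiutil required: split a file into fixed-size chunks, checksum them, change one byte, and observe that only one chunk's checksum changes. That is exactly why a band-based image backs up faster than a monolithic one. All filenames here are arbitrary.

```shell
set -e
work=$(mktemp -d); cd "$work"

# A 3 MB file standing in for a disk image.
dd if=/dev/zero of=image.raw bs=1024 count=3072 2>/dev/null

# Split it into 1 MB "bands" (band_aa, band_ab, band_ac) and checksum each.
split -b 1048576 image.raw band_
cksum band_* > before.txt

# Change a single byte in the middle of the image (offset 1.5 MB).
printf 'X' | dd of=image.raw bs=1 seek=1572864 conv=notrunc 2>/dev/null

# Re-split and re-checksum: only band_ab differs, so an incremental
# backup would re-copy one band instead of the whole image.
rm band_*
split -b 1048576 image.raw band_
cksum band_* > after.txt
diff before.txt after.txt || true
```

Time Machine backing up a sparse bundle over a network does essentially this with the bundle's 8 MB bands.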
item.228545
MacInTouch Reader
Comment on Item 228499 .. Sparsebundles vs 'Standard DMGs'
[Backing up any disk image that is mounted seems like a very bad idea, since the disk image could presumably change during the course of the backup, leaving the consistency/integrity of the backup file in question. (Actually, this applies to any open file during backup - but a disk image file is a collection of many more files, so it may be a bigger deal.) -Ric Ford]
Answering the original poster's question: from my perspective, the difference between a 'standard' DMG and a .sparsebundle is that the standard DMG is self-contained. It is like a box: when it is mounted, the box is opened; when it is dismounted, the lid is closed and all the data (files) are safe inside. It can then be reliably sent around your drives, network, or the Internet.
By contrast, the .sparsebundle is a package, like a folder, that can be reached into by the file system even when not mounted, creating the possibility of corruption.
Like Ric noted, dismounting before backing up is important. But timing is everything, especially when backing up to external or network drives or the Internet.
It was from using Dropbox that I discovered that Excel opens a temporary copy of the original file into RAM, and the temp file is what's edited. It's given a 'strange' name by Excel. If Dropbox jitters, or is closed, while the file is open, what's left behind is the probably useless 'strange-name' file that was being edited.
All made more complex if files are shared, as we did in our workplace using Dropbox to link remote offices.
May. 21, 2016
item.228562
MacInTouch Reader
In response to my questions about backing up an encrypted sparseimage file, Ric said:
Backing up any disk image that is mounted seems like a very bad idea, since the disk image could presumably change during the course of the backup, leaving the consistency/integrity of the backup file in question. (Actually, this applies to any open file during backup - but a disk image file is a collection of many more files, so it may be a bigger deal.)
Sounds logical, but.. I access the data in the volume emanating from that .sparseimage, intermittently, all day long. Time Machine and Arq both want to run backups (by default) once per hour. What am I supposed to do -- learn when those programs like to run backups, stop everything I'm doing a few minutes earlier, close the iData documents (typically always open) that rely on that volume, dismount the volume, wait for the backup programs to finish running, remount the volume, reopen the documents in iData.. all this once or twice an hour, all day long? And that's not even the work I'm being paid to do, which has nothing to do with any of that! Doesn't sound very practical.. :(
Alternatively, I could tell those backup programs to only run once in the middle of the night, when I'm not likely to be accessing the sparseimage volume so it's unlikely to change during the backup process. But then my main files (which have nothing to do with the sparseimage) would lose the benefit of the hourly backups which are my amateur version of Versioning, and which has saved my butt on more than one occasion.
I suppose I should research backup software flexible enough to back up different items on different schedules, but I don't have time for that now.
Also, in responses to my original post, two other readers seemed to think that, in terms of backing them up, sparsebundles are better than sparseimages, but one other reader seemed to imply the opposite.
All of which leaves me still in the dark, unfortunately.
item.228575
Wire
Anyone who knows anything about how OS X handles data integrity for backup of incrementally changing files: I would love to hear about this. Windows has something called the Volume Shadow Copy Service (Shadow Copies, or some such), which helps coordinate this. I think the Linux logical volume manager (LVM) has a snapshot service, and this may be a very important aspect of Apple Core Storage.
Are volume snapshots used for data integrity in Time Machine? How does this work? Do third-party backup tools handle this as well?
A volume snapshot feature seems way more important as a matter of user awareness and system integrity than TRIM could ever be, yet while there has been wave after wave of Q/As about the significance of TRIM, Core Storage seems relegated to that mysterious dumb feature that's one more small step on the .. path that Apple has taken since abandoning the greatest OS X of all time, Snow Leopard 10.6.8
item.228554
Gary Kellogg
The discussion about sparse bundles vs. sparse images and backups is timely for me, in that I have been using Backblaze for online backups to supplement my Time Machine and clone backups. Backups include encrypted sparse image volumes.
I have had no trouble with this strategy. However, I have never wanted to retrieve a file from an encrypted sparse image volume backup. If I did, I believe I would need to restore the entire sparse image volume, as it is backed up online as a bunch of ~10MB bands containing encrypted gibberish. Having three 20GB+ sparse image volumes, I have been structuring things differently.
I am working with an internal SSD that is 500 GB in size. 'Only' 500 GB in size with my growing media library. So I have been using tiny USB3 128GB Samsung flash drives from Amazon. These have been reasonably fast, reliable, and can be left in place, because they are so tiny. I can just format a drive and then encrypt it as a FileVault 2 volume. I back it up to Backblaze as an external drive when it is mounted. If I ever need an older version of a file, I can just restore the file instead of the entire volume.
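The format-then-encrypt step described above can also be scripted. On the Mavericks/El Capitan-era systems being discussed, FileVault 2-style encryption of an external volume went through Core Storage. A hedged sketch follows; the device identifier and volume name are illustrative assumptions, and since `diskutil` is macOS-only, the commands are assembled and printed here rather than executed:

```shell
# Sketch: erase a flash drive as HFS+ (GPT), then encrypt the resulting
# volume via Core Storage. The device identifier and volume name below
# are placeholders - check "diskutil list" for the real identifier,
# because eraseDisk destroys everything on the target disk.
DISK="disk3"
NAME="Media128"

ERASE="diskutil eraseDisk JHFS+ $NAME GPT $DISK"
ENCRYPT="diskutil coreStorage convert /Volumes/$NAME -passphrase"
printf '%s\n%s\n' "$ERASE" "$ENCRYPT"
```

The `coreStorage convert` step prompts for a passphrase and encrypts in place, which matches the "format a drive and then encrypt it as a FileVault 2 volume" workflow described above.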
This same strategy can be used with encrypted sparse image volumes. Simply mount them on the desktop and add them to the Backblaze volume list. I have read an argument against this as a good practice, because files on such a volume may be subject to change. I may be missing something important, but backing up any mounted volume presents the same issues. I suppose the caveat would be that any application accessing files on such a drive should be shut down. In other words, all the files should be closed.
item.228548
Brian S
Re Sparsebundles and Time Machine performance - particularly on a NAS:
Lots of folks use a NAS (Network-Attached Storage, versus DAS, Direct-Attached Storage) for their Time Machine data, and most NAS machines (Synology being one I'm familiar with) don't use HFS+, so sparsebundle use in this scenario makes sense. So-called 'bands' inside the sparsebundle default to 8MB in size, but as the sparsebundle grows, the number of band files can increase a lot (compare a starting 400MB backup to, say, a 600GB backup) and Time Machine's performance decreases, searching all those additional bands.
For those who are adventurous and have large Time Machine backups and want to improve performance, this link describes one way to increase band size (which effectively reduces the number of bands and thus the number of bands Time Machine needs to search).
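The usual approach boils down to an `hdiutil convert` with a larger band size. A minimal sketch, with assumed bundle names; `sparse-band-size` is given in 512-byte sectors, and since `hdiutil` is macOS-only, the command is assembled and printed here rather than executed (run it only against an unmounted bundle, and keep the original until the new one is verified):

```shell
# Sketch: rewrite a sparse bundle with larger bands (hdiutil is macOS-only,
# so the command is only printed here). Bundle names are assumptions.
SRC="TimeMachine.sparsebundle"
DST="TimeMachine-bigbands.sparsebundle"
BAND_SECTORS=262144   # 512-byte sectors: 262144 * 512 bytes = 128 MB bands

CMD="hdiutil convert $SRC -format UDSB -o $DST -imagekey sparse-band-size=$BAND_SECTORS"
echo "$CMD"
```

Fewer, larger bands means fewer files for Time Machine to enumerate and search, which is the performance win described above.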
item.228557
Kikjou Delatour
I am using an encrypted sparse bundle disk image to back up sensitive documents (medical, passport etc.) and had no problems with Dropbox and even fewer with iCloud Drive. But there is reason for caution (see below).
Some background first: a sparse bundle disk image is essentially a folder with lots of folders and files in it. In Mac-speak, it is called a package. The top (enclosing) folder has a hidden extension that identifies it to the Finder as a package, in this case '.sparsebundle'. There is no special attribute set to make a folder into a package, and hence a package can be synchronised by non-Mac processes, including rsync.
The files inside a sparse bundle disk image are called 'bands': the disk image is broken up into many small pieces. When changes are made to such a disk image, only some bands change. Hence, only a few small files need synchronisation, rather than the large monolithic file of a regular disk image.
The problem arises when the sparse bundle disk image is mounted on two computers at the same time. This is because of Dropbox's normal behaviour, in this case: to duplicate a file and to append timestamp and computer name to one of the two. The same happens to the band files inside a sparse bundle package.
Deleting duplicated band files and ensuring that they maintain the original filenames (1, 2, 3.., d, e, f) will, in most cases, re-create some form of the original sparse bundle disk image, but it is not guaranteed. You will certainly lose all changes made to the disk image by one of the two computers.
Unsurprisingly perhaps, a sparse bundle disk image stored on iCloud Drive is treated like any other document that is in use by two computers concurrently. Hence, if you unmount and remount a sparse bundle disk image that is, or was, mounted on another computer, the Finder asks you which version to open, and you don't need to fiddle with band filenames.
In the end, I recommend using sparse bundle images with file-sharing services only
A) when used only as backup (e.g. with one computer using it) or
B) when the disk is mounted infrequently and unmounted right after use.
I say 'infrequently' because it can take time to synchronise, i.e. on slow connections, particularly after extensive changes to the disk image, and one might accidentally open a half-synchronised disk image on a second computer. Infrequent usage also minimises the chance of leaving a disk mounted by accident.
May. 23, 2016
item.228578
MacInTouch Reader
Apple does not use volume snapshots or anything as sophisticated for Time Machine as what Windows or ZFS use. All they use is the system metadata service to track which directories have been changed since the last backup, and then they compare the files in those directories to what is on the Time Machine volume to identify the files that need to be backed up. They reduce backup overhead by using hard links to the previous version of files/directories when possible. This results in a backup format that can be directly accessed (since it appears as a completely* normal file system) but has none of the benefits of real filesystem snapshots.
*The filesystem needed support for hard links to directories, which can cause infinite recursion if not used carefully.
item.228581
Tracy Valleau
Talking about using sparsebundle (SB) vs sparseimage (SI) as the destination for a backup of other files or volumes:
Perhaps I'm reading some of the replies incorrectly, but there seems to be some confusion about backing files up to a SI or SB. Here's an example:
'Hence, only a few small files need synchronisation, rather than the large monolithic file of a regular disk image.'
(It's hard to say if the author is talking about the SI/SB as a source or a destination here, which may simply be my confusion.)
That said, this seems to imply that if you have (say) a 300GB SI, changing one file on it requires the entire 300 GB to be re-written. That is not the case, any more than it is the case with any other mounted volume.
A mounted image, .dmg, SI or SB is treated by the OS pretty much as any other volume that is mounted.
It is the case that if you are, in turn, going to back up the SI or SB itself to something else, the SI will be copied in its entirety, while (depending on the software you are using) it's likely the SB will only need the changed files backed up.
As to the danger of backing up a mounted SB or SI, remember that it's just a volume, and the 'danger' of it changing during backup is no greater than when you're using CCC or SD to clone any mounted volume. The backup software should handle changed files during backup with aplomb, or you should be using something that does. (CCC and SD both do it properly, as does Chronosync.)
I have not actually run real world tests comparing hundreds of gigs on SI vs SB, and I'm not likely to do so, since my own anecdotal experience is that 'in the real world' the time differences are minimal when they are destinations. (It is necessary to mount the SI, but not the SB, however.) And since I keep multiple independent backups, I have not yet seen a need to backup my sparse file backup-images themselves, as they are end-point destinations.
[I think the key here is the security of the target. If you back up files from an encrypted but mounted disk image, are you backing up to an encrypted target or to an unencrypted target volume?
If you back up an encrypted disk image, it remains encrypted while you create a backup copy on any volume, but that's not the case when you've mounted an encrypted image and back up the unprotected files in the clear. This is where the bundle vs. image distinction may really come into play, if the encrypted images are large. -Ric Ford]
item.228586
MacInTouch Reader
'I access the data in the volume emanating from that .sparseimage, intermittently, all day long.'
I do the same thing with a personal encrypted .sparsebundle that is about 1.5 GB total. To make sure that I have a good backup (not one which has been modified during the backup), I periodically (every few weeks after each important change) manually compress the .sparsebundle as a .zip file duplicate (control click > compress) while the .sparsebundle is not in use. That zipped duplicate lives in my Dropbox folder. Because the zipped duplicate of my personal data is of reasonable size and not constantly being updated, the full uploading of the duplicate does not cause unreasonable bandwidth usage, while helping to ensure that there will be a readable backup. Dropbox automatically puts that zipped duplicate on a total of three machines, each of which is being automatically backed up to a cloud server using various services, each with their own backup software.
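The Finder's Compress command produces a .zip, and the same periodic-archive habit can be scripted. A hedged sketch using `tar` as a plain-shell stand-in for Finder's Compress (the bundle name is invented, and a local directory stands in for ~/Dropbox so the example is self-contained); as above, run it only while the bundle is unmounted:

```shell
# Archive an (unmounted) sparse bundle into a dated file in the Dropbox
# folder. Paths are illustrative; Dropbox-sim stands in for ~/Dropbox.
BUNDLE="Personal.sparsebundle"
DROPBOX="Dropbox-sim"
mkdir -p "$BUNDLE/bands" "$DROPBOX"
echo "band data" > "$BUNDLE/bands/0"

STAMP=$(date +%Y-%m-%d)
tar -czf "$DROPBOX/Personal-$STAMP.sparsebundle.tgz" "$BUNDLE"
```

Because the archive is a single static file, the sync service uploads it once and never sees it half-changed, which is the point of the approach described above.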
This is in addition to a nightly automatically scheduled Carbon Copy Cloner backup that smart copies my entire laptop 1TB SSD (including the 1.5GB original and zipped .sparsebundle duplicate described in the previous paragraph) to a mounted .sparsebundle clone of the entire laptop SSD that resides on a hard drive on a Mac Pro on my local network, and that network clone image is mounted on my laptop.
So the original 1.5GB .sparseimage and/or its zipped duplicate winds up on the disk backups of our three computers, on Time Machine backups, on Dropbox, and doubly encrypted on Carbonite, Backblaze, and CrashPlan continuous cloud backups of the various machines.
With continuous and hourly immediate, possibly-damaged (changed during the copy) backups, plus multiple nightly probably undamaged backups, plus almost certainly undamaged zipped duplicate backups of my personal encrypted 1.5GB .sparsebundle, that wind up in duplicate on three computers (with an occasional extra backup to a triple redundant ZFS RAID) plus Time Capsules hourly plus five rotating 3.5' server backup disks in two locations, plus four rotating 2.5 inch personal laptop backup disks, plus two USB thumb drives, plus occasional M-disk data DVD backups, plus continuous cloud backups using four different services, I'm confident that it is far more likely that I'll be locked up in an insane asylum for exhibiting obsessive backup behavior than it is that I'll actually lose any personal data.. Now if we could just get Apple to use ECC memory in its laptops and a checksum-protected file system, so we could be sure that we're not saving backups of bit-rotted data, we'd be all set!
item.228591
Bob Weinstein
Re
'I suppose I should research backup software flexible enough to back up different items on different schedules, but I don't have time for that now.'
I am a new Arq user. I think that backups to different destinations can be put on different schedules, so that a backup of your 'main folders' could be backed up at a different time than your sparse image.
I would suggest asking the developer directly; he seems to be very responsive and would confirm the above if accurate.
item.228593
Bob Weinstein
I'm also a very happy Arq user and have experienced only top-notch, friendly support from the developer himself; he has been very responsive to support questions.
May. 24, 2016
item.228595
Gary Kellogg
Kikjou Delatour (228557) wrote about foibles in using sparse bundle images with Dropbox, etc. that are shared among multiple computers. It is easy for this to happen. Suppose an image is mounted on Machine A and files on the image are being manipulated. When the manipulations are complete, at least in my experience, the sparse bundle (or any file) will not be resynced with Dropbox until it is unmounted, if a volume, or closed, if it is a document being edited.
Surely, many of us have visited our Dropbox repository and perhaps found instances of sparse bundle images or files shared among machines that were copied to Dropbox and described with language to the effect that, 'This file version is conflicted..'
Here is where the fun begins. The user on Machine A may have made changes to a volume or file but left it open or mounted. However, the user on Machine B may have made changes and dutifully closed or unmounted that file or image. Machine B has thus been synced with its changes, with Machine A's changes not synced until sometime later. Here, we have an inadvertently forked version and work to do.
Delatour writes:
'Deleting duplicated band files and ensuring that they maintain the original filenames (1, 2, 3.., d, e, f) will, in most cases, re-create some form of the original sparse bundle disk image, but it is not guaranteed. You will certainly lose all changes made to the disk image by one of the two computers.'
I have never dared to mess with the bands themselves in attempting to sort issues with conflicted files. Also, I have never seen anything actually corrupted in terms of bands being out of whack, etc. for a given sparse bundle, because Dropbox, I am pretty sure, simply copies the file and does not try to second-guess which bands should go together when there are conflicts among other versions.
I suppose the best practice is to always unmount sparse bundle images and close files right after an editing session. Also, be ready for fast moving collaborations on projects involving Dropbox-synced data to have conflicts at a rate inverse to the amount of communication among team members.
item.228596
Gary Kellogg
MacInTouch Reader 228562 raises some valid questions about continuous or near continuous backup of files and volumes which may be used (open or mounted) almost continuously. The last time I checked into this was five or so years ago and it was in relation to Time Machine, SuperDuper, and Dropbox behaviors. My understanding was that except for SuperDuper, open files and mounted sparse image/sparse bundle volumes were excluded from backups.
SuperDuper recommends closing open files and unmounting sparse bundle images, but strongly implies it will duplicate them even if open. If it comes across an open file that is being edited, my understanding is that it will clone the file as it was last saved to disk, even if it is open and changes have not been saved. I actually wrote a little AppleScript named 'Kill Before Backup' to ensure that files such as my Yojimbo database file and some of the Microsoft Entourage bits would be closed. I wrote another script to 'Start After Backup.'
I use a continuous online service, Backblaze, for my User files (it does not back up System files, Apps, Utilities, etc.). I run Time Machine every other day or so and clone every week or two to a rotating pair of drives. I use Dropbox continuously as well.
I think my habit of closing stuff I am not using and unmounting sparse images when not using them has served me well. It's the machines with lots of stuff open all the time that seem most prone to sync and backup gremlins.
item.228603
Michael Fryd
A MacInTouch reader claimed that Time Machine's backup format 'has none of the benefits of real filesystem snapshots.'
I disagree.
Time Machine's backup format provides two very useful features in common with file system snapshots:
1) Each backup appears as a complete backup. Users can browse individual backups using the Finder or normal file system tools.
2) Files that don't change from one backup to the next share the same on disk storage, thus 20 backups don't take up 20 times the space of one backup.
While one may not like the implementation of Time Machine's backup format, it does offer at least two of the most user visible benefits of a real file system snapshot.
item.228615
Michael Blake
[Re Arq and scheduling:]
Correct, a schedule is tied to a backup service; so different destinations can have different schedules.
However, a backup service, like S3 or Amazon Cloud Drive, can be added more than once as a backup destination. That allows you to have different criteria for backing up different files but to the same destination (same service but different UUIDs). Just be careful if you need to remove a destination, as each is given the same backup service's name.
May. 25, 2016
item.228667
David Charlap
Michael Fryd wrote, regarding 'real filesystem snapshots':
'Time Machine's backup format provides two very useful features .. 1) Each backup appears as a complete backup. .. 2) Files that don't change from one backup to the next share the same on disk storage..'
Agreed. I think Time Machine is very good for the features it is designed to provide - hourly snapshots to an external file system.
But regarding a 'real filesystem snapshot', there are other really useful features that Mac OS X does not provide (whether via Time Machine or anything else). For example, Network Appliance's Snapshots (supported in their file server products - ds-2477.pdf) have a very different behavior.
They integrate snapshots into the lowest levels of the file system itself. Every file uses a database-like system of references to the file's disk blocks. These blocks support copy-on-write semantics.
So, to make a snapshot, you can clone the database records (which, according to their documents, takes only a second or two), resulting in a snapshot of the entire file system (typically mounted as a read-only directory in a special '.snapshot' directory hanging off of each user's home directory). The copy-on-write semantics for the disk blocks means that when snapshotted files are modified, the modified blocks (and only those blocks - not the entire file!) are duplicated and the current version now points to the new blocks (with the snapshot(s) pointing to the old one.)
Deleting snapshots is similarly quick - just delete the records corresponding to a snapshot. Then check the reference counts on the blocks used, freeing those that are no longer used by any snapshot or live file. (This takes longer than a second or two, but it is still pretty fast and takes place in the background.)
NetApp's snapshot system's key advantages over Time Machine are that snapshots are created and destroyed very quickly. If you're so inclined, you could have a system make a snapshot every 5 minutes with minimal impact on system performance. Because it's all done by manipulating database/directory records in the file system, without actually copying any files, it can be extremely fast. (For example, to those of you who have run DiskWarrior, how long does it take to write a new directory to a volume after it's already been created? It usually takes only 2-3 seconds when I've had to do it.)
Time Machine's key advantage over NetApp Snapshots is that the snapshots are stored on a separate volume. This means they can be used as backups. It can support thousands of backups on a single volume, whereas NetApp only supports up to 256 snapshots per volume (NetApp configuration lets you specify intervals for snapshots and the number of snapshots to retain per interval; when new snapshots are created, the oldest ones are automatically deleted). Time Machine's approach also means that when you delete a file, you get the free space (on the source volume) back immediately, whereas with a NetApp box, deleted files continue to consume disk space until the snapshot(s) they belong to are all purged - which could take several days or even weeks, depending on how you have it configured.
I don't think either system is a real replacement for the other. I think the two technologies could complement each other nicely, however (e.g. have the OS make an on-volume snapshot and let Time Machine clone that, to guard against files changing as the backup runs). Unfortunately, I suspect NetApp has a lot of patents protecting their snapshot system, so it seems unlikely that Apple would be able to incorporate that technology into HFS+.
(I am aware that Linux LVM and ZFS also offer snapshot capabilities. I don't know enough about them to know how they differ with respect to NetApp's implementation, which is why I didn't discuss them above.)
item.228672
MacInTouch Reader
Time Machine has many fine features. It is especially useful for finding a backup of a document that was recently deleted, or to look at an older version of a document that was edited. It saves your work throughout the day, and that is a good thing.
Time Machine is not a perfect solution, however. As others have pointed out, files may be open or in an otherwise undetermined state when Time Machine copies them to the backup. A good example is VMware Fusion, where Time Machine makes corrupted backups of active virtual machines. (The solution is to make a separate copy of a shutdown virtual machine 'file', and allow Time Machine to copy it at the next backup.)
Furthermore, I have experienced situations where Time Machine failed to restore a Mac after a hard disk failure/replacement.
In my opinion, it pays to have some type of full-disk backup in addition to Time Machine. MacInTouch readers have suggested Carbon Copy Cloner, SuperDuper, and other backup products. Most of them will make full-disk backups even when the Mac is booted from the same drive that is being backed up. This approach has the same potential issue as described above - some files may be in an undetermined state when the backup program is run.
The solution is to boot from a separate drive and run the whole disk backup application from it. I have a SanDisk Extreme CZ80 64GB USB 3.0 Flash Drive SDCZ80-064G-GAM46. It cost just over $30. Even though my iMac only supports USB 2.0, it runs fast, and I know it will be much faster when I upgrade my Macs to models that support USB 3.0.
I installed the latest OS X on the flash drive and added various backup and disk repair applications, including Carbon Copy Cloner, SuperDuper, DiskWarrior, Onyx, network diagnostic tools, etc. I carry it with my MacBook Pro. When I want to run a full disk backup, I shutdown whichever Mac is to be backed up, and then boot from the Flash Drive by holding down the Option key at startup. This way I know that the files on the Mac's internal drive are closed properly and will be backed up correctly.
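Installing OS X onto such a stick is typically done with Apple's `createinstallmedia` tool, which lives inside the downloaded installer app. A hedged sketch; the installer name and volume path below are assumptions (they vary by OS X version), and since the command is macOS-only and destructive to the target volume, it is assembled and printed here rather than executed:

```shell
# Sketch: the usual command for building a bootable OS X installer stick.
# Both paths are illustrative assumptions; the target volume should be
# pre-erased as Mac OS Extended (Journaled) with a GUID partition map.
APP="/Applications/Install OS X El Capitan.app"
VOL="/Volumes/Untitled"

CMD="sudo \"$APP/Contents/Resources/createinstallmedia\" --volume \"$VOL\" --applicationpath \"$APP\" --nointeraction"
echo "$CMD"
```

After the OS install finishes on the stick, the repair and backup applications (Carbon Copy Cloner, SuperDuper, DiskWarrior, etc.) are installed onto that volume like on any other boot drive, and the stick then shows up in the Option-key boot picker.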
item.228696
Colleen Thompson
Simon Pocock in #228138 asked about recommendations for backup software. So I'll shovel my opinions onto the pile.
The programs I have direct experience with, and like, are Time Machine, Carbon Copy Cloner, SuperDuper, Crashplan, and Chronosync. Some are more appropriate for specific circumstances.
I used to use SuperDuper. At one point several years ago I switched to CCC because as a consultant, I was often cloning failing drives, and SuperDuper would, by design, bail out if it hit a bad block. CCC would skip the bad block but continue the clone, logging the broken file(s). I found this to be tremendously useful. SuperDuper might have added this functionality later; I can't say.
Chronosync, unlike CCC, would let you schedule backups to a network share. This limitation of CCC may have changed in the intervening years. And as others have noted, Chronosync has fantastic granularity.
I like Crashplan for backing up portions of a system, specific document folders usually, between computers on the same network at one of my clients. Of course it has a lot more flexibility beyond that, but I liked the fact I did not need to buy so many external backup drives or use up precious storage on servers, when so much local storage on the workstations was essentially going unused.
Many years ago I administered a network where we used Retrospect, backing up to tapes. It was fine, except now that data is unreadable because of obsolete hardware and software. That's not Retrospect's fault -- you can't ding it for being so long-lived. I just haven't used it for a long long time.
For my own system, I use Time Machine, because it can back up wirelessly to a Time Capsule, requiring no intervention from me. Every couple of days I clone my entire system using CCC to an external drive. Since I use a laptop that doesn't live in one place, this does require human intervention.
I like Time Machine for my non-technical clients because it just works automatically (and almost all the time, contrary to many dismal reports on MacInTouch). I have used Time Machine backups many times to restore someone's system when they got a new computer or their hard drive failed. The biggest problem I've run across is the external backup drive going bad. Many clients will ignore the alerts about backups not being done, for months on end. Kind of mystifying, that.
item.228705
MacInTouch Reader
[Re Arq and scheduling:]
Correct, a schedule is tied to a backup service; so different destinations can have different schedules.
However, a backup service, like S3 or Amazon Cloud Drive, can be added more than once as a backup destination. That allows you to have different criteria for backing up different files but to the same destination (same service but different UUIDs). Just be careful if you need to remove a destination, as each is given the same backup service's name.
Huh? What? That's too complicated for my simple mind to follow.. not to mention that Arq has been screwing up for me even without adding those mind-bending complications. But just out of curiosity: when you say that S3 'can be added more than once as a backup destination'.. do you mean the same bucket in S3, or are you referring to creating an additional data bucket?
May. 30, 2016
item.228799
MacInTouch Reader
Re Colleen Thompson's picks for backup software..
'Chronosync, unlike CCC, would let you schedule backups to a network share.'
I emailed them a while ago to ask whether they could back up to a cloud service, like Amazon's S3, and they said sorry, no. By 'network share' do you mean on a LAN, or have they added WAN / cloud capabilities since I contacted them?
'I like Crashplan..'
Does it, by any chance, enable local encryption on your own computer before sending them data? I mean so that even the backup service themselves can't decrypt or access your files (other than as gibberish). Local encryption is one thing I like about Arq.. although I've found it so unreliable, I'm looking for a replacement. Which, if any, of the integrated cloud backup services (CrashPlan, Backblaze, SpiderOak, JungleDisk, etc.) offer client-side encryption? Also, don't CrashPlan and JungleDisk require that you have a Java runtime engine installed locally, as opposed to being true Mac desktop applications? Or is my info about them out-of-date? Thanks!