Today, Amazon Web Services introduced beta access to object versioning across all S3 regions. This means that you can now save new copies of an object within S3 while retaining the older copies behind the scenes. You can read the developer documentation for this new functionality among the other Amazon Simple Storage Service docs.
I’ll be interested to see how long it takes for someone to build a Time Machine-style backup service on top of this new core functionality, retaining every revision of every file stored in a volume. People will have to be a little careful about how much they store under this model, because it will be very easy to consume a lot of space when keeping duplicates of objects over time. It seems that AWS stores full copies of objects (not deltas between versions), so at least the storage usage should be straightforward to calculate and account for.
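To put some rough numbers on that full-copy model, here is a quick back-of-the-envelope calculation (the file sizes and counts are made up for illustration, not AWS figures):

```python
def versioned_storage_mb(object_size_mb, versions_per_object, num_objects):
    """Total storage when every version is kept as a full copy."""
    return object_size_mb * versions_per_object * num_objects

# A 10 MB file re-saved daily for a month keeps ~30 full copies;
# with 1,000 such files, that's 300,000 MB (~293 GB) of storage.
print(versioned_storage_mb(10, 30, 1000))  # 300000
```

Because versions are full copies rather than diffs, the bill grows linearly with the number of saves, which is easy to predict but adds up fast.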
Versioning is enabled on a per-bucket basis, and you can optionally require multi-factor authentication with a hardware device before versioned objects can be deleted. The new functionality introduces a version ID for each object within a versioning-enabled bucket, which identifies a specific version of that object, while a normal GET request simply returns the most recent version available.
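The version-ID semantics can be sketched with a toy in-memory model (this is purely illustrative pseudocode of the behavior described above, not the S3 API or an AWS client; the class and ID format are made up):

```python
import itertools

class VersionedBucket:
    """Toy model of a versioning-enabled bucket: every PUT stores a
    full new copy under a fresh version ID, and a plain GET returns
    the most recent version."""

    def __init__(self):
        self._versions = {}            # key -> list of (version_id, body)
        self._ids = itertools.count(1)

    def put(self, key, body):
        """Store a new version of the object; returns its version ID."""
        version_id = f"v{next(self._ids)}"
        self._versions.setdefault(key, []).append((version_id, body))
        return version_id

    def get(self, key, version_id=None):
        """GET the latest version, or a specific older one by ID."""
        history = self._versions[key]
        if version_id is None:
            return history[-1][1]      # most recent copy
        return dict(history)[version_id]

bucket = VersionedBucket()
v1 = bucket.put("notes.txt", "first draft")
v2 = bucket.put("notes.txt", "second draft")
print(bucket.get("notes.txt"))        # second draft
print(bucket.get("notes.txt", v1))    # first draft
```

The key point is that old versions are never overwritten; they just stop being the default answer to a GET.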
It’s great to see Amazon continue innovating and listening to their customers. They are definitely not the only players in the cloud storage/computing game, but they seem to be doing a pretty good job of staying ahead of the curve.