
How do you measure data?

Jeff7181

Lifer
There's some debate among my peers over how data is (or should be) measured, especially as it relates to enterprise storage, where there are significant space efficiencies to be gained from data reduction technologies like compression, deduplication, and thin provisioning (either by not allocating physical space until logical space demands it, or in the form of thin snapshots).

If you only measure physical space, it tells only a small portion of the story for datasets that benefit greatly from compression and deduplication, and for environments where snapshots are used heavily.

If you measure logical space, it makes the number appear larger than it actually is. Just because I take a thin snapshot doesn't mean I have to do anything extra to manage that data, yet it doubles the amount of logical space.

What is the industry standard? Maybe just reporting both? I ask because if I report the physical space consumed in the storage environment that I manage, it's just under 600 TB. If I report the logical space, it's roughly 5 PB.
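To make the gap concrete, here's a minimal sketch (in Python, with the numbers from this post, treating 5 PB as 5,000 decimal TB) of the overall data reduction ratio those two views imply:

```python
# Numbers from the post above (decimal TB assumed).
physical_tb = 600      # physical space actually consumed on the arrays
logical_tb = 5_000     # logical space presented (~5 PB)

# Combined effect of compression, dedupe, and thin snapshots.
reduction_ratio = logical_tb / physical_tb
print(f"Data reduction ratio: {reduction_ratio:.1f}:1")  # → 8.3:1
```

Reporting only one of the two numbers hides that ~8:1 factor, which is part of why vendors and admins often report both alongside the ratio itself.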

I thought of another way to measure it: logical space used and allocated to hosts. Snapshots taken and held for recovery purposes wouldn't count unless they're mounted and used for recovery or for a test/development environment. While this seems like the most reasonable way to measure it (to me), I'm not sure it's a standard method, so it couldn't be used to compare the environment to others in the industry.
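That "logical space allocated to hosts" idea could be operationalized roughly like this (a sketch only; the volume names, sizes, and the `host_visible` flag are hypothetical, not from any particular array's API):

```python
# Hypothetical volume inventory: logical size plus whether the object
# is actually presented to a host (unmounted recovery snapshots are not).
volumes = [
    {"name": "prod-db",       "logical_tb": 120, "host_visible": True},
    {"name": "prod-db-snap1", "logical_tb": 120, "host_visible": False},
    {"name": "dev-clone",     "logical_tb": 120, "host_visible": True},
]

# Count only space presented to hosts; a snapshot starts counting
# the moment it's mounted for recovery or test/dev use.
host_visible_tb = sum(v["logical_tb"] for v in volumes if v["host_visible"])
print(f"Logical space allocated to hosts: {host_visible_tb} TB")  # → 240 TB
```

Under this metric, mounting `prod-db-snap1` for a restore test would immediately add its 120 TB back into the total, which matches the intent described above.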

What do you think?
 