Tuesday, April 24, 2007
Industry Analysts and Storage Area Networks
How come industry analysts aren't asking storage area network (SAN) vendors why they aren't building compression functionality into their products?
Yes, I understand that it is not in SAN vendors' best interest to help enterprises use their technology efficiently. Building in support for compression could cut potential sales in half, or it might just as easily cause enterprises to buy even more storage, since they could afford to use it even more inefficiently.
I also understand that there are third-party products such as StoreWiz that do this independently of the SAN vendor, but that doesn't change the fact that compression really should be built in rather than sold as a separate product.
I understand that many vendors have done file-level deduplication, but in all reality we need more capability than simply calculating a hash to detect duplicate files. What would it take to deduplicate within a file or across files? For example, consider the average utility that stores the statements it sends to customers in various ECM products: much of the information on any given statement is the same for every customer who receives one, with only a small portion unique to each recipient. Deduplicating that shared content feels like a big opportunity to add value.
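To make that concrete, here is a minimal sketch of what sub-file deduplication could look like, using content-defined chunking with a cheap rolling hash. All the parameters, and the in-memory dictionary standing in for a real chunk store, are illustrative assumptions on my part, not how any vendor actually does it:

```python
import hashlib

# Illustrative parameters only; real systems tune these carefully.
MASK = 0x3FF        # boundary when hash & MASK == 0 (~1 KB average chunks)
MIN_CHUNK = 256     # suppress degenerate tiny chunks
MAX_CHUNK = 4096    # force a boundary eventually

def chunks(data: bytes):
    """Split data at content-defined boundaries so identical regions
    produce identical chunks even if they shift position in the file."""
    start, h = 0, 0
    for i, byte in enumerate(data):
        # Shift-and-add hash; the 32-bit mask means only the most
        # recent ~32 bytes influence it, so boundaries are local.
        h = ((h << 1) + byte) & 0xFFFFFFFF
        size = i - start + 1
        if size >= MIN_CHUNK and ((h & MASK) == 0 or size >= MAX_CHUNK):
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]

store = {}  # chunk digest -> chunk bytes; stands in for the real chunk store

def dedup_write(data: bytes):
    """Record a file as a list of chunk digests; shared chunks land once."""
    recipe = []
    for chunk in chunks(data):
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # only unseen chunks consume space
        recipe.append(digest)
    return recipe

# Two 'statements' sharing boilerplate and differing only per customer.
boilerplate = b"Rates, terms, and marketing copy on every statement. " * 250
a = dedup_write(boilerplate + b"Customer A owes $42.")
b = dedup_write(boilerplate + b"Customer B owes $7.")
print(len(store), "unique chunks stored for", len(a) + len(b), "chunk references")
```

A whole-file hash would see those two statements as entirely distinct; chunking at content-defined boundaries lets the shared boilerplate be stored once while only each customer's figures consume new space.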
Maybe I got it twisted and industry analysts don't believe storage vendors should play a part here, and that this particular example is something ECM vendors should solve instead. It would be interesting to hear the perspective of both camps.
I know folks from Sun may have yet another perspective, since Solaris supports compression at the file-system level. Windows NTFS does the same thing, for that matter. Does anyone know which is more efficient in terms of compression?
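I don't have numbers on that last question, but it is easy to measure against your own data. The sketch below doesn't run what either file system actually implements (ZFS uses LZJB by default, NTFS uses LZNT1); it just feeds a sample file through two general-purpose codecs from Python's standard library, as a stand-in for the kind of benchmark that would answer it:

```python
import bz2
import sys
import zlib

def ratios(path):
    """Print compressed size as a fraction of original for a few codecs.
    Neither codec matches ZFS's LZJB or NTFS's LZNT1; this only shows
    how one would measure 'efficiency' on representative data."""
    data = open(path, "rb").read()
    results = {
        "zlib, level 1 (fast)": zlib.compress(data, 1),
        "zlib, level 9 (tight)": zlib.compress(data, 9),
        "bz2 (slow, strong)": bz2.compress(data),
    }
    for name, blob in results.items():
        print(f"{name}: {len(blob) / len(data):.1%} of original size")

if __name__ == "__main__":
    ratios(sys.argv[1])  # e.g. python compare.py statement_sample.dat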