Regarding my last post: I sent an email to Nicholas Allen asking for his advice, and his latest blog post confirms my belief that it would be a bad idea to do encoder-level compression if we are doing message-level security. What I have tried, and it seems to work pretty well, is to use message inspectors to do the compression/decompression, since, per the documentation, they execute just as the message is generated on the sending side (before it is secured or encoded), and last on the receiving side (after the decoding and decryption take place).

My only concern is that there doesn't appear to be any standard way to represent compressed data in SOAP. So, if our application requires compression in order to function efficiently over the WAN, we must necessarily give up some interoperability. It would presumably not be too difficult to re-implement this compressor/decompressor on other SOAP stacks, but I'm not sure how extensible those are on the whole. One would presume that, because SOAP is by its very nature very extensible, any SOAP stack would also have to be very extensible, but I've been burned by that kind of wishful thinking before.
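The reason the inspector placement works is the ordering: compression has to run while the body is still redundant plaintext, and decompression has to run after decryption has restored it. A minimal sketch of that pipeline, in Python purely for illustration (in WCF the inspector would perform the compress/decompress steps, and the XOR "cipher" below is only a stand-in for real message-level security):

```python
import zlib

KEY = b"demo-key"

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Stand-in for message-level encryption: XOR keystream (NOT real crypto)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def send(body: bytes) -> bytes:
    # "Inspector" runs first: compress the still-redundant plaintext ...
    compressed = zlib.compress(body)
    # ... then message-level security is applied to the (small) result.
    return xor_cipher(compressed, KEY)

def receive(wire: bytes) -> bytes:
    # Receiving side is the mirror image: decrypt first, decompress last.
    return zlib.decompress(xor_cipher(wire, KEY))

body = b"<Order><Item>widget</Item></Order>" * 50
wire = send(body)
assert receive(wire) == body   # round-trips cleanly
assert len(wire) < len(body)   # and the wire form is smaller
```

Reversing the order (security first, then compression) would hand the compressor high-entropy ciphertext, which is exactly the problem with doing it at the encoder level.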
I work on a pretty large enterprise WCF application, and even with the binary encoding, the messages we end up sending back and forth creep up to sizes that are dangerously large for slow internet links. This is becoming quite a problem.
One suggestion I keep seeing for reducing data size is setting EmitDefaultValue to false on data members that will often contain their default value. However, MSDN seems to caution against overusing this setting (perhaps for performance reasons, or versioning?), and to my mind it lets the vagaries of the transport bleed too far into the application.
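The saving here comes from simply not serializing members that hold their default value. A rough analogue of the `[DataMember(EmitDefaultValue = false)]` behavior, sketched in Python with a hypothetical contract:

```python
from dataclasses import dataclass, fields, asdict

@dataclass
class OrderStatus:          # hypothetical data contract
    order_id: int = 0
    note: str = ""
    retry_count: int = 0

def serialize_sparse(obj) -> dict:
    """Emit only members that differ from their default
    (an EmitDefaultValue=false analogue)."""
    out = {}
    for f in fields(obj):
        value = getattr(obj, f.name)
        if value != f.default:
            out[f.name] = value
    return out

full = asdict(OrderStatus(order_id=7))          # all three members
sparse = serialize_sparse(OrderStatus(order_id=7))
assert sparse == {"order_id": 7}                # default-valued members dropped
assert len(sparse) < len(full)
```

The versioning worry is visible even in this toy: once defaults are elided, the receiver can no longer distinguish "sender sent the default" from "sender never set this member," which matters if a default changes between contract versions.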
So, short of re-engineering the data that these services expect and return, it seems we should be considering compression options. My question is: what is the best-practice route here? Apparently the SDK contains some sort of GZip encoder sample, which seemed compelling until we realized that, most of the time, our clients want to apply security at the message level. With the amount of randomness injected by that encryption, I doubt what's left of the message will compress well, or at least it won't give us the wonderful savings of compressing human-readable text, or some binary encoding thereof.
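That intuition is easy to confirm: redundant XML compresses dramatically, while anything that looks like ciphertext barely compresses at all. A quick sketch (using `os.urandom` as a stand-in for an encrypted body, and zlib in place of the sample's GZip encoder):

```python
import os
import zlib

# Repetitive, human-readable XML: the kind of body an encoder sees pre-security.
xml_body = b"<Customer><Name>Jane Doe</Name><Balance>0</Balance></Customer>" * 100

# Stand-in for the same body after message-level encryption: near-uniform bytes.
ciphertext = os.urandom(len(xml_body))

plain_ratio = len(zlib.compress(xml_body)) / len(xml_body)
cipher_ratio = len(zlib.compress(ciphertext)) / len(ciphertext)

assert plain_ratio < 0.1    # redundant XML shrinks by an order of magnitude or more
assert cipher_ratio > 0.95  # random-looking data does not shrink (often it grows)
```

So an encoder-level compressor sitting below message security spends CPU for essentially no wire savings, which is why compressing before security matters.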
So it would seem that what we need is a transform applied to the message body before security and encoding. I have just begun to investigate WS-Compression and the various WCF implementations of it that exist, and I wonder if these hold the answer. Even lovelier, to my mind, than being able to add compression to a binding would be conditional compression, whereby we could make some decision about a message to see whether we would benefit from compressing it, or whether we would just be adding overhead.
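The conditional decision itself can be cheap: skip messages below some size cutoff, and fall back to the original bytes whenever compression fails to pay. A minimal sketch of that heuristic (the threshold value is an arbitrary assumption, not a recommendation):

```python
import zlib

COMPRESS_THRESHOLD = 1024  # hypothetical cutoff: tiny messages aren't worth it

def maybe_compress(body: bytes) -> tuple[bool, bytes]:
    """Compress only when it actually pays off.
    The boolean flag tells the receiver which path was taken."""
    if len(body) < COMPRESS_THRESHOLD:
        return False, body
    compressed = zlib.compress(body)
    if len(compressed) >= len(body):   # incompressible payload: don't add overhead
        return False, body
    return True, compressed

small = b"<Ack/>"
big = b"<Row><Val>42</Val></Row>" * 500

assert maybe_compress(small) == (False, small)   # small message passes through
flagged, payload = maybe_compress(big)
assert flagged and len(payload) < len(big)       # big one shrinks, and is flagged
```

In a SOAP setting the flag would presumably travel as a header on the message, so the receiving side knows whether a given body needs decompressing; that per-message signal is exactly what a fixed binding-level compressor can't give you.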
But I wonder if I am barking up the wrong tree. What is the best-practice method for dealing with the inflated data spewed forth by the data contract serializer?
On further research, it seems WS-Compression isn't a real standard, but merely the wishful thinking of some sample writers. It seems like there should be some allowance for compression in the WS-* realm, though, as all of this stuff is very verbose.