From what I understand, ideally stego would be used in conjunction with encryption.
First, you would encrypt your message, then you would use stego to hide it.
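To make the encrypt-then-hide order concrete, here's a minimal sketch: a toy XOR stream cipher (standing in for a real cipher like AES-GCM or ChaCha20, which you'd use in practice) followed by classic LSB embedding into a cover buffer. All the function names here are my own illustration, not from any real stego tool:

```python
import hashlib
import os

def xor_stream_encrypt(key: bytes, message: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream.
    Illustration only; use a vetted AEAD cipher for real work."""
    nonce = os.urandom(16)
    stream = b""
    counter = 0
    while len(stream) < len(message):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    ct = bytes(m ^ s for m, s in zip(message, stream))
    return nonce + ct

def xor_stream_decrypt(key: bytes, blob: bytes) -> bytes:
    """Regenerate the same keystream from the prepended nonce and XOR back."""
    nonce, ct = blob[:16], blob[16:]
    stream = b""
    counter = 0
    while len(stream) < len(ct):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(c ^ s for c, s in zip(ct, stream))

def lsb_embed(cover: bytearray, payload: bytes) -> bytearray:
    """Hide each payload bit in the least significant bit of one cover byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return stego

def lsb_extract(stego: bytearray, nbytes: int) -> bytes:
    """Read the LSBs back out and reassemble the payload bytes."""
    out = bytearray()
    for j in range(nbytes):
        byte = 0
        for i in range(8):
            byte |= (stego[j * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)
```

Usage: encrypt first, then embed the ciphertext; the receiver extracts and decrypts in the reverse order.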
If the stego is good, it would be a computationally intractable problem[2] for your adversary to determine, with accuracy meaningfully better than 50% (i.e., better than random guessing), whether there was indeed a message hidden within the data they were analyzing.
That said, I'm not sure how practical an application like this would be for stego. It does not "whiten" the data it tries to hide, so unless the data is already whitened, it could stand out like a sore thumb under steganalysis. And how would you propose actually using this?
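The whitening point is easy to see with a quick entropy measurement: plaintext has obvious byte-frequency structure, while encrypted (whitened) data looks close to uniform at 8 bits per byte. This is a rough sketch, not a real steganalysis test (real steganalysis uses far more sensitive statistics):

```python
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (max 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# English-like text: low entropy, a steganalyst would spot the structure.
text = b"the quick brown fox jumps over the lazy dog " * 100
# Whitened (random-looking) data: entropy near the 8-bit maximum.
whitened = os.urandom(4096)
```

If you embed un-whitened data, that statistical gap is exactly what gives the game away.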
This does present some intriguing possibilities, however, like maybe having Alice and Bob share a tweaked version of an OCR library and having Alice generate random images until her encrypted message has been "encoded" in such a way as to be recognizable by the tweaked OCR library that she shares with Bob. The tweaking of the library's character recognition parameters could be a sort of pre-shared key, and would not be available to Eve (the adversary).
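That generate-until-it-decodes idea is really rejection sampling against a keyed recognizer. Here's a toy analogue of it (my own sketch, not an actual OCR library): the "tweaked recognizer" is an HMAC keyed with the pre-shared secret, Alice keeps generating random blobs until one "recognizes" as her next message byte, and Bob decodes with the same key. Eve, lacking the key, can't run the recognizer at all:

```python
import hashlib
import hmac
import os

def decode_chunk(key: bytes, candidate: bytes) -> int:
    """The 'tweaked recognizer': maps a candidate blob to an 8-bit symbol.
    Keyed with the pre-shared secret, so Eve cannot evaluate it."""
    digest = hmac.new(key, candidate, hashlib.sha256).digest()
    return digest[0]

def encode_chunk(key: bytes, symbol: int) -> bytes:
    """Alice: generate random candidates until one decodes to `symbol`.
    Expected cost is ~256 tries per byte, so this only scales to tiny chunks."""
    while True:
        candidate = os.urandom(32)
        if decode_chunk(key, candidate) == symbol:
            return candidate

def encode(key: bytes, message: bytes) -> list:
    return [encode_chunk(key, b) for b in message]

def decode(key: bytes, candidates: list) -> bytes:
    return bytes(decode_chunk(key, c) for c in candidates)
```

Note the cost: rejection sampling is exponential in the chunk size, which is the same practical ceiling the OCR scheme would hit, since Alice has to regenerate images until one happens to decode correctly.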
[1] - This post comes from a hobbyist, not from any kind of security researcher, steganalyst, or cryptanalyst. So please take what I say with a grain of salt, and correct me if I'm wrong.
[2] - "Computationally intractable" means something different for each adversary, of course, which is one reason you need a good threat model.