This one had no problem with it: https://www.quipqiup.com/
I test all my posts there first, but sometimes, like yesterday, I fail to catch problems when proofing the solutions. Yesterday's quote was too long for http://www.rinkworks.com/brainfood/p/cryptmaker1.shtml, so I used my DIY Octave script, which inserts line breaks without keeping words (delimited by spaces) together, and that's how yesterday's post got mangled. It's a bug I should fix; see the sketch below. (The real fix is to output to a text file with line feeds replaced by <p><lf><cr>.)
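Something like this word-aware version is what I have in mind. It's a minimal sketch, not my actual script: the name wrap_words, the 60-column default, and the "<p>\n\r" separator are all placeholders.

% Wrap text by breaking only at spaces, so words stay intact.
function out = wrap_words (txt, width)
  if (nargin < 2)
    width = 60;                      % assumed column limit
  endif
  words = strsplit (txt, " ");       % words are delimited by spaces
  line = "";
  out = "";
  for k = 1:numel (words)
    if (isempty (line))
      cand = words{k};
    else
      cand = [line " " words{k}];    % try appending the next word
    endif
    if (length (cand) <= width || isempty (line))
      line = cand;                   % fits (or a single over-long word)
    else
      out = [out line "<p>\n\r"];    % flush; line feed becomes <p><lf><cr>
      line = words{k};               % start a new line with this word
    endif
  endfor
  out = [out line];                  % last line, no trailing break
endfunction

Then something like fid = fopen ("quote.txt", "w"); fputs (fid, wrap_words (quote)); fclose (fid); writes the wrapped text to a file.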
I will try quipqiup, but I am also thinking of contacting the developers of Claude (Anthropic) and ChatGPT (OpenAI) to warn them of these problems. Good AI systems should make easy work of cryptograms.
I didn't mention it, but I had the same problem with Copilot, Microsoft's AI. It was completely confused by today's cryptogram and tried to fool me by serving up the answer to the MLK quote from earlier in the week.