I'm arguing in my mind about the "right" way to go. You guys have some RF experience, so you'll at least understand my problem and perhaps help me see the best way forward!

I'm building a short-range (<100m) control system. I elected to use "class licensed" 433MHz stuff. There are very simple, inexpensive modules that claim ~200m range, are class-licensed and use ASK (Amplitude Shift Keying). Basically, when sending a "1" the xmitter runs; when sending a "0" the xmitter is off. So there's no need for separate transmitter keying, just squirt data. The receiver is basically a superhet followed by a detector and Schmitt trigger, so a digital stream comes out of the receiver that follows the xmitter. So far so good.

What nobody indicated is that the receiver has a hairy AGC that, in the absence of a signal, cranks the gain up until it's detecting noise, so before a transmission the output is basically white noise through a Schmitt trigger. This makes it really hard for my processor to make sense of the data it's expecting.

I added a moderate-length "sync" pulse before the transmission, and the receiving unit's code looks for an uninterrupted high lasting several milliseconds to help weed signal out from noise. This uncovered a further limitation of the RF modules - a pulse longer than about 20ms gets cropped off. The xmitter just shuts down, so no "loooong" transmissions are possible. The code whizzes around a short loop counting how many times it sees a "high" in a row; as soon as it gets a low, it bails out. Once it has enough highs in a row, it considers this a valid transmission. Unfortunately, the rest of the code takes up to about 12ms to run, so the maximum sync length I can safely detect is about 8ms. Because of the discrete sampling points, I am seeing a few "false starts" from noise.

My transmission is 16 bits: 4 bits of preamble, 4 bits of data, 4 more bits of fingerprint, 2 more data bits and 2 parity bits. I've not seen any false signals because the protocol itself is sufficiently robust - if any of the fingerprint/preamble bits are wrong, or the 6 data bits don't match the 2-bit checksum, the packet is discarded - so there's less than a 0.1% chance of random noise generating a valid packet on those few occasions when a burst of noise lasts long enough to look like a "start".

The problem that does arise, though, is that a false start causes the receiving end to receive an entire 16-bit word, then decode, parse and check it before deciding to discard it. That represents up to about 100ms - and if a valid packet starts anywhere during that period, it is lost.
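For reference, here's roughly what the receive side does at the moment. This is only a sketch of the idea, not my actual firmware: the hardware hooks (read_rx(), delay_us()), the bit period, the sync threshold, and the example preamble/fingerprint values and 2-bit checksum are all placeholders - the real protocol details differ.

```c
#include <stdint.h>
#include <stdbool.h>

#define BIT_TIME_US    4000u   /* placeholder bit period (~4ms per bit) */
#define SYNC_MIN_LOOPS  200u   /* "enough highs in a row" threshold     */

extern int read_rx(void);           /* hypothetical: raw receiver output (0/1) */
extern void delay_us(uint32_t us);  /* hypothetical: busy-wait delay           */

/* Spin in a tight loop until an uninterrupted run of highs is long enough
 * to count as the sync pulse. Gives up eventually so other code can run. */
static bool wait_for_sync(void)
{
    uint32_t highs = 0;
    for (uint32_t tries = 0; tries < 50000u; tries++) {
        if (read_rx()) {
            if (++highs >= SYNC_MIN_LOOPS)
                return true;         /* long enough: treat as sync */
        } else {
            highs = 0;               /* any low resets the count   */
        }
    }
    return false;
}

/* Clock in all 16 bits, then validate preamble, fingerprint and parity.
 * Layout assumed: [4 preamble][4 data][4 fingerprint][2 data][2 parity]. */
static bool receive_packet(uint8_t *data_out)
{
    uint16_t word = 0;
    for (int i = 0; i < 16; i++) {
        delay_us(BIT_TIME_US / 2u);          /* sample roughly mid-bit */
        word = (uint16_t)((word << 1) | (read_rx() ? 1u : 0u));
        delay_us(BIT_TIME_US / 2u);
    }

    uint8_t preamble    = (word >> 12) & 0x0Fu;
    uint8_t data_hi     = (word >>  8) & 0x0Fu;
    uint8_t fingerprint = (word >>  4) & 0x0Fu;
    uint8_t data_lo     = (word >>  2) & 0x03u;
    uint8_t parity      =  word        & 0x03u;

    if (preamble != 0x0Au || fingerprint != 0x05u)
        return false;                        /* fixed bits wrong: discard */

    uint8_t data = (uint8_t)((data_hi << 2) | data_lo);  /* 6 data bits */

    /* Example 2-bit checksum: count of set data bits, mod 4. */
    uint8_t sum = 0;
    for (int i = 0; i < 6; i++)
        sum += (data >> i) & 1u;
    if ((sum & 0x03u) != parity)
        return false;                        /* parity mismatch: discard */

    *data_out = data;
    return true;
}
```

The point is that once wait_for_sync() fires, receive_packet() is committed to the full 16-bit receive-and-check, which is where the ~100ms penalty on a false start comes from.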
So, a few ideas strike me.

1. Rather than receiving an entire 16 bits, re-arrange my word and send all the preamble/fingerprint bits first, reduced to 7 bits instead of 8, and check each bit as it is received. With 7 bits there is less than a 1% chance of noise generating the right fingerprint. Check the first bit and exit immediately if it's wrong - that cuts the wait to about 4ms in 50% of noise cases. If it's right, take the next bit: at 8ms we're at 75% certainty. Each extra bit halves the remaining chance of a false match, at a penalty of 4ms. Once all 7 bits are received, there's a 127/128 chance it's a valid packet. Then read 6 bits of data and 2 parity bits, check parity and discard if wrong. This reduces the wait time for most noise. (There's a rough sketch of this bit-by-bit early-exit idea at the end of the post.)

2. Forget the fingerprint bits entirely. Receive 6 data + 2 parity bits; if the parity doesn't match, discard. This reduces processing time for all packets but significantly increases the chance of noise being seen as valid data.

3. Like 2, but send the data+parity twice, making it 16 bits. After the first 8 bits, check the next 8 bit-by-bit and exit immediately on any mismatch. This gives a "valid packet" probability of close to 1023/1024, with a minimum time of only 8 bits, increasing to 16 bits as the probability increases.

4. Completely change the way I send data. Constantly send a low-bitrate stream, 1 bit per approx 20ms, with each received bit shifted into a shift register. The original 16-bit word with fingerprint bits is sent, and whenever the register holds a word whose fingerprint and checksum match, the action is taken. This means a single noise burst won't tie up the receiver, but each transmission would now take 320ms - that's the latency in response to pressing a button, and up to 640ms if a previous button was still being sent (regular status-update packets could be aborted to send a button press, to keep latency down as far as practical). There's a sketch of this shift-register approach at the end of the post too.

There may be other ideas. Nothing is cast in stone. What I have now works "pretty well", but I want as close to 100% as I can get, just because...
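To make option 1 (and the bit-by-bit part of option 3) concrete, here's a rough sketch of the early-exit fingerprint check. Same caveats as above: read_bit(), the 7-bit FINGERPRINT value and the example 2-bit checksum are placeholders, not the real protocol.

```c
#include <stdint.h>
#include <stdbool.h>

#define FINGERPRINT 0x5Au    /* example pattern; only the low 7 bits are used */

extern int read_bit(void);   /* hypothetical: wait one bit time, return 0/1   */

/* Option 1 sketch: 7 fingerprint bits first, checked as they arrive and
 * bailing out on the first mismatch, then 6 data bits + 2 parity bits. */
static bool receive_option1(uint8_t *data_out)
{
    for (int i = 6; i >= 0; i--) {
        int expected = (FINGERPRINT >> i) & 1;
        if (read_bit() != expected)
            return false;            /* noise costs ~4ms per bit, not ~100ms */
    }

    /* Fingerprint matched (127/128 odds it isn't noise): read the payload. */
    uint8_t data = 0, parity = 0;
    for (int i = 0; i < 6; i++)
        data = (uint8_t)((data << 1) | (read_bit() & 1));
    for (int i = 0; i < 2; i++)
        parity = (uint8_t)((parity << 1) | (read_bit() & 1));

    /* Example 2-bit checksum: count of set data bits, mod 4. */
    uint8_t sum = 0;
    for (int i = 0; i < 6; i++)
        sum += (data >> i) & 1u;
    if ((sum & 0x03u) != parity)
        return false;

    *data_out = data;
    return true;
}
```

The attraction is that noise normally costs one or two bit times (4-8ms) instead of a full ~100ms decode.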
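And here's roughly what I mean by option 4: every received bit gets shifted into a 16-bit register and the whole register is tested after every shift, so there's no separate "start" detection for noise to fool. Again, FIXED_MASK/FIXED_PATTERN, read_bit_20ms() and the checksum are made-up placeholders for the real word layout.

```c
#include <stdint.h>
#include <stdbool.h>

/* Fixed bit positions of the 16-bit word: preamble in bits 15..12 and
 * fingerprint in bits 7..4. Values are examples, not the real protocol. */
#define FIXED_MASK    0xF0F0u
#define FIXED_PATTERN 0xA050u

extern int read_bit_20ms(void);  /* hypothetical: next bit of the slow stream */

/* Example 2-bit checksum over the 6 data bits (bits 11..8 and 3..2). */
static bool checksum_ok(uint16_t word)
{
    uint8_t data   = (uint8_t)((((word >> 8) & 0x0Fu) << 2) | ((word >> 2) & 0x03u));
    uint8_t parity = (uint8_t)(word & 0x03u);
    uint8_t sum = 0;
    for (int i = 0; i < 6; i++)
        sum += (data >> i) & 1u;
    return (sum & 0x03u) == parity;
}

/* Option 4 sketch: shift every ~20ms bit into a register and test the whole
 * register each time, so a noise burst never locks the receiver up. */
static void rx_loop_option4(void (*on_packet)(uint8_t data))
{
    uint16_t shift = 0;
    for (;;) {
        shift = (uint16_t)((shift << 1) | (read_bit_20ms() & 1));

        if ((shift & FIXED_MASK) == FIXED_PATTERN && checksum_ok(shift)) {
            uint8_t data = (uint8_t)((((shift >> 8) & 0x0Fu) << 2) |
                                     ((shift >> 2) & 0x03u));
            on_packet(data);     /* a valid word has lined up: act on it */
            shift = 0;           /* don't re-trigger on the same word    */
        }
    }
}
```

Clearing the register after a hit stops the same word re-triggering on the next shift.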