Comment by srkiNZ 6 hours ago

Great! I agree 100%. We have to assume a "reliable network" within a "boundary" (i.e. a computer's CPU, memory, buses, etc.). Distributed systems (from which these rules are taken) are specifically systems where anything within one of these "boundaries" is considered a "single node" and treated the same, whether it's a NIC, a kernel module/driver, a user-space process, or anything else.

In our case, if we were to assume (for example) that the NIC would de-duplicate messages for us, anyone writing the producer/sender or a user-space program for the receiver would need to know that the NIC was doing this (or risk having messages dropped for failing to include a unique id).
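
To make that concrete, here's a minimal sketch (in Python, with made-up names like Message, seen_ids and handle; none of this is from any particular library) of what receiver-side de-duplication looks like when the producer attaches a unique id to every message:

    import uuid

    class Message:
        def __init__(self, payload):
            self.id = str(uuid.uuid4())  # unique id set by the producer
            self.payload = payload

    # Ids the receiver has already processed. In a real system this would
    # have to be bounded and survive restarts; an in-memory set is just
    # for illustration.
    seen_ids = set()

    def handle(msg: Message) -> None:
        if msg.id in seen_ids:
            return  # duplicate delivery: drop it silently
        seen_ids.add(msg.id)
        process(msg.payload)  # application logic runs at most once per id

    def process(payload) -> None:
        print("processing", payload)

If the producer doesn't include that id (because it believed the NIC or some other layer was handling duplicates), there is nothing for handle to key on, and duplicates flow straight into the application logic.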

This is a pedantic point, but I would strongly stress that the only reason these "delivery rules" are so popular (and evoke such a reaction) is the very large number of times that programmers misunderstand them.

Commonly, they assume that either:

* the network is reliable

* something else will guarantee "exactly once delivery" for me (when in fact nothing will; see the sketch below)
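
On that second point, a quick sketch of why duplicates are unavoidable: a producer that retries on a missing ack (the HOST/PORT and the ack protocol here are invented purely for illustration) gives you at-least-once delivery, and the receiver can't tell a retry from a new message without a unique id:

    import socket
    import uuid

    HOST, PORT = "127.0.0.1", 9999  # hypothetical receiver address

    def send_with_retry(payload: bytes, retries: int = 3) -> None:
        msg_id = uuid.uuid4().bytes  # unique id so the receiver can de-duplicate
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(0.5)
        for _ in range(retries):
            sock.sendto(msg_id + payload, (HOST, PORT))
            try:
                ack, _ = sock.recvfrom(16)
                if ack == msg_id:
                    return  # delivered at least once
            except socket.timeout:
                # Ack lost OR message lost: the sender cannot tell which,
                # so the only safe move is to resend.
                continue
        raise RuntimeError("gave up after retries")

When the ack is lost but the original message got through, the resend produces a duplicate; when the message itself was lost, the resend is necessary. The sender can never distinguish the two cases, which is exactly why no layer below you can conjure "exactly once delivery" on its own.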