kokada 9 days ago

I am also curious. I keep seeing folks in this thread saying that this is possible, but I can't seem to find anything by searching on Google/DDG.

There is this document from the Golang devs themselves[1], which says:

> Reads of memory locations larger than a single machine word are encouraged but not required to meet the same semantics as word-sized memory locations, observing a single allowed write w. For performance reasons, implementations may instead treat larger operations as a set of individual machine-word-sized operations in an unspecified order. This means that races on multiword data structures can lead to inconsistent values not corresponding to a single write. When the values depend on the consistency of internal (pointer, length) or (pointer, type) pairs, as can be the case for interface values, maps, slices, and strings in most Go implementations, such races can in turn lead to arbitrary memory corruption.

Fair, this matches what everyone is saying in this thread. But I am still curious to see this in practice.

[1]: https://go.dev/ref/mem

Edit: I found this example from Dave Cheney: https://dave.cheney.net/2014/06/27/ice-cream-makers-and-data.... I am curious whether I can replicate this in, e.g., Java.

Edit 2: I can definitely replicate the same bug in Scala, so it is not like Go is unique for the example in that blog post.

tsimionescu 9 days ago

> Edit 2: I can definitely replicate the same bug in Scala, so it is not like Go is unique for the example in that blog post.

Could you share some details on the program and the execution environment? Per my understanding of the Java memory model, a JVM should not experience this problem. Reads and writes to references (and to all 32-bit values) are explicitly guaranteed to be atomic, even if they are not declared volatile.
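
For completeness: the one place where the JLS does allow a value to be read half-updated is non-volatile long and double fields, whose writes may legally be split into two 32-bit halves (JLS 17.7). References are never torn, which is why the (pointer, type) corruption described in the Go document has no JVM equivalent. A rough, untested sketch of what probing for that would look like (class and variable names are mine):

    object LongTearingSketch {
      // Volatile stop flag: the writer thread must reliably see the shutdown signal.
      @volatile private var stop = false

      // Deliberately NOT volatile: JLS 17.7 allows a write to a non-volatile Long
      // to be performed as two separate 32-bit writes, so a racing read may, in
      // principle, observe half of one write and half of another.
      private var value: Long = 0L

      def main(args: Array[String]): Unit = {
        val writer = new Thread(() => {
          while (!stop) {
            value = 0L  // all zero bits
            value = -1L // all one bits
          }
        })
        writer.start()

        var i = 0
        while (i < 100000000 && !stop) {
          val observed = value
          // Any value other than 0 or -1 means the read was torn.
          if (observed != 0L && observed != -1L) {
            println(f"torn read: 0x$observed%016x")
            stop = true
          }
          i += 1
        }
        stop = true
        writer.join()
        // On a typical 64-bit JVM this prints nothing: the tearing is permitted
        // by the spec, not guaranteed to be observable.
      }
    }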

  • kokada 7 days ago

        import java.util.concurrent.Executors
        import scala.concurrent.{ExecutionContext, ExecutionContextExecutor, Future}
    
        trait IceCreamMaker {
          def hello(): Unit
        }
    
        class Ben(name: String) extends IceCreamMaker {
          override def hello(): Unit = {
            println(s"Ben says, 'Hello my name is $name'")
          }
        }
        class Jerry(name: String) extends IceCreamMaker {
          override def hello(): Unit = {
            println(s"Jerry says, 'Hello my name is $name'")
          }
        }
    
        object Main {
          implicit val context: ExecutionContextExecutor = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(2))
    
          def main(args: Array[String]): Unit = {
            val ben = new Ben("Ben")
            val jerry = new Ben("jerry")
            var maker: IceCreamMaker = ben
            def loop0: Future[Future[Future[Future[Any]]]] = {
              maker = ben
              Future { loop1 }
            }
            def loop1: Future[Future[Future[Any]]] = {
              maker = jerry
              Future { loop0 }
            }
            Future { loop0 }
            while (true) {
              maker.hello()
            }
          }
        }
    
    
    Here it is. I am not saying that the JVM shouldn't have a stronger memory model; after thinking about it for a while, I think the issue is the program itself. But feel free to try to understand it yourself.
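
    To spell out what I think the issue with the program is: val jerry = new Ben("jerry") constructs a Ben whose name happens to be "jerry", not a Jerry, so output like "Ben says, 'Hello my name is jerry'" is exactly what the code asks for rather than a torn read. On top of that, maker is a plain var shared across threads with no synchronization, so the reading loop is not even guaranteed to observe the writers' updates. A rough, untested sketch of a corrected Main (reusing the trait, classes and imports above; I also collapsed the nested Future return types to Future[Any] and made maker a @volatile field):

        object Main {
          implicit val context: ExecutionContextExecutor =
            ExecutionContext.fromExecutor(Executors.newFixedThreadPool(2))

          // A @volatile field, so writes from the pool threads are guaranteed
          // to become visible to the reading loop in main.
          @volatile private var maker: IceCreamMaker = _

          def main(args: Array[String]): Unit = {
            val ben = new Ben("Ben")
            val jerry = new Jerry("Jerry") // was: new Ben("jerry")
            maker = ben
            def loop0: Future[Any] = { maker = ben; Future { loop1 } }
            def loop1: Future[Any] = { maker = jerry; Future { loop0 } }
            Future { loop0 }
            while (true) {
              maker.hello()
            }
          }
        }

    With those changes the reference read in the while loop is atomic and visible, so maker always points at either the Ben or the Jerry, and the printed type and name always match.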