Comment by floxy
So when will we be able to just run general relativity numerical simulations on our desktop machines? So that you could set up Observer A at one point, Observer B at another point, specify a mass distribution, etc., then just crunch the numbers to see what each observer would see/measure as time evolves for each of them. The differential equations seem straightforward enough(?).

Is the possibility of singularities at the center of a black hole the hard part? What if you just simulated something at 99.99% of the density needed to form a black hole?

I suppose you'd need a 4-dimensional array to hold the simulation (three space coordinates plus a time coordinate)? Is it that we just don't have enough RAM and storage in consumer machines yet? If your simulation used 1,000 points in every dimension, that would be 1e12 points. With 10 tensor components at each point, each stored as a 64-bit double, the simulation takes up ~80 TB. Or is it that we don't have enough processing speed? Or are there still some philosophical issues that need to be decided before you can program up the simulator?

How many lines of code is a numerical general relativity solver using something like Euler's method? Is the core of a naive version less than, say, 500 lines of C? I can see an optimized CUDA version being significantly larger, of course.