realitydrift a day ago

A lot of the hardest bugs this year have felt less like something being technically broken and more like reality no longer lining up: async boundaries, floating-point drift, ordering guarantees. All places where meaning gets lost once systems get fast, parallel, and distributed. Once state stops being inspectable and replayable, debugging turns into archaeology rather than engineering.

call68 a day ago

Well, during an audit I noticed something odd: honestly, their web portal, both the public-facing site and the servers, was completely down, so there was nothing to be done; it seemed someone had already done the job for me. The question running through my head was: what happened here, and why was I brought in? I came to look for possible vulnerabilities, not to hunt for whoever did this. Anyway, in conversation he offered to track down whoever was responsible, which was not easy, but not impossible either.

gethly 3 days ago

Not a bug but rather an engineering oversight. It was not hard either, and it did not affect me since I caught it early, but it was one of those surprising moments worth mentioning.

I have a write-only table in MariaDB where the ordering of records matters. I realised that the database has no such thing as an append-only table that stores records in the order they are submitted. Every record has one or more indices, and it is these indices that dictate ordering, and only for the data they index. What I had overlooked is that when transaction A starts and then transaction B starts, transaction A may hold records with smaller keys, since it started sooner, yet transaction B may commit first with higher keys, leaving me with out-of-order entries. Whether this matters depends on the context, and in my case the context was readers constantly waiting for new records. If a reader reads records after transaction B commits but before transaction A commits, the reader will never see the new records from transaction A. I solved it by blocking readers based on the number of active transactions, with ordering taken into account.
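
To make the failure concrete, here is a minimal sketch (TypeScript, with hypothetical names like fetchRowsAbove and lowestActiveKey, not the code from the post): a reader that polls for keys above the last one it saw permanently skips transaction A's rows once B's higher keys move the cursor past them; gating reads on a watermark below which no transaction is still active avoids the gap.

    // Naive reader: if txn B (keys 11..20) commits before txn A
    // (keys 1..10), the cursor jumps to 20 and A's rows are never read.
    interface Row { key: number; payload: string; }

    async function naivePoll(
      fetchRowsAbove: (key: number) => Promise<Row[]>,
      cursor: number,
    ): Promise<number> {
      const rows = await fetchRowsAbove(cursor);
      // ...process rows...
      return rows.length > 0 ? rows[rows.length - 1].key : cursor;
    }

    // Watermark reader: only read keys below the smallest key any
    // still-active transaction could commit, so no gap can appear
    // behind the cursor.
    async function watermarkPoll(
      fetchRowsBetween: (lo: number, hi: number) => Promise<Row[]>,
      lowestActiveKey: () => Promise<number>,
      cursor: number,
    ): Promise<number> {
      const watermark = await lowestActiveKey();
      const rows = await fetchRowsBetween(cursor, watermark);
      // ...process rows...
      return rows.length > 0 ? rows[rows.length - 1].key : cursor;
    }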

I wrote about it in this blog post, in the "Event Log and proper ordering of events" section: https://gethly.com/blog/how-of-gethly/event-sourcing-right-w...

guntis_dev 4 days ago

Not exactly a bug, but I was given a company-written video player that receives a video stream, decodes it via the browser's WebCodecs API, and renders via WebGL. Users complained that video was laggy and often froze on their iPhones. My task was to make it perform better; using the browser's built-in player wasn't an option.

After profiling, I found two bottlenecks. First, converting frames to RGB was happening on the CPU and was quite costly, so I rendered the decoded YUV frames directly on the GPU, with no CPU-side conversion. Second, I moved all the player logic off the main thread, since our heavy UI was competing for the same resources.
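
For readers curious what rendering YUV directly on the GPU can look like, here is a minimal sketch, not the actual player code: copy the planes out of the WebCodecs VideoFrame, upload them as single-channel textures, and do the matrix conversion in a fragment shader. It assumes I420 frames, BT.601 limited-range coefficients, and tightly packed planes; real code must honor each plane's stride from the returned layout.

    // Hypothetical sketch: GPU-side YUV -> RGB for I420 frames.
    const frag = `
      precision mediump float;
      varying vec2 v_uv;
      uniform sampler2D u_y, u_u, u_v;
      void main() {
        // BT.601 limited-range conversion; adjust for your stream.
        float y = 1.164 * (texture2D(u_y, v_uv).r - 16.0 / 255.0);
        float u = texture2D(u_u, v_uv).r - 0.5;
        float v = texture2D(u_v, v_uv).r - 0.5;
        gl_FragColor = vec4(y + 1.596 * v,
                            y - 0.392 * u - 0.813 * v,
                            y + 2.017 * u,
                            1.0);
      }`;

    // Upload one plane as a single-channel LUMINANCE texture.
    function uploadPlane(gl: WebGLRenderingContext, tex: WebGLTexture,
                         data: Uint8Array, w: number, h: number): void {
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, w, h, 0,
                    gl.LUMINANCE, gl.UNSIGNED_BYTE, data);
    }

    // Per frame: copy the planes out, upload, draw (draw call elided).
    async function uploadFrame(gl: WebGLRenderingContext, frame: VideoFrame,
                               y: WebGLTexture, u: WebGLTexture,
                               v: WebGLTexture): Promise<void> {
      const buf = new Uint8Array(frame.allocationSize());
      const planes = await frame.copyTo(buf); // one PlaneLayout per plane
      const w = frame.codedWidth, h = frame.codedHeight;
      uploadPlane(gl, y, buf.subarray(planes[0].offset), w, h);
      uploadPlane(gl, u, buf.subarray(planes[1].offset), w / 2, h / 2);
      uploadPlane(gl, v, buf.subarray(planes[2].offset), w / 2, h / 2);
      frame.close(); // release the decoder's buffer promptly
    }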

The main-thread issue was that I was iterating through the frame buffer many times per second to select the appropriate frame for rendering. When heavy UI animations ran, the main thread would block and the iteration would finish late; by then the target frame's timestamp had passed, so it was skipped and only the next frame was drawn, creating visible stutter.
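
For illustration, the selection logic after the move might look like the sketch below (hypothetical names, not the actual code); run inside a Worker, main-thread jank can no longer delay the scan past a frame's deadline.

    // Picks the newest buffered frame whose timestamp has been
    // reached and closes the ones skipped past.
    function pickFrame(buffer: VideoFrame[], playbackTimeUs: number): VideoFrame | null {
      let chosen: VideoFrame | null = null;
      while (buffer.length > 0 && buffer[0].timestamp <= playbackTimeUs) {
        chosen?.close();   // release the frame we are skipping past
        chosen = buffer.shift()!;
      }
      return chosen;       // caller renders it, then closes it
    }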

Agent_Builder 4 days ago

While building GTWY, we realized stack traces stop being useful once workflows go async. So we designed things around step-level visibility and shared context instead.
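
A minimal illustration of the idea, not GTWY's actual API: wrap each step so its name, outcome, and timing land in a per-run log, which then carries the causal chain that an async stack trace loses.

    interface StepRecord { runId: string; step: string; ok: boolean; ms: number; }

    // The log, not the stack, reconstructs causality across awaits.
    async function runStep<T>(runId: string, step: string,
                              log: StepRecord[],
                              fn: () => Promise<T>): Promise<T> {
      const start = Date.now();
      try {
        const out = await fn();
        log.push({ runId, step, ok: true, ms: Date.now() - start });
        return out;
      } catch (err) {
        log.push({ runId, step, ok: false, ms: Date.now() - start });
        throw err;
      }
    }

    // Usage: every await point becomes a named, queryable step, e.g.
    // const doc = await runStep(id, "fetch", log, () => fetchDoc(url));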

  • varshith17 4 days ago

    Async stack traces are a nightmare. You lose the causality chain completely.

    We ran into a similar issue with 'Shared Context.' We tried to sync the context between an x86 server and an ARM edge node, but because of floating-point drift, the 'Context' itself ended up slightly different on each machine.

    Step-level visibility is great, but did you have to implement any strict serialization for that shared context to keep it consistent?
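
    For concreteness, the kind of thing I mean by "strict serialization", as a sketch rather than anyone's production code: compute each value once, encode the float bit-exactly with a fixed byte order, and treat those bytes as the source of truth, so neither architecture re-derives (and drifts) the value.

        // The same bits produce the same hex string on x86 and ARM;
        // drift only appears when each node recomputes independently.
        function encodeF64(x: number): string {
          const view = new DataView(new ArrayBuffer(8));
          view.setFloat64(0, x, true); // explicit little-endian
          let hex = "";
          for (let i = 0; i < 8; i++) {
            hex += view.getUint8(i).toString(16).padStart(2, "0");
          }
          return hex;
        }

        function decodeF64(hex: string): number {
          const view = new DataView(new ArrayBuffer(8));
          for (let i = 0; i < 8; i++) {
            view.setUint8(i, parseInt(hex.slice(i * 2, i * 2 + 2), 16));
          }
          return view.getFloat64(0, true);
        }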