FloatArtifact 10 months ago

Part of me dies every time I see projects not integrating robust restoring and backup systems.

  • dewey 10 months ago

    Providing robust restore and backup systems for a platform that allows running any kind of workload is almost impossible. You'd have to provide database backups for all versions of all databases, correct file backups for the volumes, etc.

    It feels much more dangerous to have such a system in place and provide a false sense of security. Users know best what kind of data they need to back up, where they want to back it up, whether it needs to be encrypted or not, whether it needs to run daily or weekly, etc.

    • sgarland 10 months ago

      ZFS. Snapshot the entire filesystem, ship it off somewhere. Done. At worst, Postgres is slow to start up from the snapshot because it thinks it’s recovering from a crash.
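
      Roughly, and assuming a dataset named tank/pgdata (names here are illustrative), something like:

          # take an atomic, point-in-time snapshot
          zfs snapshot tank/pgdata@nightly

          # ship it to another machine over SSH
          zfs send tank/pgdata@nightly | ssh backup-host zfs receive backuppool/pgdata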

      • GauntletWizard 10 months ago

        Postgres is recovering from a crash if it's reading from a ZFS snapshot. It probably had several of its database writes succeed that it wasn't certain of, and others fail that it also wasn't certain of, and those might not have been "in order". That's why WAL files exist, and why it needs to fully replay them.

      • prmoustache 10 months ago

        Most projects/products can survive a few seconds of downtime to have a clean snapshot.
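
        For example, a few seconds of downtime buys you a snapshot that Postgres won't even treat as a crash (container and dataset names are placeholders):

            docker stop my-postgres          # shut down cleanly, flushing all writes
            zfs snapshot tank/pgdata@clean   # instant, atomic
            docker start my-postgres         # back up within seconds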

      • dewey 10 months ago

        Classic HN reply that’s very disconnected from reality. Most people don’t run ZFS; most people using these tools are using them to self-host their apps because it’s cheaper than a managed cloud service. Usually that's on a dedicated or VPS server where, by default, you run stock Ubuntu and no niche filesystem.

    • sally_glance 10 months ago

      A viable strategy, but it requires an experienced Linux/Unix admin and quite a bit of planning and setup effort.

      There are a lot of non-obvious gotchas with ZFS, and a lot of knobs to turn to make it do what you want. Anecdotally, a coworker of mine set it up on his development machine back when Ubuntu was heavily promoting it for default installs. It worked well until one day his machine started randomly freezing for minutes, multiple times a day... He traced the issue back to an improper snapshotting setup, then spent a couple of days trying to fix it before going back to ext4.

      For the Postgres data use case in particular, I would be wary of interactions and would probably require a lot of testing before introducing it... Though it seems at least some people are having success with it (not exactly plug-and-play or a cheap setup though): https://lackofimagination.org/2022/04/our-experience-with-po...

      • sgarland 10 months ago

        I think you meant to reply to me.

        There are a ton of ZFS knobs, yes, but you don’t need most of them to have a safe and performant setup. Optimal, no, but good enough.

        It’s been well-tested with DBs for years; Percona in particular is quite fond of it, with many employees writing blog posts on their experiences.
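
        For what it's worth, the handful of knobs that usually matter for Postgres boil down to a few dataset properties (commonly recommended starting points, not gospel; the dataset name is illustrative):

            zfs set recordsize=8k tank/pgdata    # match Postgres' 8 KB page size
            zfs set compression=lz4 tank/pgdata  # cheap CPU-wise, usually a net win
            zfs set atime=off tank/pgdata        # skip access-time metadata writes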

    • cweagans 10 months ago

      I don't think that's true. I opened https://github.com/dokku/dokku/issues/5008 a while back and Jose didn't seem to disagree.

      Addressing your argument directly though: you know that if you spin up a Postgres database for your app, you need to dump the database to disk to back it up (or, if you wanna get fancy, you can do a delta from the last backup plus a periodic full backup). Anytime a Postgres database exists, you know the steps you need to take to back up that service.

      Same with persistent file storage on disk: if you have a directory of files, you need a snapshot of all of those files.
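
      A minimal sketch of both steps (container name, database, and storage path are made up here):

          # dump the database to disk
          docker exec my-postgres pg_dump -U postgres mydb | gzip > mydb.sql.gz

          # snapshot the persistent files
          tar -czf files-$(date +%F).tar.gz /var/lib/dokku/data/storage/my-app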

      Each _service_ can know how to back itself up. If you tell a Dokku _app_ to back itself up, what you really mean is that each _service_ attached to that app should do whatever it needs to do to create a backup. Then, dokku only needs to collate all of the various backup outputs, include a copy of the git repository that drives the app, tar/zstd it, and write it to disk.
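
      Sketched out, that collation step could look something like this (a hypothetical sketch of the idea, not an existing Dokku feature; the export subcommand is from the dokku-postgres plugin, and the paths are illustrative):

          mkdir -p backup
          dokku postgres:export my-db > backup/db.dump           # each service dumps itself
          git clone --mirror /home/dokku/my-app backup/repo.git  # the git repo that drives the app
          tar -c backup | zstd > my-app-$(date +%F).tar.zst      # collate and compress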

      As you pointed out, the user should probably be able to control the backup cadence, where those backups are shipped off to, the retention period, whether or not they are encrypted, etc., but the actual mechanics of performing a backup aren't exactly rocket science. All of the user-configurable values can have reasonable defaults too -- they can/should Just Work (tm). There's value in having that work OOTB even if the backups are just being written to disk on the actual Dokku machine somewhere.

      • dewey 10 months ago

        I landed on your issue too back when I was building my Dokku setup. I don't disagree that it would be nice, but I do disagree with the parent poster making it sound like an essential feature whose absence makes the project any less valuable.

    • FloatArtifact 10 months ago

      It's worth discussing backup/restore at the filesystem abstraction layer (such as ZFS) versus at the application layer.

  • trog 10 months ago

    My VPS provider just lets me take image snapshots of the whole machine so I can roll back to a point in time. It's a little slower and less flexible than application- or component-level backups, but overall I don't even think about backup and restore now because I know it's handled there.

  • Aeolun 10 months ago

    None of my hobby projects across 15 years or so have ever needed backups or restoring. I can agree it would be nice to have, but it’s a far cry from necessary.

    • doublerabbit 10 months ago

      So when your drives finally die, you're just going to shrug and wave the data goodbye?

      If you have your code stashed somewhere else, then that's already a backup.

  • prmoustache 10 months ago

    FWIW, backups can be run from a separate Docker container that mounts the same volume as the main app and connects to the database, if any, so it's not like backups can't be taken care of. That's how it is often done in the Kubernetes world.
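
    A bare-bones version of that pattern (image, network, volume names, and credentials are all placeholders):

        docker run --rm --network my-app-net \
          -v my-app-data:/data:ro -v /backups:/backups \
          -e PGPASSWORD=secret \
          postgres:16 sh -c \
          'pg_dump -h db -U app mydb > /backups/db.sql && tar -czf /backups/files.tar.gz /data'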

mimischi 10 months ago

Been using dokku for probably 8 years now? (Or something close to that; it used to be written entirely in bash!) Hosting private stuff on it, and an application at $oldplace probably also still runs on this solid setup. Highly recommended, and the devs are great sports!

rgrieselhuber 10 months ago

I've kept a list of these tools that I've been meaning to check out. In scope, do they cover securing the instance? Is there any automation for creating networks of instances?

  • dewey 10 months ago

    > In scope, do they cover securing the instance?

    Most of the ones I checked don't, but a recent Ubuntu version is perfectly fine to use as-is.

    > Is there any automation for creating networks of instances?

    Not that I'm aware of; it would also somewhat defeat the purpose of these tools, which are supposed to be simple. (Dokku is "just" a shell script.)

oulipo 10 months ago

What would be the best between Dokku / Dokploy / Coolify?

  • dewey 10 months ago

    Depends on what you prefer. I went with Dokku because, for me, it was important that I could run docker-compose-based apps alongside my "Dokku-managed" apps. I didn't want to convert my existing apps (Sonarr, Radarr, etc.) into Dokku apps; I only use Dokku for my web projects.

    I also wanted to be able to remove Dokku if needed and everything would continue to run as before. Both of these work very well with Dokku.

  • Aeolun 10 months ago

    I tried many, but eventually kept running on Portainer.

    Best part is that I can just dump whole docker-compose.yml files in and it just works.
