dewey 4 hours ago

I'd also suggest people take a look at Dokku; it's a very mature project with a similar scope, and it was discussed here a few weeks ago:

https://news.ycombinator.com/item?id=41358020

I wrote up my own experiences too (https://blog.notmyhostna.me/posts/selfhosting-with-dokku-and...) and I can only recommend it. It is ~3 commands to set up an app, and one push to deploy after that.
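
For reference, a minimal Dokku flow looks roughly like this (app and host names hypothetical):

  # on the server
  dokku apps:create myapp
  # locally: add the Dokku git remote, then every deploy is a push
  git remote add dokku dokku@my-server:myapp
  git push dokku main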

  • FloatArtifact 3 hours ago

    Part of me dies every time I see projects not integrating robust restore and backup systems.

    • dewey 2 hours ago

      Providing robust restore and backup systems for a system that allows running any kind of workload is almost impossible. You'd have to provide database backups for all versions of all databases, correct file backups for the volumes, etc.

      It feels much more dangerous to have such a system in place and provide a false sense of security. Users know best what kind of data they need to back up, where they want to back it up, whether it needs to be encrypted or not, whether it needs to be daily or weekly, etc.

      • sgarland an hour ago

        ZFS. Snapshot the entire filesystem, ship it off somewhere. Done. At worst, Postgres is slow to startup from the snapshot because it thinks it’s recovering from a crash.
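
        A hedged sketch of that flow, assuming a dataset named tank/data and a reachable backup host:

          # recursive snapshot, then ship it off-box
          zfs snapshot -r tank/data@nightly
          zfs send -R tank/data@nightly | ssh backup-host zfs receive -F backup/data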

        • GauntletWizard 29 minutes ago

          Postgres is recovering from a crash if it's reading from a ZFS snapshot. It probably did have several of its database writes succeed that it wasn't certain of, and others fail that it also wasn't certain of, and those might not have been "in order". That's why WAL files exist, and why it needs to fully replay them.

    • trog an hour ago

      My VPS provider just lets me take image snapshots of the whole machine so I can roll back to a point in time. It's a little slower and less flexible than application- or component-level backups, but overall I don't even think about backup and restore now, because I know it's handled there.

    • Aeolun 2 hours ago

      None of my hobby projects across 15 years or so have ever needed backups or restoring. I can agree it would be nice to have, but it’s a far cry from necessary.

  • mimischi 3 hours ago

    Been using Dokku for probably 8 years now (or something close to that; it used to be written entirely in bash!). Hosting private stuff on it, and an application at $oldplace probably also still runs on this solid setup. Highly recommended, and the devs are great sports!

  • rgrieselhuber 3 hours ago

    I've kept a list of these tools that I've been meaning to check out. In scope, do they cover securing the instance? Is there any automation for creating networks of instances?

    • dewey 3 hours ago

      > In scope, do they cover securing the instance?

      Most of these I checked don't, but a recent Ubuntu version is perfectly fine to use as-is.

      > Is there any automation for creating networks of instances?

      Not that I'm aware of, and it would also somewhat defeat the purpose of these tools, which are supposed to be simple. (Dokku is "just" a shell script.)

  • oulipo 2 hours ago

    What would be the best between Dokku / Dokploy / Coolify?

    • dewey 2 hours ago

      Depends on what you prefer. I went with Dokku, as it was important to me that I could run docker-compose-based apps alongside my "Dokku managed" apps. I didn't want to convert my existing apps (Sonarr, Radarr, etc.) into Dokku apps, and I only use Dokku for my web projects.

      I also wanted to be able to remove Dokku if needed and everything would continue to run as before. Both of these work very well with Dokku.

pqdbr 9 hours ago

This looks really nice, congrats!

1) I see Kamal was an inspiration; care to explain how this differs from it? I'm still rocking custom Ansible playbooks, but I was planning on checking out Kamal after version 2 is released soon (I think alongside Rails 8).

2) I see databases are in your roadmap, and that's great.

One feature that IMHO would be a game changer for tools like this (and is lacking even in paid services like Hatchbox.io, which is overall great) is streaming replication of databases.

Even for side projects, a periodic SQL dump stored in S3 is generally not enough nowadays, and any project that gains traction will need to implement some sort of streaming backup, like Litestream (for SQLite) or Barman with streaming backup (for Postgres).
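
For the SQLite case, Litestream is pleasantly small to set up; a sketch of a litestream.yml, with hypothetical paths and bucket names:

  # litestream.yml: continuously replicate a local SQLite db to S3
  # (S3 credentials supplied via environment variables)
  dbs:
    - path: /var/lib/myapp/app.db
      replicas:
        - url: s3://my-backup-bucket/app
  # run with: litestream replicate -config litestream.yml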

If I may suggest this feature: having this tool provision a Barman server on a different VPS, and automating the process of having Postgres stream to it, would be a game changer.

One Barman server can actually accommodate multiple database backups, so N projects could do streaming backup to a single Barman server.

Of course, there would need to be a way to monitor whether the streaming is working correctly, and maybe even help the user with the restoration process. But that effectively brings RPO down to near zero (so virtually no data loss) and can even allow point-in-time restoration.

  • mightymoud 6 hours ago

    1) Kamal is more geared towards having one VPS per project - it's made for big projects, really. They even show in the demo that the db is hosted on its own VPS. Which is great! But not for me or Sidekick's target audience. Kamal v2 will support multiple projects on a single VPS, afaik.

    2) Yes yes yes! I really like Litestream. Backup is also one of those critical but annoying things that Sidekick is meant to take care of for you. I'll look into Barman. My vision is to have one command for the most popular db types that uses stubs to configure everything the right way. Need to sort out docker-compose support first though...

  • indigodaddy 7 hours ago

    Pretty sure that fly.io, for example, supports Litestream; I remember seeing a Fly doc related to Litestream when I was looking a few days ago for my own project. It would also make sense that they do, given that Litestream's creator is currently Fly's VP of Product (I believe).

    • ctvo 4 hours ago

      Yes, fly.io is associated with Litestream, but... how is that related to the above thread or this tool?

      • indigodaddy 2 hours ago

        Quoted from the parent comment:

        “One feature that IMHO would be game changer for tools like this (and are lacking even in paid services like Hatchbox.io, which is overall great) is streaming replication of databases.”

        And then I mentioned that I believe fly.io has litestream support. I think it’s fairly relevant to the comment/thread.

4star3star 8 hours ago

I like what I'm seeing, though I'm not sure I have a use case. On a VPS, I'll typically run a cloudflared container and configure a Cloudflare tunnel to that VPS. Then, I can expose any port and point it to a subdomain I configure in the CF dashboard. This gives https for free. I can expose services in containers or anything else running on the VPS.
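
For reference, the tunnel setup is roughly this (tunnel name and hostnames hypothetical):

  cloudflared tunnel create my-vps
  cloudflared tunnel route dns my-vps app.example.com
  # ~/.cloudflared/config.yml then maps hostnames to local ports:
  #   ingress:
  #     - hostname: app.example.com
  #       service: http://localhost:8080
  #     - service: http_status:404
  cloudflared tunnel run my-vps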

I'll concede there's probably a little more hands-on work doing things this way, but I do like having a good grip on how things work rather than leaning on a convenient tool. Maybe you could convince me Sidekick has more advantages?

  • skinner927 6 hours ago

    I must be an old simpleton, but why get cloudflare involved? You can get https for free with nginx and letsencrypt.
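
    The whole dance is something like this (domain hypothetical):

      sudo apt install certbot python3-certbot-nginx
      sudo certbot --nginx -d example.com
      # certbot sets up a systemd timer that handles renewals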

    • mightymoud 5 hours ago

      It's a tunnel, so the VPS can only be reached through Cloudflare. It's not only for https, but more for security and lockdown.

      • mediumsmart 4 hours ago

        Excellent, and if Cloudflare thinks your IP is Iranian, it's going to get a really secure lockdown.

        • nine_k 3 hours ago

          More seriously, it also helps when you're a target of a DDoS.

          It's always a balancing act between outsourcing your heavy lifting, and having to trust that party and depend on them.

  • hu3 5 hours ago

    Nice setup.

    But isn't this a little too tied to Cloudflare?

    Caddy as a reverse proxy on that VPS would also give us free HTTPS. The downside is less security, because there's no CF tunneling.

    • aborsy 4 hours ago

      You could put Authentik in front. It does the Cloudflare stuff on the VPS itself.

  • SahAssar 4 hours ago

    Are you also making sure that nothing on the VPS is actually listening on outside ports? A classic mistake is to set up something similar to what you're describing but not validate that the services aren't listening on 0.0.0.0.
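
    A quick way to check, plus the Docker-side fix (publish on loopback only):

      # list sockets listening on all interfaces
      ss -tlnp | grep '0.0.0.0\|\[::\]'
      # bind a published container port to loopback so only the tunnel reaches it
      docker run -d -p 127.0.0.1:8080:80 nginx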

    I'd also not want to have cloudflare as an extra company to trust, point of failure and configuration to manage.

  • mightymoud 5 hours ago

    Interesting setup....

    How do you run the containers on your VPS tho? You could still use Sidekick for that!

    I think your setup is one step up in security from Sidekick, nonetheless. A lot more work, it seems, too.

  • tacone 5 hours ago

    Interesting! How do you connect via ssh? Do you just leave the port open or is there any trick you'd like to share?

  • renewiltord 2 hours ago

    This is pretty cool. I did not know I could do this. Currently, I have:

    1. nginx + letsencrypt

    2. forward based on host + path to the appropriate local docker (see the sketch after this list)

    3. run each thing in the docker container

    4. put Cloudflare in front in proxy DNS mode and with caching enabled
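
    For step 2, the nginx side is just a proxy block per host (names hypothetical):

      server {
          server_name app.example.com;
          # certbot fills in the listen/ssl directives
          location / {
              proxy_pass http://127.0.0.1:8080;  # the app's container
              proxy_set_header Host $host;
          }
      }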

    Your thing is obviously better! Thank you.

    • jmpavlec 2 hours ago

      I used to run it the cloudflared way, as the other user described, but the tunnel often went offline without explanation for short periods of time, and the latency was so-so in my testing. I run it more like you do now and haven't had any stability problems since dropping the cloudflared setup. I use cloudflared for a less critical app on my own hardware, and that also goes up and down from time to time.

      • renewiltord 2 hours ago

        Oh, thank you for that experience. This way has been entirely fire-and-forget (except for application-layer issues), so I wouldn't want to change things then. The infra layer is pretty simple this way. I lost a 10-year server to bitrot (Hetzner wanted to sunset it, and I had such a bespoke config I forgot how to admin it over the 10 years), so I'm trying to keep things simple so it will survive decades.

LVB 9 hours ago

This looks good, and I’m a target user in this space.

One thing I’ve noticed is the prevalence of Docker for this type of tool, or the larger self-managed PaaS tools. I totally get it, and it makes sense. I’m just slow to adapt. I’ve been so used to Go binary deployments for so long. But I also don’t really like tweaking Caddyfiles and futzing with systemd unit files, even though the pattern is familiar to me now. Been waffling on this for quite a while…

  • kokanee 8 hours ago

    I'm a waffler on this as well, increasingly leaning away from containers lately. I can recall one time in my pre-Docker career when I was affected by a bug related to software developed on macOS running differently on CentOS in production. But I have spent untold hours trying to figure out various Docker-related quirks.

    If you legitimately need to run your software on multiple OSes in production, by all means, containerize it. But in 15 years I have never had a need to do that. I have a rock solid bash script that deploys and daemonizes an executable on a linux box, takes like 2 seconds to run, and saves me hours and hours of Dockery.
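
    A sketch of that kind of script, with hypothetical host and unit names (assumes passwordless sudo for the deploy user):

      #!/usr/bin/env bash
      set -euo pipefail
      # ship the new binary, swap it in, restart the unit
      scp ./myapp deploy@myhost:/opt/myapp/myapp.new
      ssh deploy@myhost 'mv /opt/myapp/myapp.new /opt/myapp/myapp \
        && sudo systemctl restart myapp'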

    • bantunes 8 hours ago

      I don't understand how running a single command to start a single container (or a stack of them with compose), which then gets all the requirements in something like a tarball and just runs, is seen as more complicated than running random binaries, setting values in php.ini, setting up mysql or postgres, daemonizing said binaries, and making sure libraries and the like are in order.

      • hiAndrewQuinn 6 hours ago

        You're going to be setting all that stuff up either way, though. It'll either be in a Dockerfile, or in a Vagrantfile (or an Ansible playbook, or a shell script, ...). But past a certain point you can't really get away from all that.

        So I think it comes down to personal preference. This is going to sound a bit silly, but to me, running things in VMs feels like living in an apartment. Containers feel more like living out of a hotel room.

        I know how to maintain an apartment, more or less. I've been living in them my whole life. I know what kinds of things I generally should and should not mess with. I'm not averse to hotels by any means, but if I'm going to spend a lot of time in a place, I will pick the apartment, where I can put all of my cumulative apartment-dwelling hours to good use.

        • kokanee 5 hours ago

          Yes, thank you for answering on my behalf. To underscore this, the decision is whether to set up all of your dependencies and configurations with a tool like bash, or to set it all up within Docker, which involves setting up Docker itself, which sometimes involves setting up (and paying for) things like registries and orchestration tools.

          I might tweak the apartment metaphor because I think it's generous to imply that, like a hotel, Docker does everything for you. Maybe Dockerless development is like living in an apartment and working on a boat, while using Docker is like living and working on a houseboat.

          There is one thing I definitely prefer Docker for, and that's running images that were created by someone else, when little to no configuration is required. For example, running Postgres locally can be nicer with Docker than without, especially if you need multiple Postgres versions. I use this workflow for proofs of concepts, trials, and the like.
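
          E.g., two Postgres versions side by side, with hypothetical names and ports:

            docker run -d --name pg15 -p 5415:5432 -e POSTGRES_PASSWORD=dev postgres:15
            docker run -d --name pg16 -p 5416:5432 -e POSTGRES_PASSWORD=dev postgres:16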

      • bluehatbrit 8 hours ago

        I suppose like anything, it's a preference based on where the majority of your experience is, and what you're using it for. If you're running things you've written and it's all done the same way, docker probably is just an extra step.

        I personally run a bunch of software I've written, as well as open source things. So for me docker makes everything significantly easier, and saves me installing a lot of rubbish I don't understand well.

  • faangguyindia 6 hours ago

    Here's the thing: we've had code running on a VPS in the cloud for a decade without any problems.

    When we ran it on Kubernetes, without touching it, it broke itself within 3 years.

    Docker is a fantastic development tool; I do see real value in it.

    But Kubernetes and the whole ecosystem? You must apply updates or your stuff will break one day.

    Currently I'm using docker with docker-compose and GCR; it makes things very simple and easy to develop, and it's also self-documenting.

  • mikkelam 7 hours ago

    There are tools like Firecracker that significantly reduce docker overhead: https://firecracker-microvm.github.io/

    I believe fly.io uses that. Not sure if OP’s tool does that

    • mightymoud 5 hours ago

      No, Sidekick doesn't use Firecracker. I know fly.io is built around it, yes. They do that so they can put your app to sleep (basically shutting it down), then spin it up real quick when it gets a request. There's no place for this in the Sidekick vision.

    • indigodaddy 7 hours ago

      Was wondering the same; didn't see any mention of it on the GH page though, nor even in the roadmap.

singhrac 41 minutes ago

Any possibility you’d add support for a Mac Mini deployment? I think the extra complexity would be from changing the Docker images, but of course the devil is in the details. I just have a Mac Mini and it would be great to self-host some stuff.

  • brirec 15 minutes ago

    As someone who used to love hosting things on a Mac mini, have you tried installing Linux on it to use as a dedicated server? If you do, it should handle this just like any other platform you could install it on.

silasb 9 hours ago

Nice, I'm working in the same space as you (not open source, a personal project). We landed on the same solution: encoding the commands inside Go and distributing them via SSH.

I'm somewhat surprised not to see this more often. I'm guessing supporting multiple Linux versions could get unwieldy; I focused on Ubuntu as my target.

Differences that I see:

* I modeled mine on top of docker plugins (these get installed during the bootstrapping process)

* I built a custom plugin for deploying which leveraged https://github.com/Wowu/docker-rollout for zero-downtime deployments

Your solution looks much simpler than mine. I started off modeling mine on the fly.io CLI, which makes for much more verbose Go code. I'll likely continue to use mine, but for any future VPS I'll give this a try.

  • mightymoud 6 hours ago

    Hahah, seems like we went down the same rabbit hole. I also considered `docker-rollout` but decided to write my own script, heavily inspired by the docker-rollout source code btw. Just curious: why did you decide to go with docker plugins?

tegiddrone 4 hours ago

Looks nice! Something I'd want in front is some sort of basic app firewall like fail2ban or CrowdSec to ban vuln scanners and other intrusion attempts. That's a nice thing about Cloudflare, since they provide some of this protection.

bluehatbrit 8 hours ago

This is super nice, and I'm a big fan of the detailed readme with screenshots.

I'll definitely be trying it out, although I do have a pretty nice setup now which will be hard to pull away from. It's Ansible-driven: it lets me dump a compose file in a directory, along with backup and restore shell scripts, and deploys it out to my server (Hetzner dedicated, via the server auction).

It's really nice that this handles TLS/SSL; that was a real pain for me, as I've been using nginx and automating certbot wasn't the most fun in the world. This looks a lot easier on that front!

  • mightymoud 6 hours ago

    Sounds like you have a great setup. My vision is to make a setup like yours more accessible, really, without having to play with low-level config like Ansible. I think you should try replacing nginx with Traefik - it handles certs out of the box!
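
    A sketch of the Traefik side, assuming a Let's Encrypt resolver named "le" and an entrypoint named "web" on :80 (names hypothetical):

      # traefik.yml (static config): one cert resolver shared by all apps
      certificatesResolvers:
        le:
          acme:
            email: you@example.com
            storage: acme.json
            httpChallenge:
              entryPoint: web
      # then per app, a container label like:
      #   traefik.http.routers.myapp.tls.certresolver=le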

turtlebits 4 hours ago

What about this is highly available? On a single VPS?

Does this only support a single app?

Nice project, but the claims (production-ready? load balancing on a single server?) are a bit ridiculous.

  • closewith 3 hours ago

    In my experience, single apps on VPSes have far higher availability in practice than the majority of convoluted deployments.

  • dewey 4 hours ago

    High availability is overrated for most use cases, especially for side projects.

trey-jones 2 hours ago

"Wow, this really looks significantly better than my own CLI tools"

I'm going to have to look into this pterm thing.

Hexigonz 9 hours ago

Ohhhh, I like this. I really enjoy the flyctl CLI from Fly.io, which simplifies things in a similar manner, but it's platform-specific. Good work!

gf297 3 hours ago

What's the purpose of encrypting the env file with sops, when the age secret key is stored on the VPS? If someone has access to the encrypted env file, they will also have access to the secret key, and can decrypt it.
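
For context, the flow in question is roughly this (key locations hypothetical):

  # locally: encrypt with the server's age public key
  sops --encrypt --age age1<recipient-public-key> .env > .env.encrypted
  # on the VPS: sops finds the private key via SOPS_AGE_KEY_FILE and decrypts,
  # which is exactly the point: ciphertext and key end up on the same box
  sops --decrypt .env.encrypted > .env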

funshed 4 hours ago

Nice! You should probably explain what Traefik, sops, and age do. First time I've heard of sops; very handy!

AndrewCopeland 7 hours ago

It's a simple CLI in Go. It uses Docker. There is no k8s. It handles certs. Zero downtime.

I would love for it to support docker-compose, as some of my side projects need a library in Python but I like having my service be in Go, so I wrap the Python library in a super simple service.

Overall this is awesome, and I love the simplicity, with the world just full of serverless, AI, and a bunch of other "stuff". Paralysis through analysis is a real issue, and when you are just trying to create a service for yourself or an MVP, it can be a real hindrance.

I have been gravitating towards Taskfile to perform tasks similar to this. Godspeed to you, and keep up the great work.

  • mightymoud 6 hours ago

    Thanks man! I'm working on docker-compose support. I got it working locally, but the ergonomics are really hard to get right, cus compose files are so flexible. I was even considering using the `sidekick.yaml` file as the main config and turning that into a docker-compose file, similar to what fly.io does with fly.toml. But I wanna keep this Docker-centric... so yeah, I'm still doing more thinking around this.

johnklos 5 hours ago

"to self-host any app"

Docker != app. Perhaps it'd be more accurate to say, "to host any Docker container"?

Sn0wCoder 8 hours ago

This looks great. Just bookmarked it, and then had to double-check that I hadn't already bookmarked it a few weeks ago. Turns out I had bookmarked Caddy, which is similar but doesn't deploy the app, and I don't think it supports Docker. It was the auto cert handling that I was interested in and what had stuck out in my mind. I had certbot set up and never thought about it again, until my server needed to be rebuilt and I started researching. Good to go for a few months, but my hosting will be up in a year and I'm going to switch providers and upgrade my setup to 2+ gigs so I can run Docker reliably. Thanks for posting; this one just moved to the top of the list.

  • indigodaddy 7 hours ago

    In what sense would Caddy not support Docker? You can use Caddy on the host itself to proxy to a Docker container, and you could also run Caddy as a Docker container proxying to other Docker containers (the latter would just need an initial incoming iptables rule to the Caddy container, although Caddy might have instructions somewhere for a more elegant way than iptables to get connections to the Dockerized Caddy container, not sure).

joseferben 5 hours ago

this looks amazing!

i’m building https://www.plainweb.dev and i’m looking for the simplest way to deploy a plainweb/plainstack project.

looks like sidekick has the same spirit when it comes to simplicity.

in the plainstack docs i’ve been embracing fly.io, but reliability is an issue. and sqlite web apps (which is the core of plainstack) can’t have real zero downtime deployments, unless you count the proxy holding the pending request for 30 seconds while the fly machine is deployed.

i tried kamal but it felt like non-ruby and non-rails projects are second class citizens.

i was about to document deploying plainstack to dokku, but provisioning isn’t built-in.

my dream deployment tool would be dokku + provisioning & setup, sidekick looks very close to that.

definitely going to try this and maybe even have it in the blessed deploy path for plainstack if it works well!

aag 6 hours ago

This could be great for my projects, but I'm confused about one thing: why does it need to push to a Docker registry? The Dockerfile is local, and each image is built locally. Can't the images be stored purely locally? Perhaps I'm missing something obvious. Not using a registry would reduce the number of moving parts.

  • 3np 4 hours ago

    You can easily set up a Docker/CNCF registry[0] container running locally. It can be run either as a caching pull-through mirror for a public registry (letting you run public containers in an environment without internet access) or as a private registry for your own images (this use case). If you want both features, you currently need two instances. Securing it for public use is a bit less trivial, but for local use it's literally a 'run' or two.

    So you can do 'docker build -t localhost/whatever .' and then 'docker run localhost/whatever'. Also worth checking out podman to more easily run everything rootless.

    If all you need is to move images between hosts like you would files, you don't even need a registry (docker save/load).
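
    E.g.:

      # stream an image to the server with no registry at all
      docker save myapp:latest | ssh user@vps docker load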

    [0]: https://distribution.github.io/distribution/

  • mightymoud 6 hours ago

    Locally here means locally on your laptop, not locally on your VPS. Contrary to popular opinion, I believe your source code shouldn't be on your prod machine; a docker image is all you need. Lots of other projects push your code to the VPS to build the image there and then use it. I see no point in doing that...

    • sdf4j 4 hours ago

      The docker registry can be avoided by exporting/importing the docker image over ssh.

sigmonsays 4 hours ago

Tools like this are pretty sweet, but I would rather just run things myself.

docker-compose with a load balancer (traefik) is fairly straightforward and awesome. The TLS setup is nice, but I use a wildcard and just run certgen myself.

The main thing I think is missing is some sort of authentication or zero-trust system, maybe a VPN tunnel provisioner. Most services I self-host I do not want made public, due to security concerns.

achempion 8 hours ago

This looks amazing, congrats on the release! Really looking forward to the database hosting feature as well (and probably networking and mounting data dirs).

As a side note, any reason why you decided against using Docker in swarm mode, since it has all these features already built in?

  • mightymoud 6 hours ago

    Correct me if I'm wrong, but Docker Swarm mode is made to manage multi-node clusters. Sidekick is meant for a single VPS.

    • achempion 5 hours ago

      You can use docker swarm for just a single VPS.

        - install docker 
        - run docker swarm init
        - create yaml that describes your stack (similar to docker-compose)
        - run docker stack deploy
      
      That's basically it. It's my go-to solution when I need to run some service on a single VPS.

      If you want to just run a single container, you can also do this with `docker service create image:tag`
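
      A minimal stack file for that flow (names hypothetical):

        # stack.yml
        version: "3.8"
        services:
          web:
            image: myapp:latest
            ports:
              - "80:8080"
            deploy:
              replicas: 1
        # deploy with: docker stack deploy -c stack.yml myapp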

      • 3np 4 hours ago

        I thought Docker Swarm had been considered neglected to the point of being dead and without a future for a few years now. Is this impression incorrect/outdated?

        EDIT: So apparently what used to be known as "Docker Swarm" has been posthumously renamed "Swarm Classic"/"Classic Swarm" and is indeed dead, abandoned, and deprecated. The project currently known as "Docker Swarm" is a younger, completely different project which appears actively maintained. "Classic" still has roughly twice the GH stars and forks of the new one. I can't be the only one who's dismissed the latter, assuming it to be the former. Very confusing naming and branding; they would probably have way more users if they had not repurposed the name like this.

        https://github.com/docker-archive/classicswarm

        > Swarm Classic: a container clustering system. Not to be confused with Docker Swarm which is at https://github.com/docker/swarmkit

spelunker 8 hours ago

Looks great! I similarly got frustrated with the complexity of doing side-project ops stuff and messed around with Kamal, but this goes the extra mile by automatically setting up TLS as well. I'll give it a try!

InvOfSmallC 4 hours ago

Can I run more than one app on the same VPS with this solution? I currently run more than one app on a single VPS.

dvaun 8 hours ago

Awesome! Love that it's written in Go—I've recently tested the language for some use cases at work and find it great. I'll dive into your repo to see if I can learn anything new :)

hkon an hour ago

Have used CapRover. Good that more tools are entering this space.

Canada 7 hours ago

Very well presented, the README.md looks great.

jjkmk 9 hours ago

Looks really good, going to test it out.

devmor 8 hours ago

Wow this is super handy! I have paid tools that function like this for a couple of specific stacks but this seems like an amazing general purpose tool.

Considering the ease of setup the README purports, a few hours of dealing with this might save me a couple hundred bucks a month in service fees.

  • mightymoud 6 hours ago

    Glad you found this useful. Let me know if you have specific features in mind.

    • devmor 6 hours ago

      I didn't see anything in the readme about deploy hooks - do you have a feature that lets users run arbitrary commands around a deploy? I have common use cases both pre-traffic-switchover (e.g. database migrations) and post-traffic-switchover (e.g. resource caching, worker spinup).

      • mightymoud 6 hours ago

        Yup, deploy hooks are on my mind; I just didn't put them in the readme. Shouldn't be very hard to implement. Might do this first, before docker-compose support.

superkuh 9 hours ago

I don't know about you, but I find the single command $ sudo apt install $x to be much faster, to offer a wider range of software, and to be more reliable, less fragile, easier to network, and more secure when it comes to running applications on an Ubuntu VPS. The only thing the normal way of running applications is less good at (compared to this dependency-manager manager) is "zero downtime".

  • LVB 9 hours ago

    I’m not sure what you’re comparing that to. This project is about easily deploying your own app/side-project, which wouldn’t be available via apt.

    • superkuh 8 hours ago

      99% of what people run in docker is just normal applications.

      • indigodaddy 7 hours ago

        Not sure how true this statement is in general, but it's definitely not true for the use case the project describes, i.e. your own side project/app, which you obviously can't "apt install". Unless OP meant the supporting hosting/proxy infra like Apache/nginx, but yeah, that's what this project is trying to avoid/abstract away for the user.

        At the end of the day, if you use this tool, I guess all you'd need to worry about (given the tool is stable and works, obviously) would be apt upgrades of the OS, and even that you can automate; then just figure out your reboot strategy. For me, I don't even want to deal with that, so I happily use Fly.

        • mightymoud 6 hours ago

          Respect! Fly is an absolute beast and to me is best in class for sure!

  • mightymoud 6 hours ago

    I think this is just miscommunication - I meant a side project/application that you made yourself, not an application package you install on Ubuntu.