I basically just lost interest, Firebase does everything I need ultimately. But I'd still argue it's the easiest way to deploy a side project!
They say it's because it uses Docker Swarm, I don't know if that's true.
One thing that's cool (to me) is that I once borked an update of the CapRover admin, and while I couldn't get into the dashboard, all my projects kept humming along without a care in the world because they were all in their own Docker containers. I thought that was pretty neat (all of CapRover runs inside of containers as well and seems to be independent of the apps it manages/monitors).
We have an official (paid, one-time fee) web UI here: https://pro.dokku.com/ . It's under active development.
There are others (ledokku, wharf, bullet, etc.) that folks in the community have developed/maintained, though I cannot comment on them as I haven't used them for more than 5 minutes.
> Own your own PAAS. Infrastructure at a fraction of the cost. Powered by Docker, you can install Dokku on any hardware. Use it on inexpensive cloud providers. Use the extra cash to buy a pony or feed kittens. You'll save tens of dollars a year on your dog photo sharing website.
I think one of the selling points of Heroku is that if the box you're running on goes down, your service will just be started on another box, traffic will be routed there, and you don't need to do anything.
1. Dokku setup on a new server
2. Deploy your app from a git repository
3. Restore your database from backup
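The three steps above can be sketched roughly as follows (app/service names and the pinned version are placeholders; the dokku-postgres plugin is shown as an example datastore):

```shell
# 1. Dokku setup on a new server (pin the version you actually run)
wget -NP . https://dokku.com/install/v0.30.0/bootstrap.sh
sudo DOKKU_TAG=v0.30.0 bash bootstrap.sh

# 2. Recreate the app, then deploy from your git repository (run on your workstation)
dokku apps:create myapp
git remote add dokku dokku@new-server.example.com:myapp
git push dokku main

# 3. Restore the database from backup
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git
dokku postgres:create myapp-db
dokku postgres:import myapp-db < myapp-db.dump
dokku postgres:link myapp-db myapp
```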
One would think it will get better in newer versions.
Haven't tried it in the last 2 or 3 years though.
* Possibly multiple times. Instances on degraded hosts have a tendency to get stuck: you have to stop them, then stop again to force-stop, and then sometimes even that doesn't work and you have to just wait.
Source: have received lots of "EC2 Instance Retirement" emails
the topic here is ease of use.
Comparing Heroku to VMs and correctly configured ASGs is fine for me as a sysadmin, but they’re not targeting the same simplicity.
I do see that it shouts 'EXPERIMENTAL!', so I guess it's 'experimental, but official'.
The warning there is more that if the scheduler plugins land in the core (which I hope to do some day), the official usage may change.
For instance, I'd love to abstract our ingress plugin such that it plays nicely with Kubernetes and Nomad (Nomad in particular, since there is no notion of ingress there at all), but that may mean that the ingress-controlling functionality in the Kubernetes plugin changes significantly.
Dokku _also_ has no notion of autoscaling at the moment, though both Kubernetes and Nomad have autoscaling tooling. It would be great to expose this officially somehow vs the plugin-specific method that is exposed in Kubernetes.
Note that at least the Kubernetes plugin is in active usage - even in at least two commercial products being actively sold.
The catch you mentioned has a few caveats:
- You can host the data for the install on a mounted volume, and replace the underlying instance at will (if you're using something like AWS or GCP).
- We support alternative schedulers (Kubernetes and Nomad, with others like Compose, Lambda, and Swarm coming soon). In the case of using an alternate scheduler, your apps won't go down if the build server goes down.
- Since it is self-hosted, your Dokku install's uptime is pegged to your infrastructure's uptime. So yeah, if you are installing it on a Raspberry PI whose uptime is decided by a light switch that is next to your kitchen and you accidentally switch it off every once in a while due to not being able to see the switch at night (very specific example, I know), then yeah, you'll be down for a little while.
I may have read your comment wrong but it would be wholly unfair to expect Dokku to provide everything and the kitchen sink gratis.
I've seen tons of "Heroku alternatives" pop up over the past couple months, but they all seem to have (rather large, IMO) downsides of "it does A and B like Heroku, but not X, Y, and Z" (where again, IMO, X, Y, and Z are typically rather core features like redundancy/failover). I know everyone uses services for different reasons, but the general appeal of Heroku that I've seen has been the basically-zero-devops-ever-necessary-for-anything approach they've taken.
(I am co-founder)
While in some cases it is a quite thin layer - some commands just proxy along to the underlying Docker command + some flags - I'd love to hear more about places where Dokku itself wasn't useful for debugging your deployed system. Please feel more than free to reach out via Discord/IRC/Slack (all three are synced) with more info if filing an issue is a big reach. I'm more than happy to help folks debug live/file issues as a result, as it makes the project much better.
Dokku supports a number of methods to deploy apps:
- Docker images: via `git:from-image`
- Tarballs (extracted into a Heroku-like env): via `git:from-archive`
- Dockerfile: autodetected using our builder-dockerfile plugin (or selectable via `builder:set`)
- Heroku Buildpacks: autodetected using our builder-herokuish plugin (or selectable via `builder:set`)
- Cloud Native Buildpacks: autodetected using our builder-pack plugin (or selectable via `builder:set`)
Finally, I've also got two builders in the works:
- lambda: builds a lambda-compatible docker image (and an optional zipball of the code in case you want to deploy that on their official runtimes)
- compose: builds using docker-compose (for usage with a compose scheduler)
-  Build from a docker image: https://dokku.com/docs/deployment/methods/git/#initializing-an-app-repository-from-a-docker-image
-  Build from a tarball or zipball: https://dokku.com/docs/deployment/methods/git/#initializing-an-app-repository-from-an-archive-file
-  Dockerfile support: https://dokku.com/docs/deployment/builders/dockerfiles/
-  Heroku buildpack support: https://dokku.com/docs/deployment/builders/herokuish-buildpacks/
-  Cloud Native Buildpacks support: https://dokku.com/docs/deployment/builders/cloud-native-buildpacks/
-  Build via nix-build: https://github.com/jameysharp/dokku-builder-nix
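In CLI terms, the methods above look roughly like this (app and image names are made up for illustration):

```shell
# deploy directly from an existing Docker image
dokku git:from-image myapp nginx:1.25

# deploy from a tarball, extracted into a Heroku-like environment
dokku git:from-archive myapp https://example.com/releases/myapp.tar.gz

# builders are autodetected per app, but can be pinned explicitly
dokku builder:set myapp selected dockerfile   # or: herokuish, pack
```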
- configuration management as env vars
- reverse proxy routing traffic for a given hostname to the correct container, swapping to the new container upon deploy
- plugin for tls cert generation using letsencrypt
- zero downtime deploys - it starts and smoke tests a new container before taking down the old one on deploy
- plugins for running databases and connecting your apps to them
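For a flavor of what that list looks like day to day (app, hostname, and env var names here are illustrative):

```shell
# configuration management as env vars
dokku config:set myapp DATABASE_URL=postgres://... RAILS_ENV=production

# route a hostname to the app's container
dokku domains:add myapp www.example.com

# TLS certs via the letsencrypt plugin
sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
dokku letsencrypt:enable myapp

# databases via plugins, linked into the app as env vars
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git
dokku postgres:create myapp-db && dokku postgres:link myapp-db myapp
```

The zero-downtime piece needs no command at all: on `git push`, Dokku starts and health-checks the new container before retiring the old one.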
I'm interested in self-hosted AWS API-compatible solutions.
Parse is also interesting. In fact I discovered that there is a whole new world outside AWS and DO. It no longer makes sense for me to run weekend projects on AWS outright, that's really for large enterprise applications.
What I would love is to be able to deploy my docker images on my "own" cloud running on a dozen droplets/VPS around the world and bill my users at the same rate that AWS is charging. I figure I would be able to undercut AWS significantly with this method.
Storing 2TB on AWS was a price shock for me; it makes no sense when there are dedicated servers with a ton of storage and bandwidth that I could just use instead of being "serverless".
However, I find it to be a hall of mirrors, wondering if anybody on HN has been in my shoes and how they are handling it. I guess my biggest concerns:
- uptime availability (can I promise close to AWS?)
- security (how do i harden my box/selfcloud and have peace of mind?)
- IAM type of roles/permissions (how do i emulate something like IAM for my team?)
Like, the billing from AWS is getting to a point where I feel like we can use a bunch of droplets behind some "selfcloud" that manages all these things for me. I'm sure a solution exists, I just often have trouble recalling their names. Dokku makes it easy to remember because it sorta rhymes with Heroku.
> It no longer makes sense for me to run weekend projects on AWS outright, thats really for large enterprise applications.
depending on the type of site, cf workers is ridiculous value. single-button availability of your content in every corner of the world and you won't pay a dime
aren't cf workers very constrained? like your call has to finish within a second or two for it to make sense and I think AWS Lambda matched their price? I could be confused here.
they're offering a heroku experience.
`git push` deploys your code.
small edit: i added this in a dokku thread (which is amazing, btw) because there is zero setup time. just point to your github repo and it just works.
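For contrast, the Dokku version of that workflow is only slightly more than zero setup once the server exists (server hostname and app name are placeholders):

```shell
dokku apps:create myapp              # run on the server (or via ssh)
git remote add dokku dokku@my-server.example.com:myapp
git push dokku main                  # build and deploy happen server-side
```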
There is a tutorial for using it. It's linked to after you install it for the first time, and is available here. It goes through a few common tasks, though by no means is a "kitchen sink" tutorial.
In regards to your specific questions:
- I've answered this elsewhere, but your uptime is defined by your host uptime (and any mechanisms for backup/restore you have setup). Some hosting providers (AWS/GCP) provide migration of hosts that die, some have block storage (Azure/AWS/DO/GCP), while others (Rackspace) provide backups of specific directories. As with anything you run and maintain, very heavy YMMV here, but I'm happy to answer questions specific to your needs.
- Dokku runs very few/no persistent binaries (atm only an optional event listener that restarts apps if it detects a web container has changed its IP address), so the attack vectors are your app and anything else you are running that exposes ports externally (SSH is the big one). I defer to others on container security, but our default Herokuish (and Pack) images are built to run processes as non-root users, so that should provide _some_ level of isolation. Container security is a big space, so I don't want to write something here that will be outdated in three seconds or is just plain wrong. If you have concerns about specific things in the project, please get in touch.
- I've used metadataproxy to lock down IAM roles on a per-app basis (with a custom plugin injecting the correct container env var for each app). If you need to lock apps to specific users, there is the community dokku-acl plugin. I'm also working on team management support in Dokku Pro which will be a bit more familiar to users of Heroku.
- I know of a few users using our kubernetes plugin (automated via terraform) against mostly default Kubernetes clusters in Digitalocean and AWS. If the Dokku host dies, they just lose the ability to deploy, but can restore access fairly quickly. Everything else just kinda lives out there on their cluster.
-  Official tutorial: https://dokku.com/docs/deployment/application-deployment/
-  Ways to get in touch: https://dokku.com/docs/getting-started/where-to-get-help/
-  metadataproxy: https://github.com/lyft/metadataproxy
-  dokku-acl: https://github.com/dokku-community/dokku-acl
-  Dokku Pro: https://pro.dokku.com/
-  Kubernetes Scheduler plugin: https://github.com/dokku/dokku-scheduler-kubernetes
It is a way to get the same sort of developer-experience benefit as Dokku, but without Docker or containers, using plain UNIXy tools on a single Linux node. The above link explains how it works.
For those worried about a Single Point of Failure, you can't have your cake and eat it too. Docker and Dokku are a single point of failure when you build directly on them, I guess. Kubernetes mitigates that.
We're still here, keeping the tires on. I haven't personally tested K8s 1.24 with Workflow yet, but we are actively seeking maintainers on all of the cloud vendors.
If you find that something is broken and want to see it fixed, talk to me on Slack. I'm aware that Azure, at least, has broken its Blob Storage API since Workflow was originally published by Deis, and it hasn't quite been fixed yet.
Documented support for Google Cloud and AWS both exists as well, although I've heard one person question whether Google Cloud Storage will work out of the box, or if it still needs upgrades. We'd like to add DigitalOcean support too; I already know it basically works, we just need to add documentation and probably also give it a quick spit-shine and polish.
Dokku on the other hand is less heavyweight and does not require as much work for maintenance, since it does not have to be decentralized or protect against any SPOF. It's a wonder that more people do not want to use this. I've heard of at least one more Deis Workflow fork in the wild, possibly in better or worse states of maintenance than Team Hephy, but if you are looking to get off of Heroku then you could certainly do much worse.
We do need volunteers as we do not have time to maintain everything by ourselves, (come talk to us on the Slack if you're interested in joining the fun!)
Well, the docs did work (they're 100% down now...)
Keen to hear what you’re running.
Could you share how big the VPS you used is (cores/RAM) and what traffic you got to manage with that?
If you have one app, then dedicate a server to it and run it right on the operating system. Maybe have 2 and a load balancer so you can do prod deployments with no downtime. The point is to get Dokku, Docker, and all that abstraction between the network card and your app out of the way.
Telegraf - report the host/process metrics
OpenTracing - instrument the application itself with tracing, with some pre-built libs like https://github.com/opentracing-contrib/go-stdlib
Tempo - collect the traces
Loki - collect the logs
InfluxDB - collect the metrics
Grafana - display it all
It's not too hard once you set it up... but as you see there's a lot of elements to it.
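A rough map of standing that stack up with Docker (official images, versions picked as examples; every one of these needs real configuration beyond what's shown, so treat this as a sketch rather than a working setup):

```shell
docker network create observability
docker run -d --name influxdb --network observability -p 8086:8086 influxdb:2.7
docker run -d --name loki     --network observability -p 3100:3100 grafana/loki:2.9.2
docker run -d --name tempo    --network observability -p 3200:3200 \
  -v "$PWD/tempo.yaml:/etc/tempo.yaml" grafana/tempo:2.3.1 -config.file=/etc/tempo.yaml
docker run -d --name telegraf --network observability \
  -v "$PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro" telegraf:1.28
docker run -d --name grafana  --network observability -p 3000:3000 grafana/grafana:10.2.2
# then point Grafana datasources at influxdb:8086, loki:3100, and tempo:3200
```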
There are a lot of elements indeed, https://docs.teamhephy.com/understanding-workflow/components...
This is classic literature: https://docs.teamhephy.com/understanding-workflow/architectu...
It is much easier to just use it than to try and re-invent your own: https://docs.teamhephy.com/quickstart/
I cannot guarantee the cloud provider bits all work, I don't use them so I don't maintain them, but if you have Kubernetes with persistent disk provided, which any managed Kubernetes can most certainly do, it's easy enough to hook up generic Minio (which has the added benefit of working just as well on bare metal hosts), and I'm certain that does work as I use it.
Now listen buddy... That CV is not gonna pad itself, ok?
Seriously though, if you have time and curiosity, I'd recommend putting together a monitoring stack by hand once. There's a lot to learn from that experience.
I'm sure I'm not configuring it well.
There are too many leaps I couldn't follow directly in how to make the config, and there's something about how arrays merge in YAML that makes it harder to write a config like this in Helm... so there is a lot of what I've written here that shouldn't need to be in my own config, because they are the standard default alerts, but I wound up having to include them to make my one extra alert join the rest (and this is my excuse for why it is several hundred lines long, when in reality it's mostly stock.)
This config goes so far as making alerts to Slack with Alertmanager that work out of the box!
The single point of failure though, is something people worry too much about. If you are serious about what you host, you do need to have another host ready to go.
The majority of workloads can be down for a few hours without customers worrying anyway.
We actually use it for hosting quite large workloads, but we do have a few bare metal hosts we manually provision different services across. At a moment's notice we can quickly push the app to a different server, and we'd be down for a few tens of minutes at most, minutes normally.
I just don't get the expense of Heroku just because your (in reality) non-essential workload can't be down for a few seconds.
There are both Kubernetes and Nomad plugins. I'm also building schedulers for AWS Lambda, Compose (which I guess also sorta provides Azure ACI and AWS ECS support), and Swarm.
You can also scale individual process types (so anything in your Procfile) via our `ps` plugin. All plugins should support this, so you don't need to learn a new set of commands for alternative schedulers.
-  Kubernetes Scheduler Plugin: https://github.com/dokku/dokku-scheduler-kubernetes
-  Nomad Scheduler Plugin: https://github.com/dokku/dokku-scheduler-nomad
-  Process Management: https://dokku.com/docs/processes/process-management/
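For example, with a Procfile defining `web` and `worker` process types (app name is a placeholder):

```shell
# scale process types independently; the commands are the same across schedulers
dokku ps:scale myapp web=2 worker=1
dokku ps:report myapp        # inspect current scale and status
```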
From a dev perspective, it’s been awesome.
I have zero monitoring on it, so I have no idea if users love it. But analytics look good.
Either way. Impressed. And appreciate all the efforts behind the project.
Rails, pg, sidekiq, etc.
Docker updates are usually the stressful ones, not Dokku (I try to call out all deprecations/changes in our Migration guides). You can turn on Docker live-restore to make Docker upgrades safer.
I'd imagine the reason you have crashes is due to not having enough system resources (usually memory) to run your workloads. There isn't too much Dokku can do about that, though I would have loved to see a ticket or something on our chat venues to help debug.
Dokku also supports kubernetes if you still want the Dokku experience, and you can deploy images built in CI.
Glad to hear you have a setup that works for you though :)
-  Upgrading Dokku through different versions: https://dokku.com/docs/getting-started/upgrading/#migration-...
-  Docker live-restore: https://docs.docker.com/config/containers/live-restore/
-  Kubernetes scheduler plugin: https://github.com/dokku/dokku-scheduler-kubernetes
-  Build from a docker image: https://dokku.com/docs/deployment/methods/git/#initializing-...
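Enabling live-restore is a one-line daemon config change, and a reload applies it without restarting running containers (merge the key into any existing daemon.json rather than overwriting it as this sketch does):

```shell
# /etc/docker/daemon.json
echo '{ "live-restore": true }' | sudo tee /etc/docker/daemon.json
sudo systemctl reload docker   # sends SIGHUP; running containers stay up
```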
Hetzner offers 48 dedicated vcores/192 GB RAM in a single machine for less than 500€/month. I can totally see many businesses working for years within these boundaries. Also, not every company needs Google-grade high availability.
I was very disappointed that it did not have tcp/udp proxy support.
Fortunately the plug-in system allowed us to extend this functionality in a day.
Dokku is very nice for this use-case. No need to have a complicated kubernetes setup.
I’ll never understand why one stack's config syntax and semantics is seen as complicated while another stack of gigs and gigs of state is not.
A Linux host that pulls down a web server via its package manager is gigs of special state too. Where’s the simplicity? It’s no less complicated, maybe more familiar.
There’s way too much romantic and poetic day dreaming about what it is we do in IT.
You can certainly roll your own setup and get the minimal functionality _you_ need. That's the general "build vs buy" question.
Dokku itself comes with a ton of functionality and is actively developed separate from your app. It's been built for thoughtful extensibility (one person in this story mentioned adding TCP/UDP support in a day, as an example), so if it doesn't do what you want, you can add it. We also provide a "curated" experience that handles things based on years of working on other deployment tools (and not just simply cloning the Heroku CLI and experience, though it has had a very deep influence). If our workflow and experience are what you want, it's there today.
With something you build/maintain, the system is often simpler. Maybe one or two tools with a couple hundred lines of code, all fairly grokable. If you need to extend it, you know exactly where and can do so. In fact, that's how a lot of Dokku alternatives come out - someone didn't like our codebase or setup and decided to write something else. Building your own deployment stack is often how most companies start - roll your own against tooling that is close by or familiar.
The unfortunate thing is you now own that experience, so if you need functionality, you need to build it, which can be distracting from your actual business goals (unless you're a deployment tooling company, in which case this totally makes sense!). Would you rather spend 10 hours working on a feature for your customers or hacking together a version of review apps? If this is for your own home cluster, maybe that makes sense, but even then, I'd assume most people _should_ just want to deploy their app and move on vs work on that tooling.
All that said, if Minikube + Ingress and a few yaml templates work for you, go to town. Feel free to try us out if you want a more curated experience :)
-  Piku is an example where they wanted ARM support and we didn't have it yet, while the Caprover developer wanted something that works with Swarm.
About 5-6 years ago I found out that "Dokku" means something like "crappy" in Telugu (I think). Lots of people seem to refer to it when talking about politics, theaters, and methods of transportation on twitter.
It's also the name of some famous Korean actor's Dog (I think they got the name from a Naruto character named Dokku). The dog is fairly new, and lots of folks stanning particular actors or actresses on twitter use the word `dokku` in their username.
Dokku Umarov was also a Chechen mujahid in both Chechen wars. References to him pop up every so often (he died in 2013, _just_ after the Dokku project started).
IIRC the name "Dokku" is a portmanteau of `Docker` and `Heroku`. How you pronounce it is up to you, though it seems I pronounce it very differently from others (I use "dough" instead of "dah").
Finally, "Dooku" is not how it's spelled, though lots of people seem to refer to Count Dokku of Star Wars fame :D
-  Cloning existing apps: https://dokku.com/docs/deployment/application-management/#cloning-an-existing-app
-  Github actions review app support: https://github.com/dokku/github-action/blob/master/example-workflows/review-app.yml
And I bet you the same goes for performance, mine can share data between cores atomically.