danjl 4 days ago

I love that the only alternative is a "pile of shell scripts". Nobody has posted a legitimate alternative to the complexity of K8S or the simplicity of docker compose. Certainly feels like there's a gap in the market for an opinionated deployment solution that works locally and on the cloud, with less functionality than K8S and a bit more complexity than docker compose.

drewbailey 4 days ago

K8s just drowns out all other options. Hashicorp Nomad is great, https://www.nomadproject.io/

sunshine-o 3 days ago

I am puzzled by the fact that no successful forks of Nomad and Consul have emerged since the licence change and acquisition of Hashicorp.

If you need a quick scheduler, orchestrator and services control plane without fully embracing containers, you might soon be out of luck.

NomadConfig 3 days ago

Nomad was amazing at every step of my experiments on it, except one. Simply including a file from the Nomad control to the Nomad host is... impossible? I saw indications of how to tell the host to get it from a file host, and I saw people complaining that they had to do it through the file host, with the response being security (I have thoughts about this and so did the complainants).

I was rather baffled, to an extent. I was just trying to push a configuration file that would be the primary difference between a couple of otherwise samey apps.

mdaniel 3 days ago

https://github.com/hashicorp/nomad/blob/v1.6.0/website/conte... seems to have existed since before the license rug-pull. However, I'm open to there being some miscommunication, because https://developer.hashicorp.com/nomad/docs/glossary doesn't mention the word "control", and the word "host" could mean any number of things in this context.

dhash 2 days ago

+1 to miscommunication, but host_volume is indeed what I've used to allow host files into the chroot. Not all drivers support it, and there are some Nomad config implications, but it otherwise works great for storing DBs or configurations.
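
The shape of it is roughly this, with made-up names and paths: the client agent declares the host path, and the job mounts it into the task.

  # client agent config (names/paths made up)
  client {
    host_volume "app-config" {
      path      = "/opt/app-config"
      read_only = true
    }
  }

  # job file: mount it into the task
  group "app" {
    volume "config" {
      type      = "host"
      source    = "app-config"
      read_only = true
    }

    task "web" {
      driver = "docker"

      volume_mount {
        volume      = "config"
        destination = "/etc/app"
        read_only   = true
      }
    }
  }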

marvinblum 4 days ago

Thumbs up for Nomad. We've been running it for about 3 years in prod now and it hasn't failed us a single time.

dijit 4 days ago

I coined a term for this because I see it so often.

“People will always defend complexity, stating that the only alternative is shell scripts”.

I saw people defending Docker this way, Ansible this way, and most recently systemd this way.

Now we’re on to kubernetes.

msm_ 4 days ago

>and most recently systemd this way.

To be fair, most people attacking systemd say they want to return to shell scripts.

dijit 3 days ago

No, there are alternatives like runit and SMF that do not use shell scripts.

It's conveniently ignored by systemd supporters, and the conversation always revolves around the fact that we used to use shell scripts, despite there being sensible inits that predate systemd and did not use shell languages.

znpy 3 days ago

Hey, systemd supporter here and yes, I do ignore runit and SMF.

systemd is great and has essentially solved the system management problem once and for all. Its license is open enough not to worry about it.

SMF is proprietary Oracle stuff.

Runit... I tried it a few years ago on Void Linux (I think?) and was largely unimpressed.

E39M5S62 3 days ago

Runit absolutely uses shell scripts. All services are started via a shell script that exec's the final process with the right environment / arguments. If you use runit as your system init, the early stages are also shell scripts.
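
A typical run script is only a few lines (service name and paths here are made up):

  #!/bin/sh
  # /etc/sv/myapp/run -- runit just supervises whatever this script exec's
  exec 2>&1
  exec chpst -u myapp /usr/local/bin/myapp --config /etc/myapp.conf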

d--b 4 days ago

At least I never saw anyone arguing that the only alternative to git was shell scripts.

Wait. Wouldn't that be a good idea?

weikju 4 days ago

Kamal was also built with that purpose in mind.

https://kamal-deploy.org/
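
The whole config is one YAML file plus a couple of commands; roughly like this, with placeholder names (check the docs for the current keys):

  # config/deploy.yml (placeholder names)
  service: myapp
  image: myuser/myapp

  servers:
    web:
      - 192.168.0.10

  registry:
    username: myuser
    password:
      - KAMAL_REGISTRY_PASSWORD

  # then:
  #   kamal setup    # first-time provisioning of the hosts
  #   kamal deploy   # build, push and roll out the new container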

danjl 3 days ago

This looks cool and +1 for the 37Signals and Basecamp folks. I need to verify that I'll be able to spin up GPU enabled containers, but I can't imagine why that wouldn't work...

nikeee 4 days ago

Docker Swarm is exactly what tried to fill that niche. It's basically an extension to Docker Compose that adds clustering support and overlay networks.
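
A stack file is just a Compose file with a deploy section; something like this (hypothetical service), rolled out with docker stack deploy:

  # stack.yml -- ordinary Compose syntax plus a "deploy" section
  version: "3.8"
  services:
    web:
      image: nginx:alpine
      ports:
        - "80:80"
      networks:
        - app-net
      deploy:
        replicas: 3
        update_config:
          parallelism: 1
          delay: 10s

  networks:
    app-net:
      driver: overlay

  # docker swarm init
  # docker stack deploy -c stack.yml mystack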

Taikonerd 3 days ago

Docker Swarm is a good idea that sorely needs a revival. There are lots of places that need something more structured than a homemade deploy.sh, but less than... K8s.

papichulo2023 3 days ago

Completely anecdotal, but I see more and more people using it in /r/selfhosted.

czhu12 4 days ago

This is basically exactly what we needed at the startup I worked at, with the added need of being able to host open source projects (Airbyte, Metabase) with a reasonable level of confidence.

We ended up migrating from Heroku to Kubernetes. I tried to take some of the learnings to build https://github.com/czhu12/canine

It basically wraps Kubernetes and tries to hide as much of Kubernetes's complexity as possible, only exposing the good parts, which will be enough for 95% of web application workloads.

belthesar 2 days ago

I've personally been investing heavily in [Incus](https://linuxcontainers.org/incus/), which is the Linux Containers project fork and continuation of LXD post Canonical takeover of the LXD codebase. The mainline branch has been seeing some rapid growth, with the ability to deploy OCI Application Containers in addition to the System containers (think Xen paravirtualized systems if you know about those) and VMs, complete with clustering and SDN. There's work by others in the community to create [incus-compose](https://github.com/bketelsen/incus-compose), a way to use Compose spec manifests to define application stacks. I'm personally working on middleware to expose instance options under the user keyspace to a Redis API compliant KV store for use with Traefik as an ingress controller.

There's too much to what Incus does to cover it all in a comment, but for me, Incus really feels like the right level of "old school" infrastructure platform tooling with "new school" cloud tech to deploy and manage application stacks, the odd Windows VM that accounting/HR/whoever needs to do that thing that can't be done anywhere else, and a great deal more.
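
To give a quick flavour (instance names are made up, and the OCI remote syntax is from memory, so double-check it against the docs):

  # system container: a full distro userland you treat like a lightweight VM
  incus launch images:debian/12 web01
  incus exec web01 -- bash

  # OCI application container: newer Incus can add a Docker registry as a
  # remote (syntax from memory -- verify against the Incus docs)
  incus remote add docker https://docker.io --protocol=oci
  incus launch docker:nginx nginx01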

mdaniel 2 days ago

For others interested in such things, colima also supports it: https://github.com/abiosoft/colima/tree/v0.8.0#incus

kikimora 4 days ago

While not opinionated, you can go with cloud-specific tools (e.g. ECS in AWS).

danjl 4 days ago

Sure, but those don't support local deployment, at least not in any sort of easy way.

acdha 3 days ago

That very much depends on what you're doing. ECS works great if your developers can start a couple of containers, but if they need a dense thicket of microservices and cloud infrastructure, you're probably going to need remote development environments once you outgrow what you can do with localstack. That's not really something Kubernetes fixes, though, and it really means you want to reconsider your architecture.

iamsanteri 4 days ago

Docker Swarm mode? I know it’s not as well maintained, but I think it’s exactly what you talk about here (forget K3s, etc). I believe smaller companies run it still and it’s perfect for personal projects. I myself run mostly docker compose + shell scripts though because I don’t really need zero-downtime deployments or redundancy/fault tolerance.

glonq 3 days ago

Somebody gave me the advice that we shouldn't start our new project on k8s, but should instead adopt it only after its value became apparent.

So we started by using docker swarm mode for our dev env, and made it all the way to production using docker swarm. Still using it happily.

lithos 1 day ago

I remember using shell scripts to remove some insane Node/JS-brain-thonk hints; it was easier than trying to reverse engineer how the project was supposed to be "compiled" to properly use those hints.

jedberg 4 days ago

I hate to shill my own company, but I took the job because I believe in it.

You should check out DBOS and see if it meets your middle ground requirements.

Works locally and in the cloud, has all the things you’d need to build a reliable and stateful application.

[0] https://dbos.dev

stackskipton 4 days ago

Looking at your page, it looks like Lambdas/Functions but on your system, not Amazon/Microsoft/Google.

Every company I've ever had try to do this has ended in crying after some part of the system doesn't fit neatly into the serverless box and it becomes painful to extract from your system into "run FastAPI in containers."

jedberg 4 days ago

We run on bare metal in AWS, so you get access to all your other AWS services. We can also run on bare metal in whatever cloud you want.

stackskipton 4 days ago

Sure, but I'm still wrapped around your library, no? So if your "process Kafka events" decorator in Python doesn't quite do what I need, I'm forced to grab the Kafka library, write my code, and then learn to build my own container, since I assume you were handling the build part. Finally, I figure out which of the 17 ways to run containers on AWS (https://www.lastweekinaws.com/blog/the-17-ways-to-run-contai...) is proper for me, and away I go?

That's my SRE recommendation: "These serverless platforms are a trap; it's quick to get going, but you can quickly get locked into a bad place."

jedberg 4 days ago

No, not at all. We run standard Python, so we can build with any Kafka library. Our decorator is just a subclass of the default decorator to add some Kafka stuff, but you can use the generic decorator around whatever Kafka library you want. We can build and run any arbitrary Python.

But yes, if you find there is something you can't do, you would have to build a container for it or deploy it to an instance however you want. Although I'd say that most likely we'd work with you to make whatever it is you want to do possible.

I'd also consider that an advantage. You aren't locked into the platform, you can expand it to do whatever you want. The whole point of serverless is to make most things easy, not all things. If you can get your POC working without doing anything, isn't that a great advantage to your business?

Let's be real, if you start with containers, it will be a lot harder to get started and then still hard to add whatever functionality you want. Containers don't really make anything easier; they just make things more consistent.

danjl 4 days ago

Nice, but I like my servers and find serverless difficult to debug.

jedberg 4 days ago

That's the beauty of this system. You build it all locally, test it locally, debug it locally. Only then do you deploy to the cloud. And since you can build the whole thing with one file, it's really easy to reason about.

And if somehow you get a bug in production, you have the time travel debugger to replay exactly what the state of the cloud was at the time.

danjl 4 days ago

Great to hear you've improved serverless debugging. What if my endpoint wants to run ffmpeg and extract frames from video. How does that work on serverless?

jedberg 4 days ago

That particular use case requires some pretty heavy binaries and isn't really suited to serverless. However, you could still use DBOS to manage chunking the work and the workflows to make sure every frame is only processed once. Then you could call out to some of the existing serverless offerings that do exactly what you suggest (extract frames from video).

Or you could launch an EC2 instance that is running ffmpeg and takes in videos and spits out frames, and then use DBOS to manage launching and closing down those instances as well as the workflows of getting the work done.
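
As a rough sketch of that second option -- the helper bodies are stand-ins, the point is that the workflow checkpoints each step, so a crash partway through a video resumes without redoing chunks that already finished:

  from dbos import DBOS

  @DBOS.step()
  def split_into_chunks(video_url: str) -> list[str]:
      # stand-in: ask the ffmpeg box (or a frame-extraction service)
      # to register chunks of the video and return their ids
      return [f"{video_url}#chunk{i}" for i in range(10)]

  @DBOS.step()
  def extract_frames(chunk_id: str) -> str:
      # stand-in: kick off the actual frame extraction elsewhere and
      # return a pointer to the results
      return f"s3://frames/{chunk_id}"

  @DBOS.workflow()
  def process_video(video_url: str) -> list[str]:
      # each completed step is recorded, so a restart picks up where it
      # left off and every chunk is processed exactly once
      return [extract_frames(chunk) for chunk in split_into_chunks(video_url)]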

justinclift 4 days ago

Looks interesting, but this is a bit worrying:

  ... build reliable AI agents with automatic retries and no limit on how long they can
  run for.
It's pretty easy to see how that could go badly wrong. ;)

(and yeah, obviously "don't deploy that stuff" is the solution)

---

That being said, is it all OSS? I can see some stuff here that seems to be, but it mostly seems to be the client side stuff?

https://github.com/dbos-inc

jedberg 4 days ago

Maybe that is worded poorly. :) It's supposed to mean there are no timeouts -- you can wait as long as you want between retries.

> That being said, is it all OSS?

The Transact library is open source and always will be. That is what gets you the durability, statefulness, some observability, and local testing.

We also offer a hosted cloud product that adds in the reliability, scalability, more observability, and a time travel debugger.

faizshah 4 days ago

Capistrano, Ansible et al. have existed this whole time if you want to do that.

The real difference in approaches is between short lived environments that you redeploy from scratch all the time and long lived environments we nurse back to health with runbooks.

You can use Lambda, Kube, etc., or Chef, Puppet, etc., but you end up at the same crossroads.

Just starting a process and keeping it alive for a long time is easy to get started with, but eventually you have to pay the runbook tax. Instead, you could pay the Kubernetes tax or the Nomad tax at the start rather than the 12am Ansible tax later.

nicodjimenez 4 days ago

Agreed, something simpler than Nomad as well hopefully.

papichulo2023 3 days ago

PowerShell or even TypeScript are better suited for deploying stuff, but for some reason the industry sticks to bash and Python spaghetti.

sgarland 3 days ago

> some reason

Probably because except in specific niche industries, every Linux box you ever experience is extremely likely to have Bash and Python installed.

Also, because PowerShell is hideously verbose and obnoxious, and JS and its ilk belong on a frontend, not running servers.

Melatonic 3 days ago

I haven't used Nomad, but isn't that exactly what everyone always recommends?

sc68cal 4 days ago

Ansible and the podman Ansible modules
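
For example, something along these lines (host group and image are placeholders):

  # site.yml -- needs the containers.podman collection
  # (ansible-galaxy collection install containers.podman)
  - hosts: appservers
    become: true
    tasks:
      - name: Run the app as a podman container
        containers.podman.podman_container:
          name: myapp
          image: docker.io/library/nginx:alpine
          state: started
          restart_policy: always
          ports:
            - "8080:80"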