I hate to shill my own company, but I took the job because I believe in it.
You should check out DBOS and see if it meets your middle ground requirements.
Works locally and in the cloud, has all the things you’d need to build a reliable and stateful application.
[0] https://dbos.dev
Looking at your page, it looks like Lambdas/Functions but on your system, not Amazon/Microsoft/Google.
Every company I've seen try to do this has ended in tears after some part of the system doesn't fit neatly into the serverless box, and it becomes painful to extract it from your system into "run FastAPI in containers."
We run on bare metal in AWS, so you get access to all your other AWS services. We can also run on bare metal in whatever cloud you want.
Sure, but I'm still wrapped around your library, no? So if your "Process Kafka events" decorator in Python doesn't quite do what I need, I'm forced to grab the Kafka library, write my code, and then learn to build my own container, since I assume you were handling the build part. Finally, figure out which of the 17 ways to run containers on AWS (https://www.lastweekinaws.com/blog/the-17-ways-to-run-contai...) is right for me, and away I go?
That's the basis of my SRE recommendation: "Serverless is a trap; it's quick to get going, but you can quickly get locked into a bad place."
No, not at all. We run standard Python, so we can build with any Kafka library. Our decorator is just a subclass of the default decorator that adds some Kafka stuff, but you can use the generic decorator around whatever Kafka library you want. We can build and run arbitrary Python.
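For example, a minimal sketch along these lines (the decorator and startup names are illustrative, not the exact API, and the broker/topic/group values are placeholders):

    # Plain confluent-kafka consumer loop; only your handler is wrapped
    # for durability. Decorator names (DBOS.workflow / DBOS.step) and
    # DBOS.launch() are assumptions about the API, not an official example.
    from confluent_kafka import Consumer
    from dbos import DBOS

    DBOS()

    @DBOS.step()
    def handle_event(payload: str) -> None:
        # Your business logic goes here.
        print(f"processing {payload}")

    @DBOS.workflow()
    def process_message(payload: str) -> None:
        # Wrapping the handler in a workflow is what makes each message durable.
        handle_event(payload)

    def main() -> None:
        DBOS.launch()
        consumer = Consumer({
            "bootstrap.servers": "localhost:9092",  # assumed broker address
            "group.id": "orders-consumer",          # hypothetical group id
            "auto.offset.reset": "earliest",
        })
        consumer.subscribe(["orders"])              # hypothetical topic
        try:
            while True:
                msg = consumer.poll(1.0)
                if msg is None or msg.error():
                    continue
                process_message(msg.value().decode("utf-8"))
        finally:
            consumer.close()

    if __name__ == "__main__":
        main()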
But yes, if you find there is something you can't do, you would have to build a container for it or deploy it to an instance however you want. Although I'd say that most likely we'd work with you to make whatever it is you want to do possible.
I'd also consider that an advantage: you aren't locked into the platform, and you can extend it to do whatever you want. The whole point of serverless is to make most things easy, not all things. If you can get your POC working without standing up any infrastructure, isn't that a great advantage for your business?
Let's be real: if you start with containers, it will be a lot harder to get started, and it will still be hard to add whatever functionality you want. Containers don't really make anything easier; they just make things more consistent.
Nice, but I like my servers and find serverless difficult to debug.
That's the beauty of this system. You build it all locally, test it locally, debug it locally. Only then do you deploy to the cloud. And since you can build the whole thing with one file, it's really easy to reason about.
And if somehow you get a bug in production, you have the time travel debugger to replay exactly what the state of the cloud was at the time.
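To make the one-file point concrete, a minimal sketch (decorator and launch names are illustrative):

    # app.py -- everything in one file; run it with plain `python app.py`
    # and step through it in a local debugger before deploying.
    from dbos import DBOS

    DBOS()

    @DBOS.step()
    def compose_greeting(name: str) -> str:
        return f"Hello, {name}!"

    @DBOS.step()
    def record_greeting(greeting: str) -> None:
        print(greeting)

    @DBOS.workflow()
    def greeting_workflow(name: str) -> None:
        # If the process dies between the two steps, the workflow resumes
        # after the last completed step on restart -- locally or in the cloud.
        greeting = compose_greeting(name)
        record_greeting(greeting)

    if __name__ == "__main__":
        DBOS.launch()  # assumed startup call
        greeting_workflow("local debugger")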
Great to hear you've improved serverless debugging. What if my endpoint wants to run ffmpeg and extract frames from a video? How does that work on serverless?
That particular use case requires some pretty heavy binaries and isn't really suited to serverless. However, you could still use DBOS to chunk the work and manage the workflows so that every frame is processed only once. Then you could call out to some of the existing serverless offerings that do exactly what you suggest (extract frames from video).
Or you could launch an EC2 instance that runs ffmpeg, takes in videos, and spits out frames, and then use DBOS to manage launching and shutting down those instances as well as the workflows that get the work done.
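A rough sketch of that second option (the frame-service endpoint is hypothetical, and the decorator names are illustrative):

    # Orchestration lives in the workflow; the heavy ffmpeg work lives on
    # a worker box behind a hypothetical HTTP endpoint. DBOS.workflow /
    # DBOS.step names are assumptions about the API.
    import requests
    from dbos import DBOS

    DBOS()

    FRAME_SERVICE_URL = "http://ffmpeg-worker.internal/extract"  # hypothetical endpoint

    @DBOS.step()
    def extract_chunk(video_url: str, start_s: int, end_s: int) -> list:
        # The heavy lifting happens on the worker; this step just calls out to it.
        resp = requests.post(
            FRAME_SERVICE_URL,
            json={"video": video_url, "start": start_s, "end": end_s},
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["frames"]

    @DBOS.workflow()
    def extract_all_frames(video_url: str, duration_s: int, chunk_s: int = 60) -> list:
        # Each finished chunk is checkpointed, so a crash or redeploy never
        # reprocesses a chunk that already completed.
        frames = []
        for start in range(0, duration_s, chunk_s):
            frames.extend(extract_chunk(video_url, start, min(start + chunk_s, duration_s)))
        return frames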
Looks interesting, but this is a bit worrying:
> ... build reliable AI agents with automatic retries and no limit on how long they can run for.
It's pretty easy to see how that could go badly wrong. ;) (And yeah, obviously "don't deploy that stuff" is the solution.)
---
That being said, is it all OSS? I can see some stuff here that seems to be, but it mostly seems to be the client side stuff?
Maybe that is worded poorly. :) It's supposed to mean there are no timeouts -- you can wait as long as you want between retries.
> That being said, is it all OSS?
The Transact library is open source and always will be. That is what gets you the durability, statefulness, some observability, and local testing.
We also offer a hosted cloud product that adds in the reliability, scalability, more observability, and a time travel debugger.