I don't get it. How is this different from starting new threads?
In the article's example, it doesn't look like anything is returned from each parallel function call. The main loop just invokes the function for each `i`, and they print when done. No shared memory, no scheduling or ordering... what's the advantage here?
In the code examples, shared memory and scheduling don't seem to be a thing either. It's more like functional or chained programming: a function calls the next function and passes its output along. Each loop runs independently and asynchronously from the others. Reminds me of the ECS model in gamedev.
That's great and all, but it doesn't solve or simplify the intricacies of parallel programming so much as it circumvents them, right?
Is the advantage that it's low-level and small?
I think the same "concept" can be done in Bash: ```for i in $(seq 1 100); do fizzbuzz "$i" & done; wait```
What is the equivalent of Prolog facts in your Bash example? Are they as easy to add and retract as in Prolog?
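To be concrete, this is what I mean by adding and retracting facts — a minimal SWI-Prolog session, with `likes/2` as a made-up predicate:

```prolog
?- dynamic(likes/2).                 % allow likes/2 facts to change at runtime
true.

?- assertz(likes(alice, prolog)).    % add a fact
true.

?- likes(alice, X).                  % query it
X = prolog.

?- retract(likes(alice, prolog)).    % and take it back out
true.
```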
Are facts used in the Fleng fizzbuzz example?
You're probably right. I'm sure this has more features coming from logic programming, and I'm just too hung up on the "Concurrent" part of the title.
Sure, there's one: ```loop2(_, 101).```
If it weren't a toy problem but rather a larger set of rules describing a more substantial algorithm, it would matter more whether you could pour in more facts as data enters the system.
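Something like this hypothetical Prolog sketch, where facts keep arriving while the rules stay fixed (`reading/2`, `ingest/2`, and `over_limit/1` are names I made up):

```prolog
:- dynamic reading/2.            % reading(Sensor, Value) facts arrive at runtime

% Pour a new fact into the database as data enters the system.
ingest(Sensor, Value) :-
    assertz(reading(Sensor, Value)).

% Rules written against the fact base keep working as it grows.
over_limit(Sensor) :-
    reading(Sensor, Value),
    Value > 100.

% Usage:
% ?- ingest(boiler, 140).
% ?- over_limit(S).
% S = boiler.
```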
I get your point. I personally do a lot of crude concurrency with POSIX fork() and shell spawns from within suitable programming languages, e.g. Picolisp, Elixir.
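Staying with Prolog since that's the thread's topic, the same crude fork-per-task pattern can be sketched in SWI-Prolog (library(unix) provides fork/1 and wait/2; `par_each/2` and `spawn/2` are made-up names, and this assumes a single-threaded process):

```prolog
:- use_module(library(unix)).        % SWI-Prolog POSIX bindings: fork/1, wait/2

% Run Goal on each item in its own forked child process, then reap them all.
par_each(Goal, Items) :-
    forall(member(X, Items), spawn(Goal, X)),
    forall(member(_, Items), wait(_Pid, _Status)).   % one wait per child

spawn(Goal, X) :-
    fork(Pid),
    (   Pid == child
    ->  call(Goal, X),               % the child does the work...
        halt                         % ...and exits instead of returning
    ;   true                         % the parent keeps spawning
    ).

% Usage (output order is nondeterministic, like the Bash one-liner above):
% ?- par_each(writeln, [1, 2, 3]).
```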