I built Staying – a tool that instantly turns your code into interactive animations with no setup required. Just write or paste your code and hit "Visualize". No installs, no accounts, no configuration. *Supports*: Python, JavaScript & experimental C++
I took a peek at the terms of service[1] when considering signing up for an account. It seemed unusual to have them in a separate language from the rest of the site.
Presuming the translation was correct I would "agree to comply with Chinese laws ... [and] grant the company a non-exclusive, free, perpetual license for global use (including modification, display, and derivative development)."
The referenced terms (including clauses like perpetual licensing) were placeholder text I generated with ChatGPT during development. Currently, my website doesn’t store any user code—even if you’ve registered—except for an unused article feature.
Since the service is bilingual, I recognize how misleading these provisional terms look, and I will remove the page immediately.
This is really great!
Is there a way to run it locally? Maybe with docker?
—
If there is any way to make a small donation, buy you a coffee, I would.
Hey, some feedback on those 1/24 holes that fill as you scroll: they don't line up with the steps of the scroll wheel. A single step seems to fill about 1.2 holes, which leaves the animation in your demo paused between your checkpoints. Checked both Chromium and Firefox, and checked my mouse events too; a single wheel step matches a single event.
It's not great on a smooth scrolling input device either. You need to carefully scroll to not leave it halfway through a transition. And there's no next/previous buttons available to step through properly. Best you can do is click the little bubbles in order.
I also noticed this. Maybe it wasn't intended to "snap" to animation checkpoints, though. Scrolling with touch scrolling or mouse middle-click-and-drag is pretty smooth.
Thanks for the feedback! I’ve logged this scrolling misalignment issue and will fix it in the next release.
This is a great learning tool!
One small improvement: show the return values, not just the final result, and somehow indicate when a function has not yet returned.
Maybe it's just because I'm used to debuggers, but the vertical arrangement of variables and their values seems weird.
Thanks for the suggestion! I'm continuously working to improve the visualization, and your point is very valuable.
I might have lost all my iOS Safari users because I used requestIdleCallback without testing it on iOS...
Quite neat. Reminds me of this one that I used to use: https://pythontutor.com/visualize.html#mode=edit
Glad you like it! pythontutor was also one of the inspirations for me to create this product.
Tried this out today and it’s surprisingly smooth for a browser-based tool. The zero-setup part really helps. Would be great if the visualizer could also show how values flow across function calls over time, especially for recursive logic. That’s where beginners often get stuck.
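To make the suggestion concrete, here is the kind of plain-Python recursive snippet (chosen arbitrarily for illustration, not taken from the tool's examples) where beginners lose track of which call returned what; watching those return values flow back up the call tree over time is exactly what's being asked for:

```python
def fib(n):
    # Each recursive call produces its own return value, and the final
    # answer is assembled from many small intermediate returns flowing
    # back up the call tree.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(5))  # 5, built from fib(4) + fib(3), and so on down to the base cases
```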
Thank you for your feedback. Your suggestion makes a lot of sense, and improving data visualization has always been an area I’m continuously working on.
Very cool!
Maybe LSP integration for greater compatibility with languages would make this even more cool and useful!
Imagine visualizing a whole codebase with a tool like this.
Thanks for building it! It can really help my 10 yo, who is just learning to code. How does it handle potentially infinite loops like the one below? Currently I get a parsing error after it takes a while.
```python
while exit != "yes":
    print("*")
    exit = input("Exit?: ")
```

I'm really sorry, but the input() function isn't supported at the moment. Also, just a heads-up, print() won't have any visible effect right now—maybe I should think about how to better visualize print output.
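For what it's worth, tools in this space usually guard against non-terminating programs with a hard step limit in the tracer rather than at parse time. A minimal sketch of that idea using sys.settrace, purely illustrative and not necessarily how Staying is implemented (MAX_STEPS is an arbitrary cap):

```python
import sys

MAX_STEPS = 10_000  # arbitrary cap chosen for illustration

class StepLimitExceeded(Exception):
    pass

def make_tracer():
    steps = 0
    def tracer(frame, event, arg):
        nonlocal steps
        if event == "line":
            steps += 1
            if steps > MAX_STEPS:
                # Raising here aborts the traced program with a clear error
                # instead of letting an infinite loop spin forever.
                raise StepLimitExceeded(f"stopped after {MAX_STEPS} steps")
        return tracer
    return tracer

user_code = "i = 0\nwhile True:\n    i += 1\n"  # would never terminate on its own

sys.settrace(make_tracer())
try:
    exec(user_code)
except StepLimitExceeded as err:
    print(err)
finally:
    sys.settrace(None)
```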
Interesting.
In the Configuration box, for the Core Language selection, when I switched to C/C++, the example code didn't automatically update to a C/C++ example. At least not for me in Firefox.
You're absolutely right, thanks for pointing that out! I'll add auto-switching of the example code for all languages. Appreciate your feedback!
Very exciting. I'm using math in particular as an entry point for teaching programming to offspring.
This could be truly helpful if I could include it in my (large) existing codebase to help spot performance bottlenecks. That's not something I can simply paste into a self-contained snippet, though. Do other HN readers know of static analysis tools that would be great for this?
Hey, what do you mean by "performance bottlenecks"? Do you mean finding CPU/memory hotspots in your apps? If so, APM tools like New Relic or runtime scanners like AppMap sound like a better fit than static code analysis.
However, if you want to visualize the codebase structure and reason about how coupling and design choices impact performance, static analysis becomes your friend.
If you're on .NET, you might consider joining our early testing campaign at Noesis.vision (https://noesis.vision). There are also a bunch of other tools—some more AI-based (like GitDiagram, DeepWiki), and others less or not AI-based and more language-specific (often IDE plugins). Let me know if you'd like to chat more.
A lot of our code was written by domain experts who aren't trained in algorithms/data structures. We have New Relic and other performance-assessment tools to see where we have long-running queries etc. But looking at realized performance will only show you the biggest problem areas, like long-running queries, and will miss the "death by 1000 papercuts" of functions that work, but in an unnecessarily expensive way. It would be nice if there were a tool that looked more holistically at whether certain functions are designed well, both in terms of the space-time complexity of the algorithms and in terms of the overall design of certain features. For example, a feature that sequentially changes a lot of things in a database might not raise any red flags in a profiler, but could be adding a lot of unnecessary time versus an approach that pulls the data into memory, does all the operations, then bulk-reinserts it (there's a toy sketch of this pattern after this comment). Or one that could be refactored to run in parallel rather than sequentially.
This is what is really hard to figure out, because you need to know 1) what business logic you actually need (and what tradeoffs would be acceptable given the product), 2) algorithm design, 3) how web apps scale things horizontally, 4) which things get performed on the CPU / in memory versus in a database, and more.
Instead of hoping for a tool that can do all of that at once, it would be nice if a tool could at least visualize (2) within an existing project, to help a human who can keep those things in their head spot problem areas in code design / system architecture that wouldn't necessarily be revealed by just looking at logging/APM tools.
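To illustrate the "sequential database writes vs. pull into memory and bulk-write" point above, here is a toy sketch using the standard-library sqlite3 module; the table, row count, and in-memory database are made up for illustration, and a real networked database pays far more per statement and per commit than this toy does:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO items (id, price) VALUES (?, ?)",
                 [(i, 10.0) for i in range(10_000)])
conn.commit()

# Row-by-row: every update is its own statement and its own commit.
# No single call looks slow in a profiler; the cost is the sheer count.
start = time.perf_counter()
for i in range(10_000):
    conn.execute("UPDATE items SET price = price * 1.1 WHERE id = ?", (i,))
    conn.commit()
print(f"sequential updates: {time.perf_counter() - start:.2f}s")

# Pull the data into memory, compute, then write back in one batch.
start = time.perf_counter()
rows = conn.execute("SELECT id, price FROM items").fetchall()
updates = [(price * 1.1, item_id) for item_id, price in rows]
conn.executemany("UPDATE items SET price = ? WHERE id = ?", updates)
conn.commit()
print(f"bulk update: {time.perf_counter() - start:.2f}s")
```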
Oh, that's much clearer now. Hints of such refactorings are certainly within reach of today's AI tools (if you agree to send your code to an LLM). Have you tried asking Cursor/Windsurf this question with a prompt similar to what you've just written above?
BTW, it might be an interesting feature for Noesis if it were done as part of regular scans. Thanks for the tip ;)
Yes, I've tried Cursor. Currently it gives 1) high-level suggestions if I ask about architecture, which may be valid but don't solve the issue of refactoring a large existing codebase to make architectural changes, or 2) some specific improvements on very simple functions, but it majorly falls short on 3) actually implementing improvements, because it doesn't have the context of the product and what "makes sense" as tradeoffs and choices. There are a lot of times where, for us, "correctness" is a state of data calculations rather than code validity, where unit tests / integration tests don't exist and aren't trivial to generate. It is counterproductive if we make something run faster but return the wrong results. Or a team member could look at a task/function and reason "actually this feature that does X should be doing Y", but that isn't something the AI can reason about in practice. In those cases, it would be ideal to change the function without relying on tests, because you would actually want the behavior to change. Small example: a feature is not performant, and rather than just making that feature perform better, the better solution would be to switch to a different library that we already added elsewhere in the codebase for accomplishing that work.
Also, while Cursor is now able to scan terminal server logs to see errors, it doesn't come "out of the box" hooked up to app performance profiling tools -- even just running locally. There are probably some MCP servers or something to do that, but I haven't set that up. Really you would want the IDE agent to have a feedback loop along the lines of "optimize {speed, resource usage} subject to the constraint of {unit/integration tests}" and let it run asynchronously or overnight, etc. (Of course, there are tons of times that LLMs will work themselves into a dead-end loop, and it would be bad to indefinitely generate LLM API calls on a dead end overnight.)
Perhaps the parent means "identify the commands/procedures that would cause workload '5', check whether many of them exist, and rank them accordingly"? So a procedure that 'prints a line on the log' 'costs' 1, but a thousand of them would 'cost' 1000, or something similar?
Maybe I'll try visualizing parts of large-scale projects in the future, though it might be a bit challenging.
Totally off topic, but this is the first time I've seen (or noticed) an ICP license link in a footer. I was curious, so I looked it up (https://en.wikipedia.org/wiki/ICP_license) and it's been in effect since 2000. I guess I'm one of the lucky 10,000 today.
Stick your site behind Cloudflare and you'll get geographically distributed caching for free. It's currently very slow, as if you were serving it from your basement.
Thanks for the suggestion, I'll give it a try
I just got an email that says: Cloudflare's network is now boosting staying.fun’s speed and security