Without an explanation of why it works on languages other than C, that is a hard claim to believe! I mean, I don't think they're lying but (given it doesn't use an LLM) I am bewildered.
So the short answer is that some of the reduction passes are pretty generalizable to C-family languages, and these can be some of the most effective passes.
Some of those passes are:
* Tokenize the input according to C's rules, and randomly chuck away chunks of (I think) 1 to 13 tokens (sketched in code just after this list). Most languages vaguely similar to C have very similar tokenization rules, so these kinds of passes will have similar effects, and this tends to be effective at things like omitting needless qualifiers or attributes.
* Match balanced delimiters--(), {}, and []--and chuck away whole balanced units. For almost any language, this is a useful level at which to throw stuff away or reduce it.
* Strip comments and whitespace. Again, many languages use the /* and // comment styles that C does, so this is pretty effective.
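For a concrete feel, here's a minimal sketch of the token-chunking pass in Python. This is my own illustration, not creduce's actual implementation (its passes are written in Perl and C++), and the token regex and chunk sizes are simplified assumptions:

    import random
    import re

    # Crude C-ish tokenizer: words or single punctuation characters.
    # A deliberate simplification, nothing like creduce's real lexer.
    TOKEN_RE = re.compile(r'\w+|[^\w\s]')

    def remove_token_chunk(src: str) -> str:
        """Drop a random run of 1-13 tokens and re-join the rest."""
        tokens = TOKEN_RE.findall(src)
        if not tokens:
            return src
        start = random.randrange(len(tokens))
        del tokens[start:start + random.randint(1, 13)]
        # Rejoining with spaces is lossy (it breaks // comments, string
        # contents, etc.), but the interestingness test will simply
        # reject variants that no longer reproduce the bug.
        return ' '.join(tokens)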
There are actually relatively few passes that are incredibly C/C++-specific. In my experience, one of the biggest issues with creduce is that it is absolutely terrible at de-templatizing code as a reduction step, which ought to be a relatively easy step to automate.
I recommend skimming the PLDI paper linked by asmeurer; it has a good summary. Some of its transforms are pretty C-specific, using the Clang frontend; some are pretty generic and probably work on any Algol-descended language. It's meant to be a modular tool, so you could add transforms that understand other languages if you like.
Without looking at the paper, I would guess it works like a fuzzer that applies mutations that reduce the size of the input.
Isn't that potentially unsafe, though? Random permutations can reduce arbitrary programs to destructive programs; in particular, any program can be mutated into `rm -rf /` given enough time. Also, even the problem of finding syntactically valid programs is combinatoric; it's pretty surprising that it can move toward a local optimum in such a short time without any understanding of the function or its derivatives.
For compiled languages it should be fine, as you're only going to compile the permuted source code, not execute it.
Given a sufficiently bad compiler bug anything is possible, but given that it's trying to minimize the size of an input that gives a particular output, I don't think it's likely to explore distant branches.
> For compiled languages it should be fine, as you're only going to compile the permuted source code, not execute it.
Only if you're going for a compiler bug. If you're working on a minimal reproducible example, you need to make sure your code isn't reduced to:
int main() { return 0; }
in that case.
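As a hedged sketch of how you might guard against that: creduce accepts any executable as the interestingness test, so something like the following works. The compiler invocation and the buggy output "42" are made-up placeholders for whatever your actual repro looks like:

    #!/usr/bin/env python3
    # Interestingness test for a runtime-behavior bug: the reduced
    # program must still compile AND still print the wrong answer.
    # The 'cc' flags and the expected output "42" are hypothetical.
    import subprocess
    import sys

    build = subprocess.run(['cc', '-O2', 'test.c', '-o', 'test'],
                           capture_output=True, text=True)
    if build.returncode != 0:
        sys.exit(1)  # uninteresting: reduction broke the build
    try:
        run = subprocess.run(['./test'], capture_output=True,
                             text=True, timeout=5)
    except subprocess.TimeoutExpired:
        sys.exit(1)  # mutated programs can hang; treat as uninteresting
    # A trivial `int main() { return 0; }` won't print the buggy value,
    # so it stays uninteresting and the reduction can't collapse that far.
    sys.exit(0 if run.stdout.strip() == '42' else 1)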
It runs the compiled code in order to determine whether the bug exists, does it not?
That depends completely on the interestingness test that's provided. If you're looking for a compiler bug (as I often am for my language), then the interestingness test checks the compile logs for text like "Segmentation Fault"; there's no need to run the actual executable. You could also hoist everything into Docker if you really want to, or ship it off to another device to run.
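For instance, a compile-only test might look something like this, where `mycc` and the crash text are stand-ins for whatever your real toolchain prints, not anything creduce itself provides:

    #!/usr/bin/env python3
    # Interestingness test for a compiler crash: interesting (exit 0)
    # iff the compiler itself blows up on test.c. The compiled output
    # is never executed. 'mycc' is a placeholder compiler name.
    import subprocess
    import sys

    result = subprocess.run(['mycc', '-c', 'test.c'],
                            capture_output=True, text=True, timeout=60)
    crashed = 'Segmentation Fault' in (result.stdout + result.stderr)
    sys.exit(0 if crashed else 1)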
> Given a sufficiently bad compiler bug anything is possible, ...
I can't help but wonder about the consteval/constexpr machinery in the various C++ compilers... It'd be interesting to see how much adversarial input has been thrown at those.
(I guess Zig's comptime might also be relevant, but I'm not sure what that's allowed to do. Maybe execute arbitrary code?)
... anyway just a stray thought.
It seems pretty unlikely to mutate in a malicious direction; random chance probably wouldn't get there, and there doesn't seem to be any reason it would be guided that way.
"Enough time" does a lot of work here and warrants further analysis. With enough time it might produce the works of shakespare (if you ignore its designed to minimize), but we should probably view anything that would take more than a year as too much times.
Yes, and without understanding how it works, I'm left wondering whether it's even safe to use this way. Will creduce execute mutated versions of the input script, potentially deleting my files or eating my lunch?
It will execute whatever script you pass it (that one is left immutable) and mutate whatever other files you pass it.
I think the question is, how do you know C-reduce isn't going to mutate your test case into calling `remove("/path/to/my/important/file");`
C-reduce is meant to... reduce your files; it would not add stuff that was not there in the first place. Also, I think it's meant to only be run against the "frontend" of most languages, not full execution.
Yeah but you could have a shell script that does
rm -rf /foo/bar
and in theory it could reduce it to rm -rf /
The passes are very heuristic, so that doesn't seem like something you can rule out. (And I actually want to run it on shell scripts!)
While I've used c-reduce before, I've never done it in a way where it could be destructive. Speculating based on what I do know, though, I would do two things. First, in the interestingness test, grep for already-known harmful instructions and force them to be uninteresting (returning 1). Second, if I was still unsure of the potential harm of the program and I had to run it to determine whether it's interesting (segfault or something similar), I would hoist the binary/script into a Docker container and run it there. That way the log and result can still be checked, but its access to the actual file system is minimized as much as possible.
TL;DR: C-Reduce just gives you text to run/compile; if you're worried about things like that, sandbox as much as possible.
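A rough sketch of that first precaution, with an obviously illustrative (not exhaustive) blocklist; the container is still the safer layer:

    #!/usr/bin/env python3
    # Guard at the top of an interestingness test: refuse (exit 1 =
    # uninteresting) any reduced script containing known-dangerous
    # commands, before it is ever run. The blocklist and the file
    # name 'test.sh' are illustrative only.
    import re
    import sys

    DANGEROUS = [r'\brm\s+-rf\b', r'\bmkfs\b', r'\bdd\s+if=', r'>\s*/dev/sd']

    with open('test.sh', errors='replace') as f:
        text = f.read()
    if any(re.search(pat, text) for pat in DANGEROUS):
        sys.exit(1)  # never run it, never call it interesting
    # ... otherwise continue to the real (ideally containerized) check ...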
> I mean, I don't think they're lying but (given it doesn't use an LLM) I am bewildered.
I’m pretty sure (based on the last time I saw this) that this is just good old fashioned computer science.[1]
I really hope that HN hasn’t forgotten about good old fashioned computer science stuff already.
[1] All the algorithm and whatnot stuff, not the spooky machine learning stuff. Even things like Prolog-for-AI, although that has the slight downside of not working (for the purposes of making AI).
To be clear, my comment was meant as an acknowledgment that this is good old fashioned computer science. Without LLMs, which this predates, it is surprising to me that you'd have a lot of success reducing a program in an arbitrary language and still have something that's valid syntax!
I do get that it will reject a lot of stuff as not working (and has to, even in the target language), and I also get that Algol-like languages are all very similar, but I am still surprised that it works well enough to function on ~arbitrary languages.
These are very LLM-era properties for a program to have. The question is not "does it work for language x" but "how well does it work for language x", and the answer is not "is it one of the languages it was designed for" but instead "idunno try it out and see".
Yeah, I would have liked to see more examples and an explanation.
I glanced at the code. It seems to have multiple possible "passes" which reduce the code in various ways, and the passes here not tagged with "C"=>1 are used in the not-c mode recommended in the post.
https://github.com/csmith-project/creduce/blob/31e855e290970...
You can see the pass implementation in files on the left.
The key thing is that the transforms it makes aren’t required to always produce a program that can actually run, or even build.
The user provides a script which determines if a program is “interesting.” A program with build-time errors should be considered “not interesting” in most cases. (And if you’re hunting an incorrectly reported error, you’ll want to make sure you only consider that error to be interesting.)
Then it yoinks out parts of your program and checks if the result is still interesting. If it’s not, that attempt is discarded and it tries something else. If the reduced program is still interesting, it will try yoinking out more stuff. Repeat until you like the result.
There doesn’t need to be any understanding of the program in order for this to work. You just need something where removing some bit of the program has a chance of producing a new program that’s still interesting. That works in most programming languages.
This process can take a long time, and it’s slower when there’s a lower probability of a removal producing an interesting program. Thus heuristics are added: don’t remove random character ranges, but work at a token level. When removing a brace, find the matching brace and remove the insides. That sort of thing. Most modern languages are similar enough to C that many of these rules are helpful for them too. And even if they don’t help, this just slows the process down, but it still works.
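The brace heuristic, for example, comes down to something like this (again a simplified sketch of mine; it ignores braces inside strings and comments, which the real passes have to care about):

    import random

    def remove_brace_body(src, open_ch='{', close_ch='}'):
        # Pick a random '{', scan for its matching '}', and delete
        # everything in between, keeping the braces themselves.
        opens = [i for i, c in enumerate(src) if c == open_ch]
        if not opens:
            return src
        start = random.choice(opens)
        depth = 0
        for i in range(start, len(src)):
            if src[i] == open_ch:
                depth += 1
            elif src[i] == close_ch:
                depth -= 1
                if depth == 0:
                    return src[:start + 1] + src[i:]
        return src  # unbalanced input: leave it alone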