TLDR: lolbench compiles ~350 benchmarks with every Rust nightly, runs them, and highlights potential performance regressions in the standard library and in the compiler’s output. Each toolchain’s run is summarized with a list of likely candidates, as seen in the image below, and we’re now starting to use these results to safeguard the performance of Rust programs. Come help!
TLDR: I’ve been playing around with snapshot, a crate for automating golden master tests in Rust. It’s experimental and unstable, but I think it’s a cool example of how easy it is to build procedural macro helpers with the newer Rust APIs.
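To make the golden-master idea concrete, here is a minimal sketch of the pattern the crate automates; this is not the snapshot crate’s API, just a hand-rolled version of the technique: record a function’s output on the first run, then fail if later runs ever diverge from the recorded master. The file name and helper names are my own.

```rust
use std::fs;
use std::path::Path;

/// Compare `actual` against a golden file at `path`; on the first run
/// (no golden file yet), record `actual` as the new golden master.
fn assert_golden(path: &str, actual: &str) {
    if Path::new(path).exists() {
        let expected = fs::read_to_string(path).expect("failed to read golden file");
        assert_eq!(expected, actual, "output diverged from golden master at {}", path);
    } else {
        fs::write(path, actual).expect("failed to write golden file");
    }
}

/// Some output-producing code under test (placeholder).
fn render_report() -> String {
    format!("total: {}\n", 2 + 2)
}

fn main() {
    // First call records the golden file; the second compares against it.
    assert_golden("report.golden", &render_report());
    assert_golden("report.golden", &render_report());
    println!("golden master test passed");
}
```

A proc-macro helper can hide the file management entirely, which is presumably what makes the newer procedural macro APIs such a good fit here.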
Blogs serve lots of purposes. In the past I’ve written here about mostly technical Rust things, but I thought it’d be worth writing a little about what I’ve been doing since last fall. I don’t expect anyone to read this, but it’ll be nice to have for myself down the road.
I published a quick little crate for long-running “streaming” parallel tasks.
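As a rough sketch of what I mean by “streaming” parallel tasks (this is a plain-std illustration of the idea, not the crate’s API): workers send each result over a channel as soon as it is ready, so the consumer can process output while work is still in flight, rather than waiting for the whole batch.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    let inputs: Vec<u64> = (1..=4).collect();

    // Split the work across threads; each streams results back immediately.
    for chunk in inputs.chunks(2) {
        let tx = tx.clone();
        let chunk = chunk.to_vec();
        thread::spawn(move || {
            for x in chunk {
                tx.send(x * x).unwrap();
            }
        });
    }
    drop(tx); // close the channel so the receiving loop terminates

    let mut results: Vec<u64> = rx.iter().collect();
    results.sort();
    assert_eq!(results, vec![1, 4, 9, 16]);
    println!("streamed results: {:?}", results);
}
```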
TLDR: I’m toying with writing a C standard library in Rust by porting musl-libc over function-by-function. The work is in progress at https://github.com/anp/rusl.
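To illustrate the function-by-function approach, here is a hedged sketch of what porting one small libc function looks like: a Rust body exported with the C ABI so C callers can link against it. I’ve named it `rusl_strlen` to avoid clashing with the system libc’s own `strlen` in this standalone example; the actual project would export the real symbol names.

```rust
use std::os::raw::c_char;

/// A Rust port of C's `strlen`, exported with the C ABI.
/// Unsafe because the caller must pass a valid NUL-terminated pointer.
#[no_mangle]
pub unsafe extern "C" fn rusl_strlen(s: *const c_char) -> usize {
    let mut len = 0;
    while *s.add(len) != 0 {
        len += 1;
    }
    len
}

fn main() {
    let s = b"hello\0";
    let n = unsafe { rusl_strlen(s.as_ptr() as *const c_char) };
    assert_eq!(n, 5);
    println!("rusl_strlen(\"hello\") = {}", n);
}
```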
TLDR: Comparing cargo bench results with a slightly more robust method eliminates a lot of the noise, but a few apparent performance regressions remain that both methods agree on. If you have the statistics expertise to set me straight, to help me take this further, or both, please get in touch.
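One common way to make benchmark comparisons more robust than cargo bench’s single mean is to compare medians and gate on a robust spread estimate; this sketch is an assumption about the kind of method meant, not the post’s actual statistics. It flags a regression only when the new median exceeds the old median by more than `k` median absolute deviations, so a single noisy outlier can’t trigger it.

```rust
/// Median of a sample (assumes a non-empty slice).
fn median(data: &mut [f64]) -> f64 {
    data.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = data.len();
    if n % 2 == 1 { data[n / 2] } else { (data[n / 2 - 1] + data[n / 2]) / 2.0 }
}

/// Median absolute deviation: a robust estimate of spread.
fn mad(data: &[f64]) -> f64 {
    let mut d = data.to_vec();
    let m = median(&mut d);
    let mut devs: Vec<f64> = data.iter().map(|x| (x - m).abs()).collect();
    median(&mut devs)
}

/// Flag a regression only when the new median is more than `k` MADs
/// above the old median; outliers that skew a plain mean are ignored.
fn is_regression(old: &[f64], new: &[f64], k: f64) -> bool {
    let mut o = old.to_vec();
    let mut n = new.to_vec();
    median(&mut n) > median(&mut o) + k * mad(old)
}

fn main() {
    let old = [10.0, 10.1, 9.9, 10.0, 30.0]; // one noisy outlier
    let regressed = [12.0, 12.1, 11.9, 12.0, 12.2];
    let stable = [10.1, 10.0, 9.9, 10.2, 10.0];
    assert!(is_regression(&old, &regressed, 3.0));
    assert!(!is_regression(&old, &stable, 3.0));
    println!("robust comparison ok");
}
```

A mean-based comparison on the same `old` sample would report ~14.0 as the baseline because of the 30.0 outlier, masking the real regression; the median-based baseline stays at 10.0.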