Rust is important, but that article isn't very good.
Rust is at the level of C/C++, without an elaborate run-time system. It solves the three basic problems that cause most failures in C/C++ programs: "How big is it?", "Who deletes it?", and "Who locks it?". It does this without throwing a garbage collector at the problem, which means it can be used for operating systems and hard real time.
The basic memory management mechanism is single-owner pointers, with language enforcement to make this work even as pointers are passed to functions. (Pointers passed to functions are "borrowed"; you can't keep a copy whose scope outlives the function return.) For more complex ownership, there are pointers with reference counts. For data shared across thread boundaries, there are atomic, locked pointers with reference counts.
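A minimal sketch of those three mechanisms, in current Rust syntax (the names and values are purely illustrative):

    use std::rc::Rc;
    use std::sync::{Arc, Mutex};
    use std::thread;

    // Borrowing: the callee gets a reference; the caller keeps ownership.
    fn print_len(s: &String) {
        println!("{} bytes", s.len());
    }

    fn main() {
        // Single owner: `buf` owns the heap allocation and frees it at scope end.
        let buf = String::from("hello");
        print_len(&buf); // borrowed only for the duration of the call
        // `buf` is still usable here; the borrow ended at the return.

        // Shared ownership within one thread: reference counting.
        let shared = Rc::new(vec![1, 2, 3]);
        let also_shared = Rc::clone(&shared);
        println!("{}", also_shared.len());

        // Shared across threads: atomically reference-counted and locked.
        let counter = Arc::new(Mutex::new(0));
        let c = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            *c.lock().unwrap() += 1;
        });
        handle.join().unwrap();
        println!("{}", *counter.lock().unwrap());
    }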
We need Rust. We're still seeing CERT advisories on buffer overflows in C/C++ programs, after almost four decades of C. The threats are getting worse, too; it's not script kiddies any more, it's governments.
You are correct, and many Rust developers and contributors are aware of that point, but the OP wants to highlight the often-neglected portion of Rust that may appeal to C/C++ users. Not every C/C++ user would move to Rust just for the safety (I did, but mileage may vary). Some people look for better abstraction, some look for performance, and some others are just looking for fun. Rust is trying to appeal to many of them, not just the paranoid C/C++ developers; I think this article is quite adequate for them.
You're not paranoid when they're out to get you. The most recent US-CERT advisory:
Alert (TA14-300A) Phishing Campaign Linked with “Dyre” Banking Malware (Adobe Reader vulnerability)
"...memory corruption vulnerabilities that could lead to code execution".
This has been going on for over thirty years. It's time for the suffering to stop.
The sad part of this whole history is that thirty years ago, if you didn't have access to a UNIX system, C didn't have any value.
Those of us on other systems already had better alternatives, but then UNIX was pushed into the enterprise and succeeded there, with the outcome being thirty years of buffer overflows.
There are a sizable number of programmers who believe a language has to be unsafe to be fast. This is because, for several decades, most compilers with subscript checking were really dumb about it. This started with Berkeley Pascal for BSD, which made a subroutine call for every subscript check and gave Pascal under UNIX a bad name. Good Pascal compilers were hoisting subscript checks out of loops in the 1970s, but not on UNIX.
Then there was the C++ approach, trying to do it in templates. That leads to dumb subscript checking, because the compiler has no idea that the "if" involved in subscript checking is a subscript check and could potentially be hoisted out of the loop and done once at loop entry. For most math code, where this really matters, you can do one check at the top of the loop. Maybe zero checks, if the upper bound is coming from some "range" or "len" construct.
At last, smart subscript checking is coming back. This is partly because languages now usually have a "do this to all that stuff" construct ("for i in foo ..." or similar) and it's a no-brainer that you don't need to check subscripts on every iteration. The Go compiler at least has that.
How smart is the Rust compiler about this? It potentially could be very smart.
This is important. Otherwise, people will want to turn subscript checks off for "performance".
It's idiomatic in Rust to use iterators rather than loops. For iterators that, for example, walk over a vector or array, there is no bounds checking. I don't know if that is implemented by explicitly opting out of the bounds checking in the implementation of the iterator, or by some other means. In either case, bounds checking is turned off when the "looping" is encapsulated in iterators, but not for arbitrary indexing (which is a good reason not to use explicit indexing when you don't have to, of course).
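A rough illustration of the difference, in current Rust (whether the optimizer actually elides the checks in the indexed version is up to LLVM, so treat this as a sketch rather than a guarantee):

    fn sum_indexed(v: &[i32]) -> i32 {
        let mut total = 0;
        for i in 0..v.len() {
            // Arbitrary indexing: each `v[i]` is conceptually bounds-checked,
            // and it is up to the optimizer to prove the check redundant.
            total += v[i];
        }
        total
    }

    fn sum_iterated(v: &[i32]) -> i32 {
        // The iterator walks the slice directly; there is no per-element
        // index to validate, so no bounds checks are needed in the first place.
        v.iter().sum()
    }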
The best test of a language is to see if you can write the compiler itself in the language (step 1) and also if you can write the runtime system in that language (step 2). IMHO, both are required for a language to be considered a good "systems language", and a worthy competitor of e.g. C++.
I know that the Rust compiler is written in Rust (step 1), but I'm not sure about the runtime.
Rust is yes to both, although 'the runtime' is disappearing, replaced by just using the system APIs directly (that is, Rust will have no runtime beyond what C/C++ has). The non-Rust code in the main distribution is:
- LLVM for the compiler
- hoedown (markdown parser/renderer) for the documentation generator
- miniz for compressing the metadata stored in each library
- a few other tiny wrappers[1] that haven't been translated into Rust for any reason other than inertia/low priority (e.g. there would be no particular build-system benefit to Rustifying them, because C/C++ compilers are needed for the other things anyway)
Whoa now, let's not go lumping C and C++ together! C++ has quite a bit heavier runtime to manage stuff like exceptions and RTTI! (though, to be honest, I don't know what a C++ program's runtime requirements are if it's compiled with -fno-exceptions, -fno-rtti and the like)
Generally speaking, the heavy part of both exceptions and RTTI is in codegen (i.e. the amount of code generated by the compiler), not in the 'runtime'. Exception jump tables and typeinfo objects are calculated as part of the compile and just referred to at run time.
If you disable RTTI and exceptions, the only piece that needs runtime support is global constructors.
I've done it in the past with linker tricks to generate an ELF section with the list of ctor addresses. The asm that calls main (which if you're rolling your own runtime, you probably wrote as well) just calls them all right before it calls main().
I define a language without a runtime as one that can run on bare metal without restricting yourself to a subset of the language. To use C on bare metal, the only thing you can't use is the standard library, which is not a part of the language itself (it's not in the grammar; you don't need to make sure to avoid certain keywords). More simply, any program written without an import needs to work without an operating system under it.
C++ requires runtime support because if you remove the underlying system, it becomes stunted. Keywords like 'new' and 'delete' stop working entirely, and so a program with no imports is not guaranteed to work. Any language with a GC falls under this, obviously. Rust, if I remember right, requires restricting yourself in order to do things like kernel writing.
That's not what I mean. I mean that any C you write will run with no problems as long as it doesn't depend on an outside library. It's not possible to do anything that needs interrupts, IO, or memory without an external library, which if you're that low you're gonna be writing yourself either way. Accessing individual registers is also a special behavior that doesn't really fall under using C, and regardless that's totally possible without a runtime (the __asm__ keyword, that's all compiler driven).
If you're going to get so specific as to call the x86 processor a runtime, then there's no point in arguing.
> It's not possible to do anything that needs interrupts, IO, or memory without an external library, which if you're that low you're gonna be writing yourself either way.
That external library is called a runtime in compiler design classes.
> Accessing individual registers is also a special behavior that doesn't really fall under using C, and regardless that's totally possible without a runtime (the __asm__ keyword, that's all compiler driven).
The __asm__ keyword is not part of ANSI C, it is a language extension.
Not all C compilers offer support for inline assembly and in fact, a few commercial ones do not.
> If you're going to get so specific as to call the x86 processor a runtime, then there's no point in arguing.
That sounds kind of like what I mean. I consider it having a runtime if anything that can be used without importing the standard library won't work out of the box when you boot a machine to it.
Well, the 'standard library' is a bit of a flexible concept in Rust. We have 'libcore' and 'libstd'. core is what's still usable without any runtime support whatsoever, while std requires it.
I mean, all of Rust's language _features_ work without runtime support, but libraries that need tasks (threads) and task unwinding require the runtime. Those are all library features, not language features.
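To make the libcore/libstd split concrete, here's a rough sketch of the no-std end of things in today's syntax (this is current Rust, not what was available at the time of this thread; the entry-point name is just an example that a linker script would point at):

    // Opt out of libstd; only libcore (no allocation, no threads, no I/O) is linked.
    #![no_std]
    #![no_main]

    use core::panic::PanicInfo;

    // With no runtime to unwind the stack, we have to say what a panic does.
    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    // `main` is a libstd convention; on bare metal the entry point is whatever
    // the linker expects (the name `_start` here is illustrative).
    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        loop {}
    }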
Ah ok, interesting. I've been contemplating converting my toy kernel (which doesn't do all that much yet) to Rust; since it doesn't use a GC for everything, it could be fun.
For example UNIX effectively provides a certain kind of memory safety to C programs, by isolating them via VM hardware from other C programs and their memory management bugs, which is quite important in practice... calling that a language "runtime" seems appropriate, especially since there are OS architectures that don't even require VM hardware for isolation (Singularity).
You say that the runtime is disappearing. But, if LLVM is being used, then since it provides the JIT, I'd say it is a really big part of the runtime, actually.
The "VM" in LLVM is deceiving (and LLVM actually no longer stands for "low-level virtual machine"), its main use now is as an ahead-of-time optimiser/code-generator for the Clang C & C++ compiler. Rust uses it in this capacity too: optimised native code is emitted at compile time and no JITing is necessary.
Of course, LLVM can be used as a JIT (e.g. what Apple is doing with javascript), but Rust does not use or need it.
> Of course, LLVM can be used as a JIT (e.g. what Apple is doing with javascript)
Even then, assuming you're talking about FTL LLVM is "just" a codegen backend (w/ optimisations) for an existing JIT pipeline, most of the JIT infrastructure is outside LLVM.
Wouldn't a better test be to see if you can implement something which the language is meant to be good at implementing? Even though a language might be a good fit for writing compilers (shown by being implemented "in itself"), that might not have anything to do with the domain it is meant for.
The "thing" that rust is meant to be a good fit for is implementing Servo.
In principle, sure, but in practice the main reason people seem excited about Rust isn't as the language in which Servo is implemented; it's as a C++ killer (note, for instance, this article). If it's going to fulfill that role, then being good at writing compilers/runtimes is absolutely essential.
Compilers/runtimes aren't what most C++ code does, surely?
EDIT: People are excited about it because of the promise of memory-safety (fewer crashes and security issues), as a primary driver. Servo is a proof that large-scale programs can be written in this way. They're not just excited about writing things in a new language for the hell of it.
I recently started looking at Rust. I somehow had this idea in my head that it was a weird language, but up close, it is quite a friendly beast. Interesting, though.
It is indeed a weird language. I've been rusting heavily for about a week (doing Matasano crypto challenges) and I'm still puzzled at lifetime errors.
Managing lifetimes is hard to reason about (though this is not a problem with Rust, but with lifetimes... Rust just makes it explicit.)
As soon as you start using generics things can quickly get out of control. Here's one of my function signatures:
pub fn xor_together<'a, 'b, 'r, A, B, R, T, U>(iter_a: T, iter_b: U) -> Map<'r, (&'a A, &'b B), R, Zip<T, U>>
where A: BitXor<B, R>,
T: Iterator<&'a A>,
U: Iterator<&'b B>
If you get those lifetimes wrong, the error propagates and you end up having lifetime errors farther in your call chain, which are quite hard to debug. I'm not even sure those lifetimes are 100% right.
Some other weird things include closures. There are several types of closures, not interchangeable with each other (nor with fn) and honestly I don't understand them (and couldn't find docs to explain them). I hope this changes as Rust stabilizes.
Closures are soooo messed up right now, because we're in the process of replacing them wholesale but the new system is only half-implemented. As a result you're forced to choose between using closures that work but suck (the old system) or closures that don't suck but don't work (the new system). So don't judge us on closures just yet. :)
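For reference, the system that eventually shipped (shown here in post-1.0 syntax, i.e. the "new system" mentioned above) distinguishes closures by how they use their captured environment:

    // The three closure traits differ in how the closure touches its captures.
    fn call_fn<F: Fn() -> i32>(f: F) -> i32 { f() }                 // shared access
    fn call_fn_mut<F: FnMut() -> i32>(mut f: F) -> i32 { f() }      // mutable access
    fn call_fn_once<F: FnOnce() -> String>(f: F) -> String { f() }  // consumes captures

    fn main() {
        let x = 10;
        println!("{}", call_fn(|| x + 1)); // only reads `x`

        let mut count = 0;
        println!("{}", call_fn_mut(|| { count += 1; count }));

        let s = String::from("moved into the closure");
        println!("{}", call_fn_once(move || s)); // gives `s` away, callable once
    }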
I hope that when Rust hits 1.0 they take backward compatibility exceptionally seriously. The pre-1.0 path has been an exceptionally wild ride, and it has made me a bit gun-shy about the language as a whole.
I am no stranger to pre-1.0 languages, I have worked with lots of them, but none have been as frustrating (or interesting) as Rust... the question of whether Rust will grow into a production language or remain an academic toy is still an open one in my book.
It has gotten a lot friendlier-looking in recent versions, thanks to the reduction in odd type-punctuation for lifetime description. It's more uniform now.
I don't understand the problem. Rust doesn't have # for comments, and I find `#[random]` easier on the eyes than `@random`. Or are we speaking about `@[random]`? Then again, I've never been one for micro-syntax discussions.
That's not really a sensible association. A lot of work is done with those languages where shells are either not used at all, or are just a development tool (e.g. could be replaced by editor features). It seems rather strange to try to forcibly connect Python/Ruby/... and sh, rather than just grouping as languages that happen to share the # for comments. (Maybe they adopted it from shells originally, but the languages definitely stand on their own now.)
Does Rust, like Go, have tools for freezing dependencies? So when the git project changes, you're not left wondering why a project has sudden bugs not seen before?
I love the idea of how easily these can be included in a project, but I dislike leaning on a public repository that may change at a moment's notice.
As much as I hate Maven for various reasons, I do appreciate the ecosystem and the ease with which I can, say, ask for a specific version of a specific artifact, ensuring my application will always build.
I'm not a Go expert at all, but I believe Cargo (the dependency management tool) far surpasses Go in this respect.
One of the main design goals for Cargo is reproducible builds. E.g. the first time anything is built, Cargo will create a Cargo.lock[0] file that pins each dependency at the exact commit the build used, so one can come back in a year and rerun to get the same result (assuming upstream hasn't edited their history). Upgrading/changing a dep then requires explicitly calling `cargo update`.
One can also manually specify versions and exact commits[1] in the Cargo.toml (the file written to specify those deps), e.g.
[dependencies.lazy_static]
git = "https://github.com/Kimundi/lazy-static.rs"
version = "1.1"
Thanks! I would much rather be forced to specify a version than not. Consider building with more than one developer in mind. Or a developer joins the team a year down the road. Why waste anyone's time by not taking a moment to say "no, we need v1.1"?
You are effectively forced to specify a version, thanks to the lock file. It just assumes that in the moment you specify the dependency, you mean the version you currently have access to. This is the same mechanism Ruby uses with Bundler, and it is very friendly to multi-dev teams: the version they need to be on the same page with everyone else is right there in the lock file.
Is there a rationale for not just ignoring the lockfiles of dependencies rather than insisting they not have them at all? I actually have always found this duality in Bundler irritating, as when I work on a gem with someone, I still want to be able to communicate an ideal dependency state. Not having a lockfile makes this difficult.
Well, if your gem didn't work with a particular combination of dependencies, that should be reflected in the version constraints. I'm not sure what an 'ideal' state is.
I'm not 100% sure if there's an official answer, exactly, but it's more representative of the state of affairs. If you do check a lockfile in, it will be ignored.
By ideal here I mean something more like known-good. A baseline of expected behavior against which I can compare. The permutations of version combinations in a typical 'production' dependency specification can get very large, and there's just as much benefit to having devs start from the same working version for a library as for a program.
Note that it is encouraged by convention to check your Cargo lock file into your VCS if you are developing an application. This means that everyone checking out your code will build your application using exactly the same versions of the dependencies you used when you committed the lock file.
> Does Rust, like Go, have tools for freezing dependencies?
What tools are you referring to for Go here? There are a handful of extra-standard mechanisms you can use to make it somewhat better (through clever hacks on URLs and such), but the expectation in go-land seems to be that 'freezing your dependencies' means 'vendor them wholesale into your git repo', which is not exactly robust. Go's story on "application will always build" is kind of notoriously bad.
> When we pass or return an Event by value, it's at worst a memcpy of a few dozen bytes. There's no implicit heap allocation, garbage collection, or anything like that.
I'm confused by this (total Rust newbie): wouldn't pass by value imply creating a new copy of the String with its own heap-allocated buffer? What does "no implicit heap allocation" refer to?
Ownership of the String is moved, not copied, and the original variable cannot be used until it is reinitialised. Pass by value is literally always a shallow byte copy in Rust (not following pointers), and the Rust compiler must disallow further use of many variables to ensure safety.
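A small sketch of what "moved, not copied" looks like in practice (illustrative names only):

    fn consume(s: String) {
        // `s` now owns the heap buffer the caller allocated; nothing was copied
        // except the String's small (pointer, length, capacity) header.
        println!("{}", s);
    } // buffer freed here

    fn main() {
        let msg = String::from("hello");
        consume(msg);
        // println!("{}", msg); // error: value moved; the compiler rejects this line
    }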
TL;DR: Modern languages can tell when you're going to be returning a value, and just place the return in the right place, rather than making a copy that'd just get thrown away. I believe this is what they're referring to.
It also might be talking about how just the tag gets copied, and not all the values. I think.
Yeah, RVO is clear; I'm confused about the "pass by value", which I assume means as a parameter to a function, where creating a copy is unavoidable in the general case.
I have a question about 1.0. I know you can define your modules' stability, but how many of the modules that will be shipped with 1.0 will have LTS? Specifically, I am wondering, if I follow http://doc.rust-lang.org/guide-plugin.html AFTER 1.0, whether I'll still have to keep up with point releases because I am using the internal AST.
Everything has stability markers. Currently, user defined macros and syntax extensions are not marked as stable. Syntax extensions will absolutely _not_ be stable at 1.0, macro stuff is a bit less clear, expect to hear about this soon.
That post is about syntax extensions, so yes, you'll have to be using the nightly build if you want to keep using them. They'll be considered high priority to stabilize, but given that they rely on compiler internals... I actually argued against writing that guide because of this, but eventually said okay.
Syntax extensions and the compiler internals are most likely not going to be stabilized for a while. Committing to a stable compiler core is a serious effort.
Is anyone using Rust for commercial products or embedded systems? It looks like a nice language (at least to try), but I don't find it in many open-source projects. Are there any reasons besides being new and not-so-popular for that? Is it compatible with C/C++ libraries (I mean in practice, not just that they should work)?
> Are there any reasons besides being new and not-so-popular for that?
It has a level of flux that is astonishing (even compared to other pre-1.0 languages)... now, it could be argued that this is how a pre-1.0 should be, but it makes it very hard to build any serious project around it (note: a few companies have).
How stable is Rust these days? I am watching it from a distance (I have no suitable project ATM) and I had the impression that the syntax went through some major changes not too long ago (a year?). Is this a thing of the past now?
A painfully simple logger that makes use of Rust language features to ensure that it's allocated only when it is used, is RAII safe, and is mostly memory safe.
A lot of boilerplate C or C++ code was eliminated, and while we never see the actual assembly, we're told that it would generate similar instructions to the C/C++ implementations.
So we should get pretty good performance out of it!?
I mean, it'll produce similar assembly to C++ code that does the same, but grabbing a mutex for every log message is not going to give you amazing performance. That's okay in this example, because the log seems to be compiled out by default and only used for debugging Servo, but if you're gunning for performance I'd suggest using thread-local storage to store logs per thread without any atomics and aggregating on a timer.
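A rough sketch of the thread-local approach being suggested (the names are made up, and a real implementation would still need the timer-driven aggregation step):

    use std::cell::RefCell;

    thread_local! {
        // Each thread appends to its own buffer; no mutex, no atomics.
        static LOG_BUFFER: RefCell<Vec<String>> = RefCell::new(Vec::new());
    }

    fn log(msg: &str) {
        LOG_BUFFER.with(|buf| buf.borrow_mut().push(msg.to_string()));
    }

    fn drain_local_log() -> Vec<String> {
        // Called periodically (e.g. on a timer) to hand this thread's buffer
        // to whatever aggregates and writes the log.
        LOG_BUFFER.with(|buf| std::mem::take(&mut *buf.borrow_mut()))
    }

    fn main() {
        log("worker started");
        log("worker finished");
        for line in drain_local_log() {
            println!("{}", line);
        }
    }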
True, and it can deadlock (which the author states is a possibility if one thread never releases the mutex)! Either way, it is a very lightweight implementation, and Rust does shine here.
I just hope the Rust crowd doesn't screw up.