Nice! I like the goals of a "simpler Haskell" for small projects (see https://github.com/taolson/Admiran). Some questions that weren't answered in the blog:
is the evaluation model call-by-need (lazy, like Haskell) or call-by-value (strict, like most other languages)?
how is memory allocation handled? (I assume GC via the underlying JavaScript implementation.)
will it be open-sourced at some point?
a major benefit of immutable definitions is that they are always initialized; however, the type declaration format potentially opens things up to a use-before-def bug if the type declaration brings the variable name in scope. How is this handled in your implementation?
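To make that last question concrete, here's a rough Haskell analogue (illustrative names, not the blog's language): in a lazy, mutually recursive scope a textual use-before-def is harmless, but a genuine cycle only blows up at runtime, which is roughly the hazard a separate type declaration can reintroduce if it brings the name into scope before its definition.

    -- Haskell analogue only; the blog's language may behave differently.
    example :: Int
    example =
      let y = x + 1  -- uses x before its textual definition; fine under laziness
          x = 41
      in  y          -- evaluates to 42

    cyclic :: Int
    cyclic = let x = x + 1 in x   -- type-checks, but forcing it gives <<loop>>

    main :: IO ()
    main = print example          -- prints 42; 'print cyclic' would loop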
Good luck on the continued progress of your project; it can be deeply satisfying!
I love that you are using colon for the assignment operator. This is absolutely correct. Most languages use the equal sign as the assignment operator in most contexts and then the colon in limited contexts. This comes from Fortran, and it's just wrong. The equal sign should be reserved for comparisons, because that is what it means in mathematics.
To push back a little: Designing the syntax of a programming language always involves tradeoffs; nothing is "just wrong". For better or worse, equal sign as assignment has become widely used and understood. I think the author's use of colon is neat, but it is confusing if you're used to seeing that as the type indicator; again, a tradeoff. I like the look of := as the assignment operator but it adds another shifted key to type, which can push you that much closer to RSI for such a common operation.
> The equal sign should be reserved for comparisons, because that is what it means in mathematics.
This is touching on a pet peeve of mine: Mathematics and programming are similar in many aspects, but this is not one of them. In mathematics = is not a comparison, but a statement.
More generally, mathematics is about tautologies, that is, statements that are always true. In programming, a comparison is evaluated to either true or false.
That doesn’t mean that there’s no room for conditionals in mathematics (one example is piecewise function definitions). But it’s not the same. Heck, even the definition of “function” is different between mathematics and programming.
Seconded. Languages could even use "function" only for pure functions and "procedure" for everything else. Pascal uses "procedure" for things that don't return a value, but I think the pure vs. side effect distinction is more useful.
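As a rough Haskell-flavored sketch of that split (illustrative names, not a proposal for any particular language): pure values get plain function types, while effectful "procedures" are marked by IO.

    -- "function": pure, result depends only on the argument
    double :: Int -> Int
    double n = 2 * n

    -- "procedure": effectful, sequenced in IO
    greet :: String -> IO ()
    greet name = putStrLn ("hello, " ++ name)

    main :: IO ()
    main = greet "world" >> print (double 21)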
In languages that have block scope, a procedure is that block. It’s a boundary, but it receives no input and does not return output. Functions do take input and do return output. This distinction is clear in C, which has both functions and procedures.
As a new language-design feature, procedures could be assigned to references for reuse, in the same way a function can be called by name or through a variable.
I've never heard of blocks in block-scoped languages being called procedures. I feel like the word "block" is well understood and serves the purpose fine. In lots of other languages, things called procedures do take input, like Ruby, which has first-class Procs like you mentioned.
> I love that you are using colon for the assignment operator. This is absolutely correct.
Historically, the distinction between an equality operator and an assignment operator has existed for many decades. Whether using colon for the latter is "absolutely correct" is the subject of valid debate.
An equally compelling case could be made for any of the following assignment operators as well:
foo <- 42
foo -> 42
foo <= 42
foo => 42
foo <-> 42 if expressing substitutability is important
foo := 42 Pascal lives!
(foo 42) so does LISP!
foo is 42 for those who prefer English operators
f(foo) = 42 for mathematically inclined languages
Personally, I think this argument only holds water for languages that are rooted in mathematics (e.g. Haskell, Lean, Rocq, F*, ...). If your computational model comes from a place of physical hardware, instructions, registers, memory etc. you're going to end up with something very different than an abstract machine based on lambda calculus. Both valid ways to design a PL.
I’d argue there’s a very big difference between “x: y” and “x : y”. I can only see the former as assignment and the latter as the has-type relation.
(I find it baffling in the extreme that in many mainstream languages the convention is to write type annotations as “x: T”, both prima facie and because in those languages the notation then collides with field assignment!)
Mathematics, famous for its consistent, readable notation. The next idea should be to stop declaring variables at all and just assume they mean what they usually mean in similar programs.
Starting array indexes from 1 instead of 0 is another one, although that was already tried :)
It's not a comparison either. OP's statement is just wrong; the obvious meaning of = is equality. For the definition of a pure function, = is the obvious symbol to use since the RHS can be substituted in for the LHS in any context.
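A small Haskell sketch of that substitution property (referential transparency), with made-up names:

    square :: Int -> Int
    square n = n * n

    -- Because the definition is an equation, 'square 3' and '3 * 3' are
    -- interchangeable anywhere they appear:
    main :: IO ()
    main = print (square 3 + square 3, (3 * 3) + (3 * 3))   -- (18,18)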
-> for definition is just annoying. On every keyboard layout, it takes two keystrokes to type, possibly with shift involved.
Use ≟ for comparison. I don't know where to find it on my keyboard, though. Maybe language designers should invent a keyboard along with their language.
I'd like to see language designers make function calls happen in the order you read them.
For example, most languages use a notation like f(g(h(x))), where the functions are called in the exact opposite order from how you read them, which is unnecessarily confusing.
You would probably enjoy the UFCS https://en.wikipedia.org/wiki/Uniform_function_call_syntax of Nim, D, etc. Basically `x.h().g().f()`, or in Nim you can drop the parens and say `x.h.g.f` { but it may not scale past a single argument }. This tends to be only "an option" - more on the "allow/enable" side than the "force them" to do it side, though.
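Not UFCS, but Haskell gets the same left-to-right reading order from the reverse-application operator (&) in Data.Function; a minimal sketch with placeholder functions:

    import Data.Function ((&))

    h, g, f :: Int -> Int
    h = (+ 1)
    g = (* 2)
    f = subtract 3

    main :: IO ()
    main = print (5 & h & g & f)   -- same as f (g (h 5)); prints 9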
Even this is wrong. Programming should be explicit and easy for anyone to read.
c = a + b;
is bad since it's an assertion that c is equal to the sum of a and b, and before it's executed, it's completely untrue. Not only that, but it's math, and as Barbie has taught us, math is hard.
Nope, make it explicit. You're moving the value of an expression to a placeholder. The only sensible way to write that would be
ADD a TO b GIVING c
Why, with such a simple and obvious English-like syntax, even a common business-oriented person could use the language.
This is great! I love projects like this! I have a hobby of making toy programming languages. (Not arguing that's what yours is.) I once made a compiler, but lately I'm satisfied with making interpreters: executable ASTs with moderate rearranging capabilities. It is an addictive thing.

It started when I wrote a query language for my graph database server back in 2006. I didn't know about property graphs when I started, but that's what it ended up being, and I had to make a query language for it. (I wanted to call it AQL for "associative query language", but it turns out that acronym was already taken. I can't remember by what.) I learned how to do it from the dragon book -- cuz I'm old.

Anyway, you struck a chord with me. Making a syntax highlighter that's able to resume on errors is an interesting challenge that teaches you a lot about both sides, the front end and the back end.
Very cool. I think everyone should try making all of these foundational technologies themselves: A programming language, an OS, a bootloader, etc.
It's one thing to read about them in books, but you learn so much nuance by actually stepping through the inherent problems and difficulties in actually making them work.
This sounds good in theory, but a jack of all trades is a master of none. I once spent a semester building an interpreter and compiler for very simple LR(1)-parsable languages. I learned a lot, but that knowledge fades rather quickly, and despite this significant investment of time, I never came close to being able to touch anything production-worthy.
Modern tech is too advanced to meaningfully learn about it by tinkering with small toy projects. These "foundational technologies" are gigantic specializations that require an extraordinary investment of time to master.
Looking back now over the years, I wish I tinkered less and narrowed my focus more.
Why? It's a good grammatical equivalent to the full stop for the programmer. It can serve as useful context for the compiler. And it's only one character. Antagonism over semicolons is another strange symptom of conciseness at all costs. If you want APL, just use APL.
That’s why, when we write, we use commas and periods. They tell the reader when a thought ends and the next begins. The semicolon is the traditional period of programming. Not everything fits on one line. Python managed to pull it off, and now everyone thinks it’s the right way… it’s just “a way”, but by no means modern or right. JavaScript made them optional, but that results in ambiguous parsing sometimes, so it’s not a good idea there either.
In any case, I doubt a run-on sentence is “meaningless”, but it is hard to parse.