There are many things I like here. I'd like more information on how they intend to integrate sizes into a Hindley-Milner type system (particularly the dynamic sizes), since that is something I am struggling with myself[0].
I’m optimistic that we’ll eventually be able to figure out how to build practical tools for deep learning (and scientific computing more broadly) based on these sorts of ideas. The biggest challenge may be trying to fit them into Python syntax!
To expand a bit, as I understand it, Remora gives a type system (and type inference) to APL- and J-style reranking, which is called rank polymorphism in Remora. The key thing is that type inference in Remora is based on Presburger arithmetic (0,+), which is decidable, unlike the stronger but (for their purposes) overly powerful full Peano arithmetic (0,1,+,*).
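To make the size-typing idea concrete, here is a minimal Haskell sketch (my illustration, not Remora or Dex code): vector lengths are tracked in types using only zero, successor, and addition, so every constraint the checker has to discharge stays within Presburger arithmetic.

```haskell
{-# LANGUAGE DataKinds, GADTs, TypeFamilies, KindSignatures #-}

-- Type-level Peano naturals: only zero and successor, no multiplication,
-- so size constraints stay within decidable Presburger arithmetic.
data Nat = Z | S Nat

-- Type-level addition, defined by recursion on the first argument.
type family Add (n :: Nat) (m :: Nat) :: Nat where
  Add 'Z     m = m
  Add ('S n) m = 'S (Add n m)

-- A vector whose length is part of its type.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Appending adds the lengths; the checker verifies this by reducing
-- the Add type family as it matches each constructor.
append :: Vec n a -> Vec m a -> Vec (Add n m) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

toList :: Vec n a -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

main :: IO ()
main = print (toList (append (Cons (1 :: Int) (Cons 2 Nil)) (Cons 3 Nil)))
```

The limitation the original comment alludes to shows up as soon as sizes become dynamic: a length read from a file has no compile-time `Nat`, which is where existential or dependent sizing schemes come in.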
On the other hand, Aaron Hsu argues that APL already provides the development ergonomics that are typically ascribed to type systems. That is, why type-check your tacit function when its implementation is about the same length as its type annotation?
Anyway, as someone trained in category theory and a huge fan of type systems, I found Hsu's argument surprisingly persuasive.
I would love to hear any counterpoint arguments and the motivations behind attempts at giving J a type system.
Having used Scientific Python and R for many years (with dabbles in C++ and Julia), I recently started learning Haskell so my code could look like what Dex has in their samples.
I really want a language like this to go mainstream.
They are very different. Julia is a multi-paradigm, dynamically typed language at its core, with heavy inspiration from the Lisp family. The type system is designed for specialization and dispatch, meaning that if you add two variables, the compiler will look through hundreds of possible implementations of addition for the one that best fits the combination of arguments, with increasingly generic and less performant options as fallbacks, until it is unable to match any implementation and fails (at runtime, while the program is running).
Dex is a functional, statically typed language inspired by the ML family. That means type errors are caught at compile time, which gives a guarantee that once the program starts running there will be no failure from a missing implementation of a method for the given types.
The main similarity between the two languages seems to be high-performance native multidimensional array support.
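To make the static/dynamic contrast concrete, here is a minimal Haskell sketch (my illustration, not Dex code): a vector tagged with a type-level length, where a shape mismatch is rejected before the program ever runs rather than failing mid-execution.

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

import GHC.TypeLits (Nat)

-- A vector carrying its length as a phantom type-level number.
-- (For brevity the tag comes from the annotation; a real library
-- would use smart constructors to tie the tag to the actual length.)
newtype VecN (n :: Nat) = VecN [Double] deriving (Eq, Show)

-- Elementwise addition only type-checks when both lengths agree.
addV :: VecN n -> VecN n -> VecN n
addV (VecN xs) (VecN ys) = VecN (zipWith (+) xs ys)

main :: IO ()
main = print (addV (VecN [1, 2] :: VecN 2) (VecN [3, 4]))

-- The following would be rejected at compile time, before anything runs:
--   addV (VecN [1, 2] :: VecN 2) (VecN [3] :: VecN 1)
```

In Julia the analogous mismatch surfaces only when that line of code actually executes, which is the trade-off the comments above are describing.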
Dex looks very different from Julia, particularly in the type system. Julia is fundamentally similar to a cleaned-up Matlab or R, with some extra tricks on top (e.g. multiple dispatch and macros), and a very good and polished implementation.
If you're interested in these kinds of features but aren't up to learning Haskell, we've been writing a DSL for type-safe array programming in Kotlin. It can express certain type-level operations, such as shape checking and inference for multidimensional arrays (under certain constraints). You can read more about our approach here: https://github.com/breandan/kotlingrad
Surprised this comes from Google. I thought they were only interested in "practical" languages like Go. I wonder if anybody at Google dislikes Go.
And this is Google AI Research. Perhaps what's practical in that field (e.g. automatic differentiation for machine learning) is not practical in other fields (infrastructure systems, mobile platforms, ...).
Sure, but it is entirely possible that subcultures of a large company like Google converge to different targets, given very different requirements in very different fields run by very different people.
How is this unreasonable? Companies have cultures that are heavily promoted. Yes, 100k people have diversity, but cultures promote singular opinions as well.
I simply made the mistaken assumption that functional programming and type theory weren't on Google's radar; how is that an unreasonable assumption?
> Companies have cultures that are heavily promoted. Yes, 100k people have diversity, but cultures promote singular opinions as well.
Google's culture doesn't seem to produce singular opinions even in the solution to very narrow problems (hence why they often put out two or more competing things in the same or overlapping domains), much less broad questions like “what kinds of programming languages are useful”.
Google hires smart academics to do cutting-edge research; it's unreasonable to assume they forget the stuff they already learned and don't keep up with the outside literature.
You can safely assume that any of the stuff we commonly talk about on Hacker News is known to many Googlers.
I never assumed anything you said. I assumed that their culture promotes certain philosophies, like the philosophy behind Go, which is very much the antithesis of something like Haskell.
You will note that in the design of Go, Rob Pike actually admits that many of the flaws in Go were in response to things he didn't know, which makes your last statement categorically wrong. I would even argue most Googlers aren't too familiar with ML-style languages or algebraic data types.
You use "antithesis" like there's a contradiction, but this dissolves because we are talking about different people.
The people behind Go promote Go, and the people behind Dart promote Dart, and the people working on Tensorflow promote Tensorflow. That's what they're working on. This doesn't require consensus about languages, or even philosophies behind languages.
So, pointing to Go and saying it contradicts some other language's philosophy? There is no contradiction. They are different people. And even the same person could agree that each language has different strengths.
(Also, "known to many Googlers" doesn't imply "known to all Googlers", so pointing out someone who didn't know something isn't a counterexample.)
>The people behind Go promote Go, and the people behind Dart promote Dart, and the people working on Tensorflow promote Tensorflow. That's what they're working on. This doesn't require consensus about languages, or even philosophies behind languages.
And I assumed that it did. This is not an unreasonable assumption, given that none of these languages/frameworks are functional or use ADTs.
>So, pointing to Go and saying it contradicts some other language's philosophy? There is no contradiction. They are different people. And even the same person could agree that each language has different strengths.
Literally my main post was about how my assumption was incorrect. Either way, you cannot say for sure that this culture does not exist unless you've worked for Google and talked to those people. It probably does, and Dex may just be an exception.
>(Also, "known to many Googlers" doesn't imply "known to all Googlers", so pointing out someone who didn't know something isn't a counterexample.)
Pointing to a leader who controls entire teams of people, a language designer who doesn't know many of these things, is supporting evidence. Leaders establish culture in a company, which can be described as a type of consensus within a diverse group of people.
>You use "antithesis" like there's a contradiction, but this dissolves because we are talking about different people.
I am talking about culture, NOT people with differing opinions. Culture is a singular thing, sort of like how Google is a singular thing: an abstraction. I made an assumption about the culture, and it surprised me when something contradicted that assumption. The problem in your universe is that such assumptions should never be made. The assumption is reasonable because you haven't presented any information that would prove it absurd to the point of no return.
Your arguments imply that abstractions don't exist, that everything is made up of components and parts, and that assumptions about abstract groupings of said components are all "unreasonable" because abstract groupings are made of components and components have the potential to be different. You imply that components can never be the same, and thus any assumption that they are is incorrect.
That is the essence of your argument and it is fundamentally wrong to the point of complete absurdity.
I think we were at cross-purposes because you want to know whether your assumptions were reasonable, not whether they were false. Sorry about that, I wasn't hearing you.
It's true, Hacker News can be harsh sometimes, and I'm sorry you got downvoted so much. I don't think your assumptions were absurd.
But maybe there are better assumptions to make? Can we talk about that instead?
I'm not saying abstractions don't exist, but I would suggest taking some of them less seriously. In particular, I take the idea that companies are singular much less seriously. It's a show they put on for the convenience of the outside world and it takes effort to maintain. Internal politics is often hidden away. It's a leaky abstraction and often breaks down. (Sometimes literally through people leaking.)
Also, some things they don't try to standardize. Diversity of opinion can be fine.
Research, in particular, works that way. It would be weird to try to squelch diversity of opinion in research, where the whole point is to explore new ideas. Giving researchers the ability to publish their ideas, even those the company ends up not supporting, is a good way to recruit researchers.
[0]: https://futhark-lang.org/blog/2019-08-03-towards-size-types....