GitLab is a lot like Firefox, or "Linux on the desktop" in that way. It's what a lot of us want to use, but the less-open but more-polished option has always seemed the more pragmatic choice.
But that can change.
Recently (just as most of the world has apparently moved on from desktop computing, haha) Linux is pretty much fine for the traditional desktop computing. I have current Mac, Windows, and Ubuntu machines on my desk, and they are essentially interchangeable except for a few special-case purposes (say, Final Cut video editing, or opening one of those weird wonky Excel sheets that only open on Windows).
Firefox, too, is suddenly performant and I've switched to it as my main browser (for default website use) — something absolutely, utterly unimaginable two years ago.
I hope that GitLab is reaching the same kind of transition point. I've heard horror stories from people that used it 2-3 years ago and are happy to be back on GitHub. But I don't hear much recent grumbling. I moved a toy project to it and it seems nice. As fast as GitHub for me (though I am in Japan, and GitHub is slow in Japan). More features. A nice, sort of adult/professional aesthetic. And — yay! — the open source core it's always had.
I might be wrong, since I haven't used it for reals, but it looks like they might have hit that critical usability threshold?
Open source and worse isn't a very compelling sales pitch. But an open source tool that is broadly equivalent to a closed source one is generally more attractive, especially when you're talking about services that will be used as part of your core infrastructure.
As a faithful Fx user since 1.5, I'm not sure about the Firefox analogy.
Its market share was actually much higher in the past and has only gone down (the introduction of Quantum didn't really help market-share-wise).
And in terms of polish... Firefox was MILES ahead of any other browser for a very long time. When Chrome launched, it was as horrible as you can imagine in terms of functionality, while Firefox at that time was already as solid as it is today. But Chrome advanced very fast. People often attribute Chrome's success (merely) to Google's push, but IMO Chrome's technological development played a bigger role.
I'd argue Firefox was easily far more fleshed out, but Chrome was simply faster when it first came out, and I made the switch. It turned out all those features didn't mean jack when a faster option was out there. Of course, Chrome has added a lot of features now... and Firefox has gotten vastly faster than it used to be.
When I left Chrome a few years back, the sluggishness of Firefox was quite palpable. I eventually moved to Edge, which, while crashier, overall performed better. But after both the Electrolysis project, and Quantum, Firefox won me back quite solidly.
Long story short: Speed > features when it comes to web browsers.
I share the same thought. I have been using Edge on Windows since it came out, simply because it feels so much faster and smoother. However, it has such a high tendency to glitch out and freeze that sometimes I wonder how Microsoft can release such a glitchy browser and expect users to use it as their main browser.
In 1998 I purchased a Windows computer for the first time in 14 years. My thoughts were "They've had 10 years. Surely they've worked their bugs out". As it turned out, Windows 98 and Internet Explorer 4 actually did have some stability problems, to put it mildly. So I would say that releasing very buggy software without blinking is not a new concept to Microsoft.
> "They've had 10 years. Surely they've worked their bugs out".
That's quite a weird thing to think in 1998 with respect to a browser, given that they'd only had any sort of a browser for less than 3 years at that point.
The question was posed: "I wonder how Microsoft can release such a glitchy browser and expect users to use it as their main browser."
To which I gave a related anecdote and responded
"So I would say that releasing very buggy software without blinking is not a new concept to Microsoft."
Since they released an entire OS that was massively unstable (win98), while claiming their browser was an integral part of it, it doesn't surprise me that they would release a buggy browser at a later date and expect people to use it.
I wasn't only addressing browsers, but rather my perception of the general quality history of Microsoft products at that time. It affected my perception of Internet Explorer, which at that time was not as popular as Netscape Navigator.
When Edge actually works, it's wickedly fluid. That's IF it actually works, though. I haven't been using Windows much lately, so I don't know the current state of Microsoft Edge.
It usually works. When it doesn't, it really doesn't, though. And there are some annoying quirks: for instance, Edge always restores tabs after a crash, and you have to literally create nonexistent registry keys to disable that functionality.
The reason this is a problem is that when a malicious webpage hijacks your browser and you have to forcibly close it to escape... Edge helpfully reopens the malicious webpage each time you relaunch Edge, until you find out you need to hack on the registry to fix it.
I often try to use Firefox but there are certain things that just became irritating...that were entirely Google's fault. Like Google Meet ONLY working on Chrome.
You click enough links in Slack that open in Firefox and don't work, and then you have to copy-and-paste the Meet link into Chrome... eventually your commitment wears down and you just start using Chrome as the default again.
And I hate that. It's a very IE6-type move on Google's part. Short of applications / my system giving me a clear way to say "always open links on this domain in this browser", it makes the workflow a pain.
Maybe the Facebook stuff will make that type of thing more popular. If I could make Firefox my default browser, but always open Facebook and Google links in Chrome I'd be pretty happy. Currently running Linux Mint.
Chrome made a big thing about unit testing at the beginning of their development. I'm not sure if their definition and my definition of "unit test" matches up, or even if they still have that drive, but I've often wondered how much that has made a difference in their development process. My own experience has been that an early commitment to unit testing can make a massive difference to sustained development pace (pays off more as time goes on). My inner confirmation bias would love it to be true :-)
The interesting thing is (and without meaning to denigrate your point of view) I actually think that automated regression testing is not particularly valuable when you have unit testing. I actually want unit testing for the ability to help you reason about the code. One of the things that I would dearly like to discover is whether or not my view holds any water. It certainly feels that way from my perspective, but I'm biased ;-)
However, I think you are probably correct in that I think it is far more likely that Chrome has a good automated regression test suite than it has a good unit test suite.
Libraries benefit very heavily from unit tests; built applications or frameworks derive more benefit from regression tests.
For a small unit of code, or a library, the unit tests effectively prove that the code/library does what it says on the box.
For a continuously worked-on application, regression tests hold the guarantee that the application continues to work correctly for whatever thousands or millions of use cases have built up over its lifetime - even when the implementations, algorithms and libraries used change underneath.
Good regression testing is a nice way to keep unit tests honest. If you never catch anything with your unit tests but catch boatloads in regression testing, you are inclined to pause and ponder.
I actually test my unit tests by changing the behaviour and measuring whether the tests fail :-). A great metric is fuzzing the behaviour and measuring the number of unit tests that fail. If it's a lot, then you can improve your unit test game :-)
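To make that concrete, here's a rough sketch of the kind of thing I mean (Python, with a toy `add` function and hand-written "mutants"; real mutation-testing tools automate this, so treat the names and structure as illustrative only):

    # Toy "production" function we want to clamp with tests.
    def add(a, b):
        return a + b

    # Unit tests expressed as plain callables returning True/False.
    def test_adds_small_numbers():
        return add(2, 3) == 5

    def test_adds_negatives():
        return add(-1, -1) == -2

    TESTS = [test_adds_small_numbers, test_adds_negatives]

    def count_failures(mutant):
        """Run the suite against a mutated implementation and count failures."""
        global add
        original, add = add, mutant
        try:
            return sum(1 for test in TESTS if not test())
        finally:
            add = original

    # "Fuzz the behaviour": swap in deliberately broken implementations.
    mutants = [
        lambda a, b: a - b,      # wrong operator
        lambda a, b: a + b + 1,  # off-by-one
        lambda a, b: 0,          # constant result
    ]

    for i, mutant in enumerate(mutants):
        failures = count_failures(mutant)
        if failures == 0:
            print(f"mutant {i}: survived -- nothing noticed, so there's a coverage gap")
        elif failures == len(TESTS):
            print(f"mutant {i}: every test failed -- probably a lot of duplication")
        else:
            print(f"mutant {i}: caught by {failures} test(s)")

If a single behavioural change knocks over half the suite, the tests are probably duplicating each other; if nothing notices, there's a gap.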
The problem with unit tests is that they test things you know are problematic. The larger issues are the ones you were never aware could be a problem in the first place.
I highly recommend reading Michael Feathers' Working Effectively with Legacy Code. He has the best description of unit tests that I've seen. Briefly, he describes it like a clamp. When you are working on a woodworking project, you clamp part of your project so that it doesn't move. Then you work on the bit you are interested in. Later you clamp that part and work on another part. The purpose of unit tests is not to test the behaviour (unfortunate nomenclature aside) -- it's to immobilise it. This allows you to work on another part of the system and be alerted if you've caused something to slip.
Acceptance tests are incredibly important. They tell you if the system is working. No amount of unit tests are going to help you with that. Once you have accepted the behaviour, what you're really interested in is whether or not the behaviour has changed. You do not need your acceptance tests for that -- your unit tests will tell you.
I'll write it a bit more concisely because I think it is important: acceptance tests tell you whether or not the code is working correctly. Unit tests tell you whether or not the code is doing the same thing it was doing the last time you ran the tests.
The reason I don't favour a large suite of acceptance tests is because they are unwieldy. It's fine for a few months, but once you get a few tens of thousands of lines of code, you will end up with a lot of acceptance tests. These acceptance tests are extremely hard to refactor. It's extremely hard to remove duplication. Over time, they get more and more problematic until you are spending more time trying to figure out how to make your acceptance tests pass than you are trying to figure out how to make your production changes.
Unit tests, when written in specific ways, have less of a problem with this. Some people think about a "unit" as being a class. But really a "unit" is anything that you might want to isolate in your clamp. It can be a function. It can be a class. It can be a module. Your unit tests should probe the behaviour in the unit (and by "probe" I mean, expose the internal state). Michael Feathers has a great analogy of a "seam" which runs through your code. You try to find (or make) that seam and you insert probes to show you the state in various circumstances.
IMHO, you should write unit tests the exact same way you write any code. Your "circumstances" (or scenarios, I guess) consist of creating the data structures to give your initial state. Your "tests" consist of probing the state along the seams and comparing it to expected values. This is simple code. You should be able to maintain this code using the same tools you use to maintain any code. You should write functions. You should write classes. You should write modules. You should use all the tricks of your trade to reduce the complexity of your "test" code. Your goal is to create specificity when tests "fail" (the probe detects behaviour different than your expectation -- or the clamp detects that your wood has slipped). When behaviour changes, only a few tests (ideally one) should "fail". It should report the "failure" in a way that immediately describes the difference between the state you expected and the state that you probed. It should be easy to change the probe when the behaviour is intentionally changed (ideally changing only one place). It should be easy to probe new behaviour (just build your data and add an expectation). Finally, it should be easy to reason about the behaviour of the code by reading the "tests". Refactoring your tests and removing duplication is very important here.
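If that sounds abstract, here's roughly what I mean in code. This is a made-up example (Python's unittest, with an invented apply_discount function), but it shows the shape: build the initial state with helpers, probe the result along the seam, and compare it against the behaviour you accepted the last time you looked:

    import unittest

    # Invented example: a tiny pricing function whose behaviour we want to clamp.
    def apply_discount(order, percent):
        """Return a new order with the discount applied to each line item."""
        factor = 1 - percent / 100
        return {
            "customer": order["customer"],
            "items": [
                {"sku": item["sku"], "price": round(item["price"] * factor, 2)}
                for item in order["items"]
            ],
        }

    class ApplyDiscountClamp(unittest.TestCase):
        """Each test builds a small initial state, probes the resulting state,
        and compares it to the behaviour accepted the last time we looked."""

        def make_order(self, *prices):
            # Helper that removes duplication from the test-data setup.
            return {
                "customer": "c-1",
                "items": [{"sku": f"s-{i}", "price": p} for i, p in enumerate(prices)],
            }

        def test_discount_is_applied_per_item(self):
            result = apply_discount(self.make_order(10.0, 20.0), percent=10)
            self.assertEqual([9.0, 18.0], [item["price"] for item in result["items"]])

        def test_customer_is_passed_through_unchanged(self):
            result = apply_discount(self.make_order(10.0), percent=50)
            self.assertEqual("c-1", result["customer"])

    if __name__ == "__main__":
        unittest.main()

When the discount behaviour intentionally changes, ideally only test_discount_is_applied_per_item needs touching, and its failure message tells you exactly which probed value slipped.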
As for acceptance tests, like I said, they are incredibly important. What I don't find particularly useful is a large suite of regression acceptance tests. The unit tests already tell me when the behaviour has changed. When written well, they even tell me exactly where in the code the behaviour "slipped". I often write manual acceptance tests. Once I have tested it, it is not necessary to test it again (as long as I have a good unit test suite).
My personal opinion as to why people find automated acceptance tests suites important is because they have never worked with a good unit test suite before. There is a general lack of experience in the industry with these concepts. Quite a few people's experience with well tested code is with green field projects. Often these people leave after a year or so. It's not until you have a lot of experience working with the legacy of various testing techniques that you can understand the advantages and disadvantages. I think this is why Michael Feathers is so respected -- as far as I can tell he specialises in legacy code.
Having said all that (and I'll be surprised if you make it to the bottom :-) ), I do value a small automated acceptance test suite. It's my canary in the mine. If it ever fails, then I know I've really stuffed something up and I launch an immediate investigation. Also, there are some things that can't be unit tested effectively (for example, testing a web application across both client and server) -- you end up faking the boundaries, which leads to the possibility of skewing. Again, in those cases, I try to find a few end-to-end tests that will hit the majority of possible problems.
I hope you found that interesting. I've typed up essentially this same message in at least 10 different threads over the past couple of years. I think it's slowly getting better, but I think I still haven't managed to explain the concepts as well as they need to be explained. If you've made it this far, thanks for reading :-)
I'm not sure if it covers the use case you're looking for, but they've already introduced the APIs necessary in Nightly (or Beta?) to allow the Panorama Tabs extension to be re-introduced :)
I have been using GitLab heavily for the last year and in that time nothing terrible has happened. The worst thing was that for one day it was pretty slow, but that was fixed the next day and no data was lost.
GitLab today, in my opinion, is a better piece of software than GitHub.
I've been using Linux on the desktop for over 15 years with no intention to ever stop in the near future.
I had been using Firefox since its first release, under the names Phoenix, then Firebird, then Firefox (I had been using the Mozilla suite before that), and I stopped with no intent to ever come back after Mozilla finally killed, through a lengthy agony, what made Firefox useful to me. (BTW, the claim that Firefox wasn't performant enough two years ago is totally unsubstantiated: I used it daily with over 250 tabs open concurrently without a single hiccup, despite having 35 extensions loaded, as they were required to put back the useful features Mozilla had removed, to remove the unwanted cruft Mozilla had added, and to add the necessary features Mozilla refused to add.) I have now switched to Waterfox, and its name says it all.
So really comparing gitlab to linux on the desktop means gitlab will never happen, and comparing gitlab to firefox means it will be mishandled into irrelevance by a shady finance operation aiming for market domination.
To me, GitLab has always seemed a better alternative to the proprietary and centralized GitHub, which would get bought at some point in the not-so-distant future; that has been my stance on this matter. That Microsoft is the one buying would not have been my first bet, but it is not a huge surprise either, considering their change of PR to jump on the open source bandwagon as an attempt to extend their agony further.
> Recently (just as most of the world has apparently moved on from desktop computing, haha) Linux is pretty much fine for the traditional desktop computing.
It's been fine for the past decade, it's just the trope of "Linux on the desktop" that's slow to die.
Eh... it's mostly fine. It's just that when something doesn't work, you're screwed if you're not technically confident enough. Emphasis on confident - you have to trust your instincts when digging through internet fora for a solution. Something that I can never see my mother doing, for one.
>Eh... it's mostly fine. It's just that when something doesn't work, you're screwed if you're not technically confident enough.
As opposed to Windows or OSX where you're just screwed.
There are rough edges in Linux on the desktop, but people seem to be completely house-blind* about Windows and OS X. If you spend a lot of time on Linux and go back to Windows or OS X, the rough edges in those platforms become immediately obvious.
* If English isn't your first language, house-blind is where you get so used to something being out of place that it becomes part of the decor. e.g. a jumper on the back of a chair that stays there for a week+.
But we have to put this in the context of an elderly lady who is terribly frightened of breaking the computer and always believes it's her fault when software screws up, because the feedback loop of computers is terrible (either absent, or opaque jargon, or marketing lies). That is, my mother.
And keep in mind that I live abroad. Helping her out remotely is a very difficult and slow process - if I lived close by the story would be different. In that context, when it comes to Windows or OSX, my mother has a lot of people who can help her out other than me - my sisters, my father (they're separated but still get along), some of her friends.
Now, my younger sisters are getting into programming (because all professions need it) so maybe I can get them into Linux too - they're definitely capable but the question is whether they consider it worth the investment. But still.
I'm not convinced. Chromebooks seem to be popular enough and simple enough for luddites. It's certainly a flavour of Linux, despite being a bit locked down afaict.
Maybe we're so used to expecting Year of the Linux Desktop to mean Year of the FOSS Linux Desktop that we ignore the successes.
> As opposed to Windows or OSX where you're just screwed.
Let’s not be disingenuous. I don’t know about Windows, but macOS has had absolutely wonderful critical failure recovery for a while now: There is the recovery partition, which acts like a mini-macOS and lets you do various things like drop into Terminal.app, use Disk Utility for drive scanning and repair, or do an ‘archive and install’ (extremely useful for the technically challenged) which keeps all your files but sets up a fresh macOS install. If even the recovery partition is borked you get the option of ‘Internet Recovery’, which connects to WiFi and automatically downloads and installs a fresh copy of macOS (with the aforementioned archive function, if an old install is detected).
Compare this to Linux, where you either get dropped into GRUB or a bare shell..
I quit Microsoft for Linux 8 years ago. I have been doing remote administration of the computer I installed for my elderly mother for 5 years now. At home there are 3 Linux computers, one of which dual-boots to Microsoft. My cloud website is on bare-metal Scaleway running Ubuntu. Over the last year, I have spent more time on system administration for Windows (which I use once a month, to run one application for 5 minutes) than on administration of the 5 Linux systems.
If someone asked me to help with the administration of their Linux machine, I would accept, because it is so easy and so little work compared to Windows. I think Linux is perfect for the noob who is willing to delegate administration.
I'm happy for you and your mom! But you cannot compare that to the situation of my mother - I'm trying to make her more confident but it is a very, very slow process.
It's not that I'm not willing to help, but I live abroad. If I lived close by, I would gladly install something like Ubuntu or KDE Neon on her machine (probably Ubuntu though - the mainstream would make it easier for her to find things on her own).
Whenever I'm home I help her with her computer. The whole thing is very educational for me as an interaction designer as well. It often shows how modern interfaces make her feel like she is the dumb one, when honestly it's often the arrogant UI designers who think everyone is on board with modern UX paradigms. Or worse, abuse dark UI patterns for evil purposes.
Ubuntu GNOME is closer to the "simplicity" of Windows XP than Vista, Win7, or Win10 are. Windows' changes are too big and too fast for older users. My mother is still on Ubuntu 14.04 to keep that stability.
Honestly, I just came home to see that somehow Bing search had installed itself over DuckDuckGo (my mom loved the search engine for the name alone), and that her Chrome browser had turned into a touch edition which hides the mouse cursor. And that was the least offensive change.
Trying to fix her Lenovo Yoga I had to navigate a forest of dark UI patterns, with pre-installed apps trying to trick me into sharing private data every step of the way.
I really fucking hate this user-hostile attitude that can only be explained by greed. I've "fixed" her computer one more time, but I think the next time I'll let her try a bootable Linux distro, and see if she likes it enough to be willing to give it a try.
"Fine"... that seems to have little to spare already.
I switched from Windows to Ubuntu last December, and it has given me a whole new appreciation for Windows. The polish (things just working well) of recent Windows versions is just amazing, in comparison to Gnome/Ubuntu.
PS. Will stick with Ubuntu though.
PPS. Gitlab is an awesome product and company.
>GitLab is a lot like Firefox, or "Linux on the desktop" in that way. It's what a lot of us want to use, but the less-open but more-polished option has always seemed the more pragmatic choice. But that can change.
I wonder if this move by GitHub is motivated by them seeing this writing on the wall.
a16z, Sequoia and friends got their liquidity event from $350M+ invested. Looks like a (probably $3.5B+) 10x or better ROI, which should help the VCs’ IRR numbers.
Your numbers are a bit off. The fundings were done at roughly $750M and $2B valuations. Still going to be great for investors, but most likely not 10x or better.
Same here. The CI is easy to use and makes sense, though it lacks some features - for instance being able to automatically run manual jobs as soon as their predecessors complete. Now I have to wait (!) and then click so the manual job starts...
But all in all, good product, I hope they succeed!
I think GP means that you know that for this one pipeline related to commit X you want manual step Y to proceed once previous steps are ok, and you know it right now, so instead of waiting for previous steps to complete you want to trigger the step in a delayed fashion. Kind of like "merge this as soon as tests pass" on MRs.
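In the meantime you can script that yourself against the API. Here's a rough sketch (Python; the host, project/pipeline IDs, job name and token are placeholders, and the endpoints are the GitLab v4 "list pipeline jobs" and "play job" ones as I remember them, so check the docs before relying on it):

    import time
    import requests

    GITLAB = "https://gitlab.example.com/api/v4"   # placeholder host
    PROJECT_ID = 123                               # placeholder project
    PIPELINE_ID = 456                              # the pipeline for commit X
    JOB_NAME = "deploy-to-staging"                 # manual step Y
    HEADERS = {"PRIVATE-TOKEN": "your-api-token"}  # placeholder token

    def pipeline_jobs():
        url = f"{GITLAB}/projects/{PROJECT_ID}/pipelines/{PIPELINE_ID}/jobs"
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        return resp.json()

    while True:
        jobs = pipeline_jobs()
        target = next((j for j in jobs if j["name"] == JOB_NAME), None)
        if target is None:
            print("manual job not found in this pipeline")
            break
        others = [j for j in jobs if j["name"] != JOB_NAME]

        if any(j["status"] == "failed" for j in others):
            print("a predecessor failed; not triggering the manual job")
            break

        if target["status"] == "manual" and all(j["status"] == "success" for j in others):
            # Everything before the manual job is green: "play" it now.
            play_url = f"{GITLAB}/projects/{PROJECT_ID}/jobs/{target['id']}/play"
            requests.post(play_url, headers=HEADERS).raise_for_status()
            print("manual job triggered")
            break

        time.sleep(30)  # poll every 30 seconds

Crude, but it gets you "run step Y as soon as everything before it passes" without sitting there waiting to click the button.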
> Linux is pretty much fine for the traditional desktop computing
Yeah, on the desktop things are getting better. But ... everybody is moving to the smartphone. And there things are getting worse. For example, my Banking app works only on 2 platforms, which are not open.
> Recently (just as most of the world has apparently moved on from desktop computing, haha) Linux is pretty much fine for the traditional desktop computing.
It's exactly because the world has moved away from desktop computing that Linux on the desktop has become viable: collaboration tools are increasingly web-based (or at least web-enabled for the 80% use case), and those tools are exactly what tends to anchor an organisation to a single platform. These days, even Outlook has a perfectly usable web interface that works fine in Chrome and Firefox.
Off topic but I couldn't get hibernate to work on Ubuntu Linux. I purchased a laptop with Linux pre-installed; but apparently, if a manufacturer claims to support Linux on a laptop, that does not include hibernate support. Now that we've reached the milestone of Linux on a desktop, I'm looking forward to the Year Of Linux On The Laptop.
People here are deluding themselves. Gitlab, without gobs of VC money and the promise of a big IPO payday, is an abandoned open-source project with a tiny fraction of the team that built the code. So yes, you can fork the code, but without the money and resources of the parent company, good luck keeping it up to date! Worst case, you get another mysql: neglected by an acquiring company, with lots of bureaucracy and infighting and IP tangles to slow things down.
Gitlab is essentially salting the earth for dev tool startups. I had my issues with Github, but at least they had built a business around a dev tool, behaved ethically and gave back generously, and so I wished them well. To see so many people dropping them for a fauxpen-source competitor whose primary selling point is “it’s free!” just makes me sad.
If you want nice things, you have to pay for them. If you aren’t, I guarantee you that someone else is, and they’re the ones with control.
> I had my issues with Github, but at least they had built a business around a dev tool ...
That's a strange claim given that in the current top story - of Microsoft buying Github - the following claim is made:
"The acquisition provides a way forward for San Francisco-based GitHub, which has been trying for nine months to find a new CEO and has yet to make a profit from its popular service ..."
Perhaps it comes down to your definition -- can something be non-profitable for a decade and still be called a business?
The only “axe” I have to grind is the one I clearly stated: Gitlab is salting the earth for dev tools. You can’t build a business competing with someone who is using VC money to give away their product. This is all going to end badly when the music stops.
”can something be non-profitable for a decade and still be called a business?”
This is such a bizarre talking point...do you honestly believe that Gitlab is a better business? Their model is “just like Github, but with even more stuff given away for free!” And let’s not forget that Github has to compete with Gitlab cannibalizing the low end of the market. I’m sure that hurts margins.
Someone has to pay those Gitlab engineers who are writing the bulk of the code. At the very least, as soon as the dumb money dries up, the velocity of development on Gitlab will drop like a rock. In the worst case, you’ll get a conflicted corporate hydra, like MySQL.
I understand that you're claiming gitlab is salting the earth, but still don't understand why / how.
You write:
> You can’t build a business competing with someone who is using VC money to give away their product.
This is delightfully worded, given it could apply to both github and gitlab.
Remembering that github started in 2008, while gitlab.com started four years later (first commit to their codebase was 2011).
Github is running on $350m of VC funding.
In response to my question 'can you call a 10yo company that still isn't profitable, a "business"?', you avoided the question, called the matter bizarre, and tried to distract from the question by claiming github is a _better_ business.
Your claim that github has 'built a business and .. gave back generously' is also weird in that gitlab has released the source to their core product, but github hasn't. This also speaks against your claim that you're more likely to be abandoned if you commit to gitlab than github.
Finally, the idea that the 'low end of the market' is where all the money is does not match any other tech startup's experience, is belied by the pricing structure of both companies, and further invalidated by the fact that gitlab is not swimming in cash from their cornering of the frugal user segment.
And what that means is: yeah, either they keep burning $$$ every month and selling more control to VCs to feed the war chest until they maybe buy 2nd place, or they find an acquirer (and with that much ever-increasing VC control, a likely push), or, yeah, layoffs will happen. Gitlab is extra interesting because their definition of innovation is biting off even more surface area (e.g., CI), and therefore even more burn.
Keep in mind.. all this says zero about how nice the product quality is or how friendly the people are. But just in the same way you don't get mad at what happens if you stick your hand in a lawn mower (https://www.youtube.com/watch?v=-zRN7XLCRhc&t=34m7s) ... there are financial forces at play from being a high-spending bottom feeder that are hard to escape. Possible, and I wish them luck, but that's a real bet.
AFAIK, Github went for growth. Gitlab went for cash flow. Gitlab is profitable and, imo, their product is comprehensively superior to Github.
>Keep in mind.. all this says zero about how nice the product quality is or how friendly the people are.
Then don't use the term bottom feeder, since that implies the people are making a shitty product, with no ethics and no real drive to innovate. It says the people are shameful hacks and the quality of the product is bad.
In reality Gitlab is a better product and the people involved should be proud of their work.
I don't think their official statements match that? They say their fundraising approach is 2yr runway, which is only 6mo longer than the advice for a regular VC-backed startup, and they've been raising increasing amounts ~annually.
Based on that, having 275+ employees, and their stated IPO targets, I ran the numbers recently. My guess was their costs are ~$40M/year (admirable: I expected way higher, but they focus on non-US hires and pay only 50th percentile in _local_ markets: super low!). Likewise, their stated IPO and growth targets make me guess they make ~$20M/yr. So two different reasons to believe they're burning... ~$20M/yr. The positive thing for them, which they're not public about but I'd guess, is that while they're probably growing OK in regular accounts (hard competition vs Bitbucket, GitHub, etc.), they're probably Super Great on retention + internal expansion, so net negative churn, compounding factors, etc. I think they _can_ stop hiring and let revenue catch up, though other forces take hold then: so it does look like they're on the classic growth-over-control VC treadmill (despite saying they're not), and will keep ceding control to VCs.
I think you may be correct and my information was out of date. According to the strategy documents that Gitlab publishes they seem to have changed direction towards growth via SaaS:
"""
During phase 2 there is a natural inclination to focus only on on-premises since we make all our money there. Having GitHub focus on SaaS instead of on-premises gave us a great opportunity to achieve phase 1. But GitHub was not wrong, they were early. When everyone was focused on video on demand Netflix focused on shipping DVD's by mail. Not because it was the future but because it was the biggest market. The biggest mistake they could have made was to stick with DVDs. Instead they leveraged the revenue generated with the DVDs to build the best video on demand service.
"""
The term bottom feeder refers to going after the "leftovers" that premium market leaders leave on the table: lower-paying, more demanding (e.g., requires open source), higher acquisition cost (closeted international markets), etc. Good B2B companies often raise prices as they deliver more value and build brand trust, and as they establish the market, bottom feeders will pop up and spot the missing chunks. However, they are forced to play catchup in terms of features and with less $ (or a LOT of VC $). Says nothing about being nice, smart, and high quality, just the market & financial pressures.
No label is ever 100% accurate, but a lot of that dynamic has played out here pretty clearly..
> In response to my question 'can you call a 10yo company that still isn't profitable, a "business"?'
Gitlab also likely runs at a loss. Gitlab has certainly never claimed to be profitable, and some estimates are that as few as 0.1% of their users pay for Gitlab.
> I understand that you're claiming gitlab is salting the earth, but still don't understand why / how.
It's pretty clear to me at least that neither Github nor Gitlab have sustainable business models. The OSS community is crazy to think that either business will continue to subsidize OSS development while losing millions of dollars a year. All of the anger against Github and the new "faith" in Gitlab is pure delusion. Both these companies subsidize OSS development while losing millions of dollars. This will go on until it stops. It certainly can't go on forever.
Personally I suspect the absolutely best thing to happen to both Github and Gitlab would be being bought out by real companies that heavily depend upon OSS and, you know, actually make money.
It came up before and now the chatter has started up again around Gitlab. I think it still makes a lot of sense for AWS to purchase Gitlab. There's a fundamental strategy alignment there (both Gitlab and Amazon aim to be a "one stop shop"), Gitlab offers the potential to lure a bunch of developers into the AWS platform with a free offering and, ultimately, Gitlab offers the same computational economics as other Amazon products because it is just another hosted product that requires a database. Wouldn't be surprised at all to see such a transaction in as little as 2-3 years.
Wouldn't a company like Gitlab be able to sustain a decent engineering team by just selling a few dozen top-tier subscriptions for their on-premises offering to top Fortune customers who are often still too afraid to have their crown jewels hosted in the cloud?
I would say gitlab is more closely aligned with Google, at least technically, with their auto DevOps targeting kubernetes, and Google cloud having the most 'turnkey' k8s offering.
”Your claim that github has 'built a business and .. gave back generously' is also weird in that gitlab has released the source to their core product, but github hasn't. This also speaks against your claim that you're more likely to be abandoned if you commit to gitlab than github.”
Uh... Gitlab is built upon libgit2, rugged and github-linguist. In other words, the core parts of Gitlab -- the ones that interact with git -- are built, maintained and open-sourced by GitHub. And these are just the obvious dependencies. Github people contribute heavily to open-source projects that most Ruby websites use.
If you’re going to fanboy all over the place, fine, but at least know what you’re talking about when you do it. And don’t try to weasel out of it by talking about “core products” -- without GitHub’s substantial technical contributions to the infrastructural code that interacts with git, it’s a safe bet that Gitlab wouldn’t exist. That’s core.
> If you want nice things, you have to pay for them.
And I don't know how that fits in with people releasing / maintaining free software.
I responded to your first rant because you appeared to be 'going all fanboy' over github, declaring them a successful, superior business. I asked you if a company that hadn't turned a profit despite first mover advantage and a decade of trying could be termed a business ... and you weaselled out of that question.
> If you believe Github isn’t a business, then you’re going to be sorely disappointed by Gitlab, whose business model is worse.
The challenge discussing this with you is all your comments about Github are based around comparing them (favourably) to Gitlab.
> I'm done talking to you now.
This is a shame, as I'm consumed with curiosity on your take of today's news that Microsoft spent US$7b buying github.
From what you've described it sounds like they should have just cloned libgit2, rugged, and github-linguist, and rattled up their own gitlab over the weekend.
MySQL is a great example. Bought by Oracle, still a good product, but also forked by some big players as well as some open source groups. I'm sure it is still the most commonly used database on the web today, and MariaDB and Percona both maintain great MySQL forks as well.
The MariaDB story is a bit of a fuck you though - cries about how oracle will close mysql (which hasn’t happened) but then adopts a bullshit license for its own software.
But the damage is already done - people think MariaDB is some bastion of good intentions and open source software now, because they very rarely look deeper.
> The MariaDB story is a bit of a fuck you though - cries about how oracle will close mysql (which hasn’t happened) but then adopts a bullshit license for its own software.
What?
There was strong precedent for fearing what may happen with MySQL. Knowledge of what happened to Hudson, OES, OpenOffice, Solaris ... this would concern the stewards of any bit of software that got swallowed up by Oracle.
(Edit: Also I recall some worrying stories coming out from Monty and other key developers.)
What's this 'bullshit licence' that MariaDB has? I thought the source was (L)GPL all the way down?
I've looked up MariaDB MaxScale ... and found an optional / add-on product that is aimed at Enterprises, seems to require an Enterprise support licence for the Enterprise edition of MariaDB ... and I completely fail to see how any of this demonstrates that the 'MariaDB story is a bit of a fuck you'.
Basically - their formerly GPL proxy for doing HA deployments is suddenly not open source.
They can of course make this decision - it's their code to do with as they wish. But it's quite fucking rich for Monty to claim Oracle will close source MySQL, create a fork and company which then uses that fear to grow in popularity, only to do the very thing he accused oracle of doing: closing an open source product.
Edit:
Also, if you think only "enterprise" customers need database clusters that survive individual nodes going offline, you're in for a big shock.
I personally have not and will never use MySQL again because Oracle owns them. That is a company where software goes to die. Plus their atrocious security record.
Indeed the most extraordinary story of the last ten years is how Google, Oracle, Redhat, Microsoft and Facebook have funded open-source software to the tune of billions. This is likely the greatest act of charity the planet has ever known. And while a lot of holier-than-thou types (particularly here on HN) imagine these tech giants as not to be trusted, or even the enemies of OSS, the numbers don't lie. Look closely at who actually funds and writes the vast majority of OSS and the same five companies pop up over and over and over...
> Indeed the most extraordinary story of the last ten years is how Google, Oracle, Redhat, Microsoft and Facebook have funded open-source software to the tune of billions.
Definitely not the most extraordinary story over the last decade.
And trumped by IBM's famous first $1b spend on 'Linux' just shy of twenty years ago, and their subsequent announcement that they'd recouped that money within a year.
Coincidentally this speaks to your claim:
> This is likely the greatest act of charity the planet has ever known.
These guys aren't in it for the charity. There's doubtless plenty of positive PR spin from contributing to free software -- but don't mistake pragmatism or happenstance for altruism.
You're confused if you think that just because one benefits from charitable actions, that somehow invalidates them.
And IBM's contribution was, frankly, marketing. It does not compare to the volumes of high quality technology that the companies I mentioned have simply given away for free.
Many on HN and others are perhaps too close to it but I think people will look back upon this extraordinary corporate charity as a decisive event of the century.
> You're confused if you think that just because one benefits from charitable actions, that somehow invalidates them.
I think you're being overly charitable to think these tech corporations had charitable intentions when they contributed resources to tech projects that happened to improve their tech business prospects.
> And IBM's contribution was, frankly, marketing. It does not compare to the volumes of high quality technology that the companies I mentioned have simply given away for free.
Bizarre you didn't mention that up front when you named 'the big five contributors'.
On what do you base your bold claim that IBM's contribution was marketing, and the other corporations weren't?
> Many on HN and others are perhaps too close to it but I think people will look back upon this extraordinary corporate charity as a decisive event of the century.
IBM announced their first billion spend last century.
Maybe we should first ask ourselves if it's a fair comparison. Amazon kept reinvesting its profits into itself. I don't think that is the case with GitHub, though.