Servers of Hacker News (pwd.io)
95 points by euphemize on July 31, 2013 | hide | past | favorite | 61 comments



Having issues with that one due to the image... the text-only one works: http://webcache.googleusercontent.com/search?q=cache:http://...


Which seems a little bit ironic, if you ask me. Anyone know what server software the OP is running?


WordPress:

    http://ls.pwd.io/wp-content/...
I still don't understand why it doesn't have a caching system by default.


WordPress is a CMS.


What exactly is ironic in this instance? It's not like the post was making a server recommendation.


Standard LAMP/WordPress install. I turned off the cache a few days ago to check something out and forgot to turn it back on afterwards :( Fixed now.


10 websites are actually running WEBrick in production. Nice.


Only 13 people using Lighttpd. I remember the week and a half when that was the preferred deployment platform for Rails. That was a while ago.


I run nginx on my Pi for a personal site for friends, and Lighttpd on a VPS for a very small site with a few gifs and low traffic. The Lighttpd instance regularly goes down to something like 5-10 MB of memory usage, which is really interesting, whereas nginx seems to cache more in RAM. Either way, both seem to work extremely well with low RAM (and the slow CPU on the Pi).


I suspect that might be underrepresented by people using Lighttpd for serving static content, with something else like Apache serving the main page HTML. (I've got a bunch of clients in that configuration, with WordPress running on Apache and all the images/CSS/JS/etc. coming out of Lighttpd.)


That should also be true of nginx, but it's pretty well represented now.


nginx is not only good at serving static files but is also a common (one of the most common?) reverse proxy.


I think Heroku's default Rails app server is WEBrick: https://devcenter.heroku.com/articles/ruby#webserver


I'm pretty sure it's Thin.


It will run Thin if the gem is installed; otherwise it falls back to WEBrick. Rack itself favours Thin, Puma, and WEBrick (in that order). https://github.com/rack/rack/blob/6829a8a0f416ea49a18f1e3e53...


I'll go out on a limb and guess that the majority of those were "Show HN: my weekend project" submissions, not really production systems.


That's still more instances of IIS than I was expecting. Who uses it? Is it still as awful as it was the last time I fought with it 5 years ago?


IIS 7.5 and 8 are actually really good, and among the best to interact with programmatically. You can write the equivalent of nginx modules in C#, and even implement custom framing and protocols.


Hmm. As someone who's had to run a massive IIS cluster over the last 3-4 years, I disagree. It's obtuse, unreliable, incredibly overcomplicated and regularly just fucks you because it can.

Some things that are utterly broken, that really shouldn't be, and that have cost me literally DAYS:

1. Using IIS ARR to proxy subversion = hell. The moment someone requests a web.config file from the back end SVN server via the IIS front end, it shits a brick. You have to piss around in the applicationHost.config to fix this.

2. It knackers XML encoding transparently inside ARR somehow resulting in clients losing requests as they come down as application/octet-stream rather than what you sent them as. This breaks lots of clients randomly.

3. Wildcard SSL domains, i.e. star.whatever.com. If you delete one site instance, it kills all the others completely dead and you can't re-add the certificate to the others: you have to create new site instances from scratch. Workaround: stop a site instance and leave it there to rot forever. That's fine, but we can have up to 100 instances on a cluster.

4. Thread reuse. Each module (in integrated mode) can run on a different thread, so it knackers thread-local variables like the current culture and the identity system in .NET. This is an absolute fucker to debug as it only happens under heavy load and is barely documented.

5. Sometimes servers in our cluster randomly just stop for a few seconds for no apparent reason (7.5 on 2008 R2). Our phones start ringing then. We've had Microsoft working on a fix for over a year and even they can't work out why it's happening. They admit it is IIS that is doing it.

6. Deployment hell. You're supposed to be able to just "xcopy" deploy everything because it uses some mish-mash of shadow copies, but occasionally this screws up and leaves an older DLL in the ASP.NET Temporary Files folder. Then your site suddenly has broken code contracts everywhere and goes down big time with a YSOD (as even the error pages fall over), until someone fires up the magic "fix it quick" PowerShell script which cleans out all the crud, redeploys and restarts the cluster. That's fine, but we have 16 front-end IIS servers and 80 million requests a day, so this means trouble for us.

7. IIS Express and Visual Studio integration is just shit. It doesn't work with source control software at all, leading to constant problems when people do updates. The only solution is to shut everything down, delete the IISExpress directory in your Documents folder and try again.

8. The whole dynamic vs. static thing falls over when you have a module which rewrites static URLs. Everything has to run through the dynamic pipeline then. The internal coupling for features like this is terrible.

It's stuff like that which means it's still a broken pile of crap that I hate more and more every day.

For reference, we moved ALL of our development stuff over to an Apache mod_proxy setup with LDAP / AD integration and it just works. We'd love to do it with the front end but we can't for obvious reasons...


As a web host supporting both Apache and IIS, I used to long for the configuration flexibility of Apache in IIS <= 6. IIS 7+ was rewritten from the ground up and is a serious game changer for MS; it's a bloody nice web server.


If you're doing .NET then there is no substitute. Literally. But seriously it has gotten MUCH better in recent years.


Well you could use Apache + Mono, but I don't know about that in production.


I would actually like to know if someone has experience with that setup in production.


All those StackOverflow posts.


SO doesn't return a server header, so I imagine the IIS count would go up if it did.


I have a company running on .NET/Azure/IIS and it's great!


When I read the title, I thought this article was going to attempt to glean the webserver types/versions of all the servers in the Hacker News web cluster (if there is one). That would be kind of interesting, actually, if only to see whether they keep them all at the same version/OS/patch level, etc. You could do the same thing on any big site over time and the results could be interesting.
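
For what it's worth, here's a minimal sketch of how you might collect and tally those Server headers yourself. The URL list is just a placeholder, not anything from the article, which presumably crawled every front-page submission over the month:

    # Minimal sketch (Python 3): tally the Server header for a list of URLs.
    # The URL list below is hypothetical; swap in whatever sites you care about.
    from collections import Counter
    from urllib.request import Request, urlopen

    urls = ["https://news.ycombinator.com/", "http://ls.pwd.io/"]

    counts = Counter()
    for url in urls:
        try:
            # HEAD keeps it cheap; some servers only answer GET, though.
            resp = urlopen(Request(url, method="HEAD"), timeout=10)
            counts[resp.headers.get("Server", "(no Server header)")] += 1
        except Exception:
            counts["(unreachable)"] += 1

    for server, n in counts.most_common():
        print(f"{n:4d}  {server}")

Pointing it at the same site every week would also show you how often they bump versions or change OS.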


PG has stated in the past that HN runs on a single, well-tuned server at IBM/Softlayer/ThePlanet. Citation: https://news.ycombinator.com/item?id=5229364


Although most of the tuning appears to be to prevent people from using it

"we've limited requests to this page"

For example...


Judging from the title of the article, I was hoping to get an overview of the HN server infrastructure and software.


Me too. You can find some stacks here (http://leanstack.io/cloud-stacks/) but HN isn't on that list.


Wow, there wasn't a single reddit link all month?


I had to go look at what Server header reddit returns. For the lazy:

Server:'; DROP TABLE servertypes; --


I'm not logged in. I see:

Server: AkamaiGHost


Makes sense; reddit is just another aggregation site. It's not too often it generates original HN-worthy material.


HN is not an aggregation site?


I love it when I see a blog post about a HN discussion about a blog post about a HN discussion about a blog post.


It's far too early for me to read that sentence again.


Not a single Cherokee server :(: http://cherokee-project.org


I think the stats speak for themselves. Very few people like Cherokee. Advertising it isn't going to help. The devs need to understand why people tend not to like it and change it so that people do like it. (They've already been told the way they handle configuration is not very good, and they reject that... good for them, but as a result, and perhaps for a few other reasons too, Cherokee won't be doing well on any web server survey for the foreseeable future.)

I would argue that the GUI model of configuring complex server-type services is generally wrong. At most, GUIs are good for tweaking configurations, not for setting up a baseline config to begin with.


After trying to help a friend make Cherokee, Rack and Shotgun (yes, he insisted on reloading using Shotgun) work together, I abandoned the server as "ungoogleable".


Yep, it's just too hard to get help. Perhaps because it's easier to just post a config snippet for Nginx or Apache than having to take a bunch of screenshots.


Did you mean http://cherokee-project.com/ ? Your link goes to a nonexistent GitHub Page.


We tested it for a project and actually liked it. However, the configuration interface is a bit tricky to get your head around. It looks nice and works just fine for most things; we just found it too complex once we started adding multiple Django applications, domains, rewrite rules and so on.

In the end, just firing up Nginx was quicker to configure, easier to understand and, most of all, easier to debug.

Still, it's a webserver I like to pull out and play with every now and then, even if I end up just using Nginx.


I remember when I was moving off of Apache, I gave Cherokee a try. It was harder to configure than writing an Nginx conf.


Server is down for me. Not a good sign...


It's actually .com, not .org


Wow, poor reputation and a racially charged name with a historically insulting pictograph. I wasn't aware that such poor taste even existed in open source projects.


Unlike, you know, Apache's logo and name.


39% Apache! Amazing.

Also interesting is that nginx 1.5.x doesn't appear once. Upgrading bbot.org to it last month was a ridiculous ordeal, but you'd figure at least one other site on the front page of HN would be running nginx mainline.


So, who's running their site off an astromech? Bottom of the "Other" list.


Minor quibble: Apache Coyote is Tomcat and not HTTPD


"Error establishing a database connection"


New Hampshire?


cloudflare-nginx can be moved to "nginx - all versions", I think.


github.com server?


Github Pages


GitHub hosted blogs I presume.


gh-pages / github.io


I would be curious to see the percentage that doesn't allow that information to be retrieved, if that's a possibility.



