Hacker News

"Back in the day" (pre-PXE), we had rooms full of machines -- without hard drives -- that booted completely across the network and mounted all of their filesystems via NFS.

https://en.wikipedia.org/wiki/Diskless_node

Over the last few years, I've wondered why server vendors don't ship servers with some type of flash-based storage (or similar) -- perhaps 4-8 GB -- that's large enough to hold an installation of (for example) VMware ESXi (or another hypervisor) and its related configuration files, leaving any local storage exclusively for VMs. Alternatively, you could boot the hypervisor from this "onboard storage" and access all data across the network (e.g. NFS, iSCSI, SAN) and not have any HDDs whatsoever in the server.
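For anyone who never ran one of those diskless rooms, a minimal sketch of that kind of setup with modern tools might look like this -- dnsmasq acting as the DHCP/TFTP server for network boot, plus an NFS export for the root filesystem. Addresses, subnets, and paths here are made-up placeholders, not anything from a real deployment:

```
# /etc/dnsmasq.conf -- DHCP + TFTP for the netboot clients (illustrative subnet/paths)
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp

# /etc/exports -- read-only root filesystem shared by all diskless nodes
/srv/nfsroot 192.168.1.0/24(ro,no_root_squash,no_subtree_check)

# Kernel command line in the boot config, pointing each client at its NFS root:
#   append root=/dev/nfs nfsroot=192.168.1.1:/srv/nfsroot ip=dhcp
```

The pre-PXE versions did the same dance with BOOTP and vendor boot ROMs, but the moving parts (get an address, fetch a kernel, mount root over NFS) were the same.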



Server vendors have shipped that for quite a while now; Dell, for example, has offered servers with ESXi pre-installed on an SD or CF card built into the system for at least five years.


Well, there's the solution of putting a USB flash drive inside the server's case (optionally securing it with duct tape). In fact, a USB flash drive is the recommended medium for booting FreeNAS.


I've seen recent servers ship with an SD or MicroSD slot on the motherboard too. Probably a bit more secure than a USB stick that can come unplugged easily.


SmartOS is often booted from a USB flash drive too.


I've been looking at an mSATA SSD on a low-profile PCIe card for our Ceph cluster, to free up the OS/journal drive slots for more 4 TB spinning disks.


If you want an SSD on PCIe, any reason why you aren't looking at SSDs built into PCIe cards? E.g. OCZ Vector or RevoDrive PCIe cards, but there are several other alternatives too.


How are you finding Ceph?


We've been using it for about 15 months now without any problems with RADOS. Last winter we had some data loss with CephFS and needed to rebuild the filesystem from backup, but CephFS is unsupported, so that was somewhat expected. I think the issues came from the Linux kernel client (circa the 3.2 kernel, IIRC), so we switched over to the FUSE client instead and have been on that since.

Good news is that performance is much, much better with 0.61 than prior releases. Both for RADOS and CephFS. We'll probably upgrade to 0.67 in the next few weeks and it's probably time to upgrade to the 3.10 kernel as well for btrfs fixes and to kick the tires on the kernel fs client again.
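For reference, the kernel-client-to-FUSE switch mentioned above looks roughly like this; the monitor address, mount point, and keyring paths are placeholders, not our actual config:

```shell
# Kernel client (the one that gave us trouble on ~3.2 kernels):
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# FUSE client instead -- runs in userspace, so it picks up bug fixes with the
# Ceph packages rather than the kernel; reads monitors/keys from /etc/ceph/ceph.conf
ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs
```

The tradeoff is some throughput (FUSE adds a userspace round trip), but not being tied to the kernel release cycle for client fixes was worth it for us.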


VMware has a product to do exactly this (Auto Deploy). It PXE boots ESXi and can configure it automatically.


Many of them do. Many newer HP blades, for example, come with an SD card inside to do exactly that with either local or network storage.



