why are they not durable?
why shouldn't you use them for big tables? For example, for PRIMARY KEY lookups I never ask for a range of PRIMARY KEYs, so why not use a hash?
Durability (the D in ACID) means that if the power goes out or somebody kill -9s the server, the database will come back in a consistent state and anything that was committed was in fact committed. In current and previous versions of Postgres, pulling the plug at the wrong time could leave hash indexes in an inconsistent state. That's fixable by deleting and recreating the index, but the time required to do so is proportional to the table size. In the next version of Postgres it won't be a problem, and hash indexes can be used.
In the case of a primary key, say a row is inserted or deleted but the power goes out before the hash index is updated. When the power comes back, the hash index still points to rows that were deleted and is missing rows that were inserted.
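If you do hit that on a pre-10 version, the fix is to rebuild the index. A minimal sketch, with hypothetical table and index names:

    -- Rebuild a hash index that may be inconsistent after a crash
    -- (the index name users_id_hash_idx is hypothetical).
    REINDEX INDEX users_id_hash_idx;

    -- Or drop and recreate it by hand:
    DROP INDEX users_id_hash_idx;
    CREATE INDEX users_id_hash_idx ON users USING hash (id);

Either way the rebuild has to scan the whole table, which is why the time is proportional to the table size.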
They are not durable because hash indexes were never write-ahead logged: when crash recovery was added to PostgreSQL a long time ago, nobody was sure how to add WAL logging to hash indexes, so that task was left for later. Later did not happen until very recently, when some people worked hard on adding it, and it required quite major changes.
PostgreSQL 10, which will be released this autumn, will include durable (WAL-logged) hash indexes. Do not use hash indexes until then unless you really know what you are doing.
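For anyone on 10 or later, here is a minimal sketch of what that looks like (table and column names are hypothetical). Note that PRIMARY KEY constraints themselves are still backed by unique btree indexes, since hash indexes don't support uniqueness, so the hash index goes on an ordinary equality-lookup column:

    -- PostgreSQL 10+: hash indexes are WAL-logged (crash-safe and replicated).
    CREATE TABLE sessions (
        token   text PRIMARY KEY,   -- PRIMARY KEY still creates a unique btree
        user_id integer NOT NULL
    );

    -- Hash index for equality-only lookups on user_id (hypothetical example).
    CREATE INDEX sessions_user_id_hash_idx ON sessions USING hash (user_id);

    -- A query like this can use the hash index:
    SELECT token FROM sessions WHERE user_id = 42;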
Collision probabilities are much higher than people intuitively believe. The pigeonhole principle and the birthday paradox illustrate this quite well.
Hash tables are good for small (relative to the available space) static sets that don't change. Dynamic production environments introduce an escalating probability of performance degradation, and this can be detrimental for critical systems.
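To put a rough number on that intuition (this is just the standard birthday-bound approximation, not anything Postgres-specific): if n keys are hashed uniformly into m possible values, then

    P(at least one collision) ≈ 1 - e^(-n(n-1) / (2m))

    For a 32-bit hash (m = 2^32, about 4.3 billion values), that probability
    already crosses 50% at roughly n ≈ 77,000 keys.

So collisions show up long before the hash space looks anywhere near full.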
I think I'm missing the important point.