What is your use case? If you're deleting rows, that already suggests it may not be the intended use case. I think of ClickHouse as taking in a firehose of immutable data that you want to aggregate/analyze/report on. Let's say a million records per second. To make up an example: the orientation, speed, and acceleration of every Tesla vehicle in the world, in real time, every second.
It's to power all our analytics. We ETL data into it; some of it is write-once, so no updates/deletes there, but a number of our tables have summary data ETL'd into them, which means cleaning up the old rows.
I'm sure CH shines for insert-only workloads, but that doesn't cover all our needs.
You have already gotten excellent suggestions in the other comments, but here's another one that hasn't been mentioned yet.
You may want to consider making your partition key a function of the datetime (if feasible), so you can just drop a complete partition when required rather than running separate delete queries.
In my experience, it has proven to be a very quick and clean way to clear out older data.
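To make that concrete, here's a minimal sketch; the table name, columns, and the monthly granularity are all made up for illustration:

    -- Hypothetical summary table, partitioned by month so old data
    -- can be dropped wholesale instead of row by row.
    CREATE TABLE daily_summary
    (
        event_date Date,
        metric     String,
        value      Float64
    )
    ENGINE = MergeTree
    PARTITION BY toYYYYMM(event_date)
    ORDER BY (metric, event_date);

    -- Dropping a whole month is a cheap metadata operation;
    -- no mutation has to rewrite any data parts.
    ALTER TABLE daily_summary DROP PARTITION 202401;

Note that the granularity of the partition key sets the granularity of your cleanup, so pick it to match how you expire data.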
You can always use different databases for different use cases.
There are many applications that require extremely high insertion rates (millions of records per second), a very large total number of rows (billions, trillions), and flexible, fast querying/aggregation at high read rates (hundreds of millions of rows/s or more). That's the sweet spot for ClickHouse, IMO, and where you'll be hard-pressed to find alternatives. I'm sure it can be used in other situations, but there may be more choices if you're in those.
>You can always use different databases for different use cases.
Unfortunately this is not always realistic, especially in large organizations. Where I am, there's a big push from the top (i.e., the people who hold the IT budget) to standardize everything; they want to simplify licenses, support contracts, etc.
I may not be doing cutting-edge stuff (I work at an industrial plant), but we do have mixed data use cases where it could be beneficial to use different DBs. Realistically, though, I don't see it happening.
CH works just fine for cleaning up rows: delete with a mutation (mutations_sync = 1), use OPTIMIZE ... DEDUPLICATE BY, use an aggregating MergeTree engine and OPTIMIZE ... FINAL, or query those tables with FINAL (the final = 1 setting).
There are numerous ways to remove old/stale rows.
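Roughly what those look like, against a hypothetical `summary` table (names are illustrative, and the standalone `final` query setting only exists in newer CH versions):

    -- 1. Delete via a mutation and wait for it to finish.
    ALTER TABLE summary DELETE WHERE event_date < '2024-01-01'
    SETTINGS mutations_sync = 1;

    -- 2. Collapse duplicate rows in place, keeping one row per id.
    OPTIMIZE TABLE summary FINAL DEDUPLICATE BY id;

    -- 3. With a Replacing/AggregatingMergeTree engine, force a merge
    --    so superseded rows are dropped now rather than eventually.
    OPTIMIZE TABLE summary FINAL;

    -- 4. Or collapse at query time without waiting for merges.
    SELECT * FROM summary FINAL;
    SELECT * FROM summary SETTINGS final = 1;  -- per-query setting, same effect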