It's annoying if you want to run your tests inside a container for CI: now you're running a container in a container, with all the issues that come with it.
Why would the Postgres container need to be nested inside another container? Why not just have the CI environment run a Postgres container alongside your tests and give your tests a `POSTGRES_URL` environment variable? Or why even bother running Postgres in a container at all? Why not just run the Postgres binary on the host that's running your tests in the container?
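Roughly what that looks like from the test's side, as a minimal sketch: it assumes the `pg` client library and that the CI job (however it starts Postgres) exports `POSTGRES_URL`.

```typescript
import { Client } from "pg";

// The CI pipeline starts Postgres (as a sibling container or a host
// binary) and exposes its address via POSTGRES_URL. The test just
// reads the variable; it never knows or cares how Postgres started.
async function checkDatabase(): Promise<void> {
  const url = process.env.POSTGRES_URL;
  if (!url) throw new Error("POSTGRES_URL is not set");

  const client = new Client({ connectionString: url });
  await client.connect();
  try {
    const result = await client.query("SELECT 1 AS ok");
    console.log("connected:", result.rows[0].ok === 1);
  } finally {
    await client.end();
  }
}

checkDatabase().catch((err) => {
  console.error(err);
  process.exit(1);
});
```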
Depending on the setup, it can be a pain to get nested containers working. There is Docker-in-Docker, for example, but it typically requires running the container in privileged mode, which many CI/CD pipelines don't allow.
It's the same amount of code, and on macOS you still run a full VM (with its own network stack) to load containers, so I'm not really sure what your point is. If anything it's less code, because the notion of a container is abstracted away entirely: the whole thing is a wasm dependency that you load as a normal import.
The fact that this can run in-process is a big deal, as it means you don't have to worry about cleanup.
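The comment doesn't name the library, but a minimal sketch of the in-process approach, assuming PGlite (`@electric-sql/pglite`, a wasm build of Postgres), looks like this:

```typescript
import { PGlite } from "@electric-sql/pglite";

// Postgres compiled to wasm, running in-memory inside the test
// process itself. No daemon, no socket, no container: when the
// process exits the database is gone, so there is nothing to
// tear down or orchestrate.
async function main(): Promise<void> {
  const db = new PGlite(); // in-memory by default

  await db.exec(`
    CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT NOT NULL);
    INSERT INTO users (name) VALUES ('alice'), ('bob');
  `);

  const result = await db.query("SELECT name FROM users ORDER BY id");
  console.log(result.rows); // [{ name: 'alice' }, { name: 'bob' }]
}

main().catch(console.error);
```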
As soon as your tests depend on external processes, they need some sort of wrapper or orchestrator to set everything up before the tests start and, ideally, tear it down after.
In 90% of cases I see, that orchestration is done in an extremely non-portable way (like leveraging tools built into your CI system), which can make reproducing test failures a huge pain in the ass.
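A more portable place for that orchestration is the test runner itself rather than the CI config. A rough sketch, assuming Jest-style `globalSetup`/`globalTeardown` hooks and a local Docker daemon (the container name and port are arbitrary choices):

```typescript
import { execFileSync } from "node:child_process";

// globalSetup: start a throwaway Postgres container before any test
// runs, and hand tests its address through an environment variable.
// The same script runs identically in CI and on a laptop, so
// failures reproduce locally. A real version would also poll for
// readiness (e.g. with pg_isready) before returning.
export async function setup(): Promise<void> {
  execFileSync("docker", [
    "run", "-d", "--rm",
    "--name", "test-postgres",
    "-e", "POSTGRES_PASSWORD=test",
    "-p", "5433:5432",
    "postgres:16",
  ]);
  process.env.POSTGRES_URL =
    "postgres://postgres:test@localhost:5433/postgres";
}

// globalTeardown: --rm above makes `docker stop` also delete the
// container, so nothing is left behind after the run.
export async function teardown(): Promise<void> {
  execFileSync("docker", ["stop", "test-postgres"]);
}
```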
Why not use something like https://testcontainers.com/? Is a container engine as an external dependency that bad?
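For reference, a minimal sketch of that with the Testcontainers Node.js module (assuming `@testcontainers/postgresql` and a running Docker daemon):

```typescript
import { PostgreSqlContainer } from "@testcontainers/postgresql";
import { Client } from "pg";

// Testcontainers starts a disposable Postgres container on a free
// host port and reaps it even if the test process dies, so the
// orchestration lives inside the test code instead of the CI config.
async function main(): Promise<void> {
  const container = await new PostgreSqlContainer("postgres:16").start();

  const client = new Client({
    connectionString: container.getConnectionUri(),
  });
  await client.connect();
  const result = await client.query("SELECT version()");
  console.log(result.rows[0].version);

  await client.end();
  await container.stop();
}

main().catch(console.error);
```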