They're made in Stable Diffusion (given the 1024×1024 resolution, I'd assume an SDXL-based model).
The reason they look good compared to a lot of AI gens is that someone has clearly taken the time to iterate and refine them. It might help to explain a little about how AI image generation works. With a hosted service like Midjourney, you aren't able to provide prompts to the model directly: your input is first passed through their API and rewritten, sometimes quite radically (compare the actual prompt recorded in DALL-E's image metadata with what you typed). This makes it easy to create reliable, consistent images, but it also tends to reduce flexibility.
With Stable Diffusion you prompt the model directly, and there's a wide ecosystem of tools for steering generation, refining images after the fact, and adding post-processing on top. Getting really good results with SD is an iterative process. They didn't just type something random into a text prompt and get these; the workflow probably looked something like this (rough sketches of each step below):

- Generating hundreds, if not thousands, of images from direct prompting and keeping only the best.
- Custom training the model, or at the very least using custom-trained embeddings/LoRAs, to improve the results.
- Using image-to-image and/or ControlNet to balance how the image appears and gain finer control.
- Finally, some edits outside of AI in a traditional app like Photoshop.
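For the direct-prompting stage, here's a minimal sketch using the Hugging Face diffusers library with the public SDXL base model. The prompts and the seed sweep are placeholders of mine, not anything recovered from these images:

```python
# Direct prompting of an SDXL model with diffusers. You control the exact
# prompt text; nothing rewrites it in transit the way hosted services do.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a moody forest at dawn, volumetric light, 35mm film photo"  # placeholder
negative = "blurry, low quality, watermark"                            # placeholder

# Iteration in practice: sweep seeds, then keep the handful worth refining.
for seed in range(8):
    image = pipe(
        prompt=prompt,
        negative_prompt=negative,
        width=1024, height=1024,   # SDXL's native resolution
        num_inference_steps=30,
        guidance_scale=7.0,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    image.save(f"candidate_{seed:03d}.png")
```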
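Layering a custom-trained LoRA onto that same pipeline is typically a single call. The filename below is hypothetical, and the exact scale-setting API has shifted a bit between diffusers versions:

```python
# Assumes the `pipe` object from the snippet above; the LoRA file is made up.
pipe.load_lora_weights("./my_custom_style.safetensors")

image = pipe(
    prompt="a moody forest at dawn, volumetric light, 35mm film photo",
    cross_attention_kwargs={"scale": 0.8},  # how strongly the LoRA is applied
).images[0]
image.save("lora_candidate.png")
```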
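An image-to-image pass then lets you feed a promising render back in at low strength, keeping the composition while reworking the details. Again a sketch, with assumed filenames:

```python
# Image-to-image refinement: start from an earlier render instead of noise.
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

img2img = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("candidate_003.png").convert("RGB")  # a pick from the seed sweep
refined = img2img(
    prompt="a moody forest at dawn, volumetric light, 35mm film photo",
    image=init,
    strength=0.35,       # low strength: keep the composition, rework the details
    guidance_scale=7.0,
).images[0]
refined.save("refined.png")
```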
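And for tighter structural control, a ControlNet (here the publicly available Canny one for SDXL) conditions generation on an edge map, so composition is pinned down while the prompt drives style. The OpenCV preprocessing is my assumption of one common approach:

```python
# ControlNet guidance: an edge map constrains layout, the prompt sets style.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Build the conditioning image from an earlier render (or a rough sketch).
edges = cv2.Canny(np.array(Image.open("refined.png").convert("RGB")), 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe(
    prompt="a moody forest at dawn, volumetric light, 35mm film photo",
    image=control,
    controlnet_conditioning_scale=0.6,  # how strictly the edges are followed
).images[0]
image.save("controlled.png")
```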