Pulumi is the only imperative one I know about, but I can't really opine on any of them.
Except...
When we find a new problem, we solve it the way we always solve things: with tons of impenetrable abstraction. We hardly care about the legibility of stack traces, let alone the ease of tracing through the code.
I'm sure 24-year-old me would disagree with this, but I think there's a qualitative difference between an arcane abstraction for getting text onto a scroll area on a screen and an arcane abstraction that lets me accidentally undeploy all of my servers (24-year-old me would be equally frustrated by both).
BDD test frameworks mostly look like imperative code, and after the initial shock of learning that some things are declared/registered immediately and executed only after everything else has gotten a chance to register, it's not that difficult to figure out how things work (until you have to step into the internals - some test frameworks are okay there, others are a disaster).
I kinda think infrastructure as code should just be code. Give me something that looks like a test framework but with better adherence to Least Surprise.
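For what it's worth, the register-then-run pattern those frameworks use is tiny at its core. A toy sketch in Python (a hypothetical micro-framework, not any real library):

    _registered = []

    def test(name):
        # Decorator: registers the test immediately, runs nothing yet.
        def register(fn):
            _registered.append((name, fn))
            return fn
        return register

    @test("server responds")
    def _():
        assert 1 + 1 == 2  # body runs only in the execution phase

    @test("config parses")
    def _():
        assert "a=1".split("=") == ["a", "1"]

    # Only after everything has had a chance to register do we execute.
    for name, fn in _registered:
        try:
            fn()
            print("ok  ", name)
        except AssertionError:
            print("FAIL", name)

Once you see the two-phase split, most of the surprise goes away.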
I think a good analogy may be tests vs type systems, especially if you think about new vs old type systems.
Traditional deployments are just like traditional unit tests. All imperative, easy enough to work with, all useful, but if you don't have good discipline it's still really easy to sneak past them (i.e., writing a test that doesn't really test anything).
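To make the unit-test half of the analogy concrete, here's the kind of test that slips past without discipline (Python, contrived on purpose):

    def test_deploy_succeeded():
        result = {"status": "ok"}   # never actually calls the deploy
        assert result is not None   # vacuously true; this cannot fail

The deployment-script equivalent is a step that reports success without ever verifying the resulting state.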
Some traditional DSLs, like Puppet, are like the type system in C. It wants to help you declare invariants and validate them statically, but it kind of just sucks at it, and you still need a lot of discipline. And now you also need to be aware of your type system's pitfalls. (Still better than no type system, though.)
I think what people want out of DSLs (and what others have mentioned in this thread) is something like a high-end modern type system like Idris: the ability to put useful, sensible invariants into your scripts and have them usefully validated. Now, like Idris, we probably won’t reach Nirvana on that for a while. (Stateful databases can throw a wrench in things, for example.) But for a lot of tasks, I think we are starting to know enough in general programming language theory as well as systems design that, as a culture, we can solve 80% of these problems with a really high level of reliability soon.
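You can already fake a little of this in everyday languages. A hedged Python sketch of pushing one deployment invariant into the type checker (all names here are made up):

    from typing import Literal, NewType

    # A plain str can't be deployed; only a value that went through validate().
    ValidatedImage = NewType("ValidatedImage", str)

    def validate(image: str) -> ValidatedImage:
        if ":" not in image or image.endswith(":latest"):
            raise ValueError("pin an explicit image tag")
        return ValidatedImage(image)

    def deploy(image: ValidatedImage, env: Literal["staging", "prod"]) -> None:
        print(f"deploying {image} to {env}")

    deploy(validate("web:1.4.2"), "prod")    # fine
    # deploy("web:latest", "prod")           # mypy rejects: str is not ValidatedImage

An Idris-class system could express far richer invariants than this, but even the cheap version catches a whole category of mistakes before anything runs.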
The AWS CLI is a very useful tool because it can do simple things with AWS. To do more complex things, you usually have to "script" multiple commands together or write a Boto script in Python. But if the tool came with those complex workflows already designed in as features, we wouldn't have to script them; it would be more reliable, easier, and simpler, and nobody would need to bang their head against a DSL to get the functionality.
Nearly everyone on AWS who has wanted to try Fargate could use the following function: "Build me an ECS cluster, create everything I need for it (a VPC, security groups, etc), create an ECR registry, upload this Docker container to the registry, create me a Fargate service with tasks for the thing I uploaded to ECR, create one ALB for the cluster, add target group forwarding rules for each Fargate service in the cluster, and manage everything in Route53, including domain and certs".
Now, you could spend hours/days trying to set up Terraform or Ansible to do all of that, or script it with Bash or Python in a few hours. Or you could run something like "aws ecs user-story fargate-apps --cluster-name foo --service-container docker-image:latest --service-name some-img-name:latest --service-url https://my.custom.domain/some-img-name/ ". No DSL. No plugins. No cobbling together 100 lego pieces and reading 200 pages of documentation. No scripting. It's a very common user story, and the AWS CLI already has literally all of the functionality you would need.
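Even without that hypothetical subcommand, a fragment of the story in boto3 shows how mechanical it is - these are real boto3 calls, but every name, subnet, and task definition below is a placeholder:

    import boto3

    ecs = boto3.client("ecs")
    ecr = boto3.client("ecr")

    repo = ecr.create_repository(repositoryName="some-img-name")
    print("push target:", repo["repository"]["repositoryUri"])

    ecs.create_cluster(clusterName="foo")
    ecs.create_service(
        cluster="foo",
        serviceName="some-img-name",
        taskDefinition="some-img-task",  # assumes a registered task definition
        desiredCount=1,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder
                "assignPublicIp": "ENABLED",
            }
        },
    )

The VPC, ALB, target group, and Route53 steps are more of the same; none of it is hard, it's just dozens of calls someone has to sequence.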
But that won't happen, because the IT industry is purposely designed to be unnecessarily inefficient, complex, and expensive. If a tool like the AWS CLI or Terraform bundled this user story natively, without you having to "compose" it like a DSL, the companies that produce them wouldn't make as much money, and they could incur more cost maintaining it. An open-source community could support it, but it'd mostly be written by engineers at private companies, and companies are pathologically terrified of releasing any intellectual property without lawyers and contracts and CLAs and so on.
Literally half of the reason my job exists at all is that nobody has yet bothered to release the cobbled-together lego components of an enterprise organization into the public domain.
> But that won't happen, because the IT industry is purposely designed to be unnecessarily inefficient, complex, and expensive.
As a fellow developer, I feel your frustration. There have always been UX problems across the various pieces of first-party software you use to access AWS services (console, SDK, etc.).
But as a developer at AWS (my opinions are my own), I think I disagree? I'll write out a stream of consciousness; it might be wrong. It comes down to at least a couple of things:
1) Good UX is hard, especially for newer services that are still learning customer usage and building foundational features. There's no test I can write to know I did it well now and for the future.
2) We have limited resources and tend to focus on building (hopefully) good APIs and features (and operating the services). Good UX requires investing a lot of time and effort, I think.
I don't think I've ever heard anyone say we shouldn't focus on providing a better UX, but most of the time we want to get through the mountain of features that our customers need. Sometimes an org grows large enough, has been around long enough to understand its customers, and has the right leadership that it can invest in building out UX improvements, whether that's console integrations, a custom CLI, or whatever else. In the meantime, there are individuals/groups/companies that build abstractions - though I also don't think I've ever heard leadership say we should rely on third parties to make our services usable. Sometimes we adopt improvements, or partner with companies, or commit developer time to help maintain a project.
The tricky thing with first-party software is that once something goes in, it's supported forever-ish and is difficult to deprecate, so you have to be very deliberate. The wrong abstraction ends up being costly. We get a ton of value just by being able to auto-generate the CLI from the same models we use to define our service API, and the AWS CLI is pretty well suited for that. Though some teams do provide custom CLIs bundled in, I think.
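That model-driven approach is easy to picture. A toy Python sketch of generating CLI subcommands from an API model (the model format here is invented; AWS's real service definitions are much richer):

    import argparse

    MODEL = {
        "create-cluster": {"clusterName": str},
        "delete-cluster": {"cluster": str},
    }

    parser = argparse.ArgumentParser(prog="toycli")
    sub = parser.add_subparsers(dest="op", required=True)
    for op, params in MODEL.items():
        p = sub.add_parser(op)
        for name, typ in params.items():
            p.add_argument("--" + name, type=typ, required=True)

    args = parser.parse_args(["create-cluster", "--clusterName", "demo"])
    print(vars(args))  # a real CLI would dispatch this to a generated API client

Add an operation to the model and the CLI grows a subcommand for free, which is why hand-curated abstractions on top are a harder sell.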
Though in a more specific sense, I totally agree that the experience of "I have a Docker image. Run it for me." should be easy and I'm glad we have people doing something about it.
A declarative configuration that specifies deployment state, using a powerful language like Cue[1], and a reconciliation mechanism - written in a real programming language - that ensures that real state matches declared state (like k8s).
The Ansible/Puppet/Chef/... approach of establishing and then haphazardly poking at highly stateful servers is fundamentally flawed. Top-tier systems engineering organizations like Google figured this out a decade ago and moved to mostly stateless deployments that can be redeployed and reconfigured with confidence. There's always some state you can't get rid of, but it must be minimized to reduce the "entropy tax".
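The reconciliation half is conceptually small. A minimal Python sketch (the observe/create/destroy plumbing is hypothetical):

    # Declared state, e.g. parsed from a Cue file.
    desired = {"web": 3, "worker": 2}

    def observe() -> dict:
        # Stand-in for querying the platform for what's actually running.
        return {"web": 2, "worker": 2, "cron": 1}

    def reconcile(desired: dict, actual: dict) -> None:
        for name, want in desired.items():
            have = actual.get(name, 0)
            if have < want:
                print(f"create {want - have} x {name}")
            elif have > want:
                print(f"destroy {have - want} x {name}")
        for name in actual.keys() - desired.keys():
            print(f"destroy all {name} (not declared)")

    # k8s runs this in a loop forever; one pass shown here.
    reconcile(desired, observe())

The hard engineering is in observing state accurately and converging safely, but the control loop itself stays this simple.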
Replace "language" with "standard interface/protocol", and replace "programming language based reconciliation method" with "standard interfaces for managing infrastructure and network service components", and I agree.
"One-size-and-tool-fits-all" solutions will never work for everyone. But if you standardize the operations, and the language for communicating operations in between components, and then let anyone implement any one of these parts in a platform agnostic way - including kernels, SDNs, hypervisors, storage, processing, etc - then you have platform independent ways of managing state.
At that point state will be important but trivial, the way state in a TCP connection is trivial. We care what the state is, but every component in the network can interact with the state in a standard way, to the point that nobody tears their hair out about "oh no the network protocols are sending state everywhere!!!" If any tool can read and understand the same state in the same way, any tool can manage it (meaning multiple tools managed by multiple people). That's how network protocols work; let's just extend the model into general computing.
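Concretely, the proposal amounts to something like this - entirely hypothetical, since no such standard exists today:

    from abc import ABC, abstractmethod

    class StateStore(ABC):
        # The interface any compliant tool speaks, the way TCP stacks speak TCP.
        @abstractmethod
        def read(self, resource_id: str) -> dict: ...

        @abstractmethod
        def apply(self, resource_id: str, desired: dict) -> None: ...

    class HypervisorState(StateStore):
        # One vendor's implementation; SDNs, storage, etc. would have their own.
        def read(self, resource_id: str) -> dict:
            return {"cpus": 2, "running": True}

        def apply(self, resource_id: str, desired: dict) -> None:
            print(f"converge {resource_id} -> {desired}")

    # Any tool that talks StateStore can manage any component implementing it.
    HypervisorState().apply("vm-42", {"cpus": 4, "running": True})

The interface is the standard; the implementations compete, just like network stacks.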
I believe the format/DSL of Homebrew would be excellent for this purpose. In a certain sense, you could actually use it as is: take the formulas you want, add specific configuration, and deployment is just cloning your repository and running install for all your formulas.
Each installation runs in a couple of steps, each of which has a default but can also be configured individually. It has lots of helpers for adding/replacing in configuration files, for testing, for temporary files and cleanup, for patching, and for specifying options. It’s got dependency management built in, etc.
And if (when!) you need more flexibility, you don't have to learn about the insanity that is a for-loop in YAML; you can use a language that you either already know or should have no trouble understanding: Ruby.
Make packages that describe the state of individual app configurations, version your config files, and describe the server's state as a set of packages.
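A rough sketch of the shape (real Homebrew formulas are Ruby classes; this Python analogue and all its names are hypothetical):

    class Formula:
        depends_on: list = []
        def fetch(self): pass              # default step; override if needed
        def install(self): raise NotImplementedError
        def test(self): pass

    class MyApp(Formula):
        depends_on = ["nginx", "postgresql"]
        url = "https://example.com/myapp-1.2.tar.gz"  # placeholder

        def install(self):
            print("unpack, write config files, enable the service")

        def test(self):
            print("hit localhost and check for a 200")

    # "Deployment" is then: clone your formula repo and install everything.
    for formula in [MyApp()]:
        formula.fetch(); formula.install(); formula.test()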
Ansible made sense when people were physically provisioning things; no one (sane) runs apps on physically provisioned servers now, except people deploying cloud platforms.