At least for SBCs: I’ve bought a few Orange Pi RV2s and R2s to use as builder nodes, and in some cases they are slower than the same builds running in QEMU with buildx, or just plain QEMU.
There’s more to Docker Desktop than just “oh it’s just docker underneath”:
1. Unified experience across Windows, Mac, Linux
2. The security posture is much stronger by default. Many people who would probably be considered the “target audience” for Docker Desktop don’t bother to make docker-ce rootless and don’t use Podman, so running Docker in a VM is better, though admittedly often annoying.
3. Not everybody is a CLI warrior. Docker Desktop gives a decent GUI, ways to monitor and control containers visually, and even a way to deploy Kubernetes with a single click.
“Inferior technology stack”? Didn’t I just read a few days ago about pf queues only now breaking 4 Gbps? Look me up; I’ve written a lot about high-speed networking.
How are those containers working out for you? Have you heard about these things called VMs? Which I moved on from like 8 years ago?
Not to mention ole Theo likes to alienate you folks at every possible opportunity, even when it doesn’t matter to the core philosophy of OpenBSD.
I mean, you do you, but at least demonstrate an ounce of intellectual integrity about it.
Containers are a joke compared to Plan 9 namespaces, and Docker just solves a problem GNU/Linux created for itself with its zillions of incompatible distros.
FreeBSD has jails, and Docker is laughable by comparison: with FreeBSD you just install the compatNx libraries and everything from version 4 and up will run as is.
And in any case, you can set up a jail with these libraries and everything will run in a much more secure way than Docker’s defaults.
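For anyone curious, a rough sketch of what that setup looks like — the compat package name depends on which FreeBSD release the old binary targets, and the jail name, path, and IP below are all made up:

```
# Install the legacy ABI libraries (misc/compat*x ports; compat9x is
# just an example -- pick the one matching the binary's original release)
pkg install -y compat9x-amd64

# /etc/jail.conf -- minimal jail entry for the legacy app
legacy {
    path = "/usr/local/jails/legacy";
    host.hostname = "legacy.example.org";
    ip4.addr = "192.0.2.10";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```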
Seriously, can't you see that Docker is a problem written as a solution to another problem?
Kinda like NPM + Yarn + $package-manager-of-the-day solving the problems that the whole ecosystem and its so-called solutions create, twice over. Wake up.
Not GP, but I'm running OpenBSD on a laptop, not in a datacenter. I have a small Alpine VM that I often forget about. I also have Debian 12 on a Mac Mini, and while it runs systemd, it could be OpenRC for all I care.
I can see a case for systemd on a server, but I have never seen the point on a user-facing distro.
No kidding. Using cert-manager with my DNS on Cloudflare or GKE is about the easiest, most mindless, zero-friction Let’s Encrypt setup I’ve ever used.
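For reference, the whole thing boils down to roughly one cert-manager ClusterIssuer like this — the email, issuer name, and Secret names here are hypothetical stand-ins, but the DNS-01 Cloudflare solver follows cert-manager’s documented shape:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com              # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-account-key       # ACME account key Secret
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:            # Cloudflare API token stored
              name: cloudflare-api-token  # in a plain Kubernetes Secret
              key: api-token
```

After that, any Certificate (or Ingress annotation) referencing the issuer gets validated via DNS-01 with no HTTP reachability required.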
The author here repeatedly claims that teams would function identically on Swarm and are wasting resources using Kubernetes.
You don’t even need to be a mid-sized team to need stuff like RBAC, service mesh, multi-cluster networking, etc.
Claiming that Kubernetes only “won” because of economic pressure is true only in the most basic sense, and dismissing it as a resume padder is flat-out insulting to its actual technical merits.
The multi-tenant nature and innate capabilities are partly economics, but operators, extensibility, and platform portability across different environments are actual technical merits.
Claiming that autoscaling is optional and not required for most production environments is at best myopic.
It also greatly undersells the operational complexity that autoscaling actually solves versus a reactive script based solely on CPU: metrics pipelines, cluster-level resource constraints, and pod disruption budgets.
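To make that concrete, here is a minimal sketch of what replaces the CPU script — a HorizontalPodAutoscaler plus a PodDisruptionBudget (the Deployment name, labels, and thresholds are made up for illustration):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                  # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
---
apiVersion: policy/v1
kind: PodDisruptionBudget      # keeps voluntary evictions (node drains,
metadata:                      # upgrades) from dropping below 1 replica
  name: web
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: web
```

The HPA consumes the metrics pipeline and respects cluster-level resource constraints; the PDB coordinates with voluntary disruptions — exactly the pieces a hand-rolled CPU script ignores.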
As far as the repeated claim that it just “works” goes: great. Not working is more a function of the application than of the platform.
I dunno, this whole article frames kubernetes as a massive overhead and monolithic beast rather than the programmable infrastructure that it is.
It also tries to minimize many real-world needs like multi-team isolation, extensibility, and ecosystem integrations.
> I dunno, this whole article frames kubernetes as a massive overhead
The author describes his context as a setup with two $83/year VPS instances: a scale so incredibly minuscule compared to typical deployments that any of his arguments against one of the core cloud technologies fall flat.