Hacker News | k_roy's comments

Which still means a single person with Claude can clear a queue in a day versus a month with a traditional team.

Your example must have incredible users or really trivial software.

Same experience here.

At least for SBCs: I've bought a few Orange Pi RV2s and R2s to use as builder nodes, and in some cases they're slower than the same workload running in QEMU with buildx, or just plain QEMU.
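For reference, the QEMU-backed cross-build being compared is roughly this — a hedged sketch; the image tag and the `linux/riscv64` platform are illustrative:

```shell
# One-time setup: register QEMU binfmt handlers and create a buildx builder
docker run --privileged --rm tonistiigi/binfmt --install all
docker buildx create --name cross --use

# Cross-build an image for a RISC-V SBC from an x86 host
docker buildx build --platform linux/riscv64 -t myimage:riscv64 --load .
```

Each RUN step executes under qemu-user emulation on the build host, which is often still faster than a low-end SBC's native CPU and I/O.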


If anything, this would be more of a way to act as a command and control server

There’s more to Docker Desktop than “it’s just docker underneath”:

1. A unified experience across Windows, macOS, and Linux.

2. The security posture is much stronger by default. Much of Docker Desktop’s target audience never sets up rootless docker-ce and doesn’t use podman, so running the daemon in a VM is the safer default, even if it’s admittedly often annoying.

3. Not everybody is a CLI warrior. Docker Desktop gives you a decent GUI to monitor and control containers visually, and can even deploy Kubernetes with a single click.
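On point 2, the rootless alternatives look roughly like this — a sketch per the upstream docs; package names and paths vary by distro:

```shell
# Rootless docker-ce: the daemon and all containers run under your own UID
# (setup script ships with the docker-ce-rootless-extras package)
dockerd-rootless-setuptool.sh install
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
docker run --rm hello-world

# Or podman, which is rootless by default and needs no daemon at all
podman run --rm docker.io/library/hello-world
```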


Trying to act superior with your oft-broken OS.

“Inferior technology stack”. Didn’t I read just a few days ago that pf queues only now broke 4Gbps? Look me up, I’ve written a lot about high speed networking.

How are those containers working out for you? Have you heard about these things called VMs? Which I moved on from like 8 years ago?

Not to mention ol’ Theo likes to alienate you folks at every possible opportunity, even when it doesn’t matter to the core philosophy of OpenBSD.

I mean, you do you, but at least demonstrate an ounce of intellectual integrity about it.


Containers are a joke compared to Plan 9 namespaces, and Docker just papers over a GNU/Linux problem: the zillions of incompatible distros.

FreeBSD has jails, and Docker is laughable by comparison: on FreeBSD you just install the compatNx libraries and everything from version 4 and up runs as-is.

And in any case, you can set up a jail with those libraries and everything will run far more securely than Docker’s defaults.
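The jail setup being described is roughly this — a sketch; the path, address, and compat package name are illustrative:

```
# /etc/jail.conf
legacy {
    path = "/usr/local/jails/legacy";
    host.hostname = "legacy.example.org";
    ip4.addr = "192.168.1.50";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

Then something like `service jail start legacy` followed by `pkg -j legacy install compat9x-amd64` pulls the compatibility libraries into the jail so old binaries run isolated from the host.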

Seriously, can't you see that Docker is a problem written as a solution to another problem?

Kinda like NPM + Yarn + the $package-manager-of-the-day, solving problems that the ecosystem and its so-called solutions create twice over. Wake up.


Not GP, but I'm running OpenBSD on a laptop, not in a datacenter. I have a small Alpine VM that I often forget about. I also have Debian 12 on a Mac Mini, and while it runs systemd, it could be OpenRC for all I care.

I can see a case for systemd on a server, but I have never seen the point on a user-facing distro.


> I have a small Alpine VM that I often forget about

“vmm” is a toy compared to kvm/libvirt.

> I also have Debian 12 on a Mac Mini, and while it runs systemd, it could be OpenRC for all I care.

I assume Intel? I haven’t paid attention to Linux on Macs in a long time. But I love Devuan for this reason.


I’m not even arguing against systemd or not.

I’m just stating that Linux being technologically inferior because of something-something corporate overlords is… silly


> Especially historically it was pretty hard to get the JVM to release memory back to the OS.

This feels like a huge understatement. I still have some PTSD from when I did Java professionally, roughly 2005 to 2014.

The early part of that was particularly horrible.
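For context, the knobs that eventually made this tolerable look roughly like this — a hedged sketch; heap sizes are illustrative, and the periodic-GC flags require G1 on JDK 12+ (JEP 346):

```shell
# Classic HotSpot tuning: without the free-ratio flags, a heap that grew
# under load would rarely shrink back toward -Xms
java -Xms256m -Xmx2g \
     -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 \
     -jar app.jar

# JDK 12+ G1 can proactively run a GC when idle and return heap to the OS
java -XX:+UseG1GC -XX:G1PeriodicGCInterval=60000 -jar app.jar
```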


Reminds me of a time many years ago when I received a whole case of Intel NICs all with the same MAC address.

It was an interesting couple of days before we figured it out.


How does that happen? Was it an OEM bulk kind of deal where you were expected to write a new MAC for each NIC when deploying them?
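For anyone who hits the same thing, the symptom is easy to spot — a sketch, assuming Linux with iproute2:

```shell
# Print any MAC address shared by more than one local interface
ip -o link | grep -o 'link/ether [0-9a-f:]*' | awk '{print $2}' | sort | uniq -d
```

A common stopgap is writing locally administered addresses over the duplicates (`ip link set dev eth0 address 02:...` — the 02 prefix marks a locally administered MAC) until the NICs can be reprogrammed properly.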


No kidding. Using cert-manager with my DNS on cloudflare or GKE is about the easiest and most mindless and zero-friction LE implementation I’ve ever used.
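For the curious, the entire setup is about this much YAML — a sketch; the issuer name, email, and token secret are illustrative, and it assumes a Cloudflare API token already stored as a Secret:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token
```

After that, annotating an Ingress (or creating a Certificate resource) gets certificates issued and renewed with no further intervention.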


The author mostly lost me when he started doing comparative line counts between docker swarm and kubernetes.

And the docker swarm example didn’t even accomplish the same thing.


The author here repeatedly claims that teams would function identically on Swarm and are wasting resources using Kubernetes.

You don’t even need to be a mid-sized team to need stuff like RBAC, service mesh, multi-cluster networking, etc.

Claiming that kubernetes only “won” because of economic pressure is only true in the most basic sense, and claiming it’s a resume padder is flat out insulting to its actual technical merits.

The multi-tenant nature and built-in capabilities are partly about economics, but operators, extensibility, and platform portability across different environments are genuine technical merits.

Claiming that autoscaling is optional and not required for most production environments is at best myopic.

It also greatly undersells the operational complexity that autoscaling actually solves compared with a reactive script keyed solely to CPU: metrics pipelines, cluster-level resource constraints, and pod disruption budgets.
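To make that concrete, here is roughly what "autoscaling done right" looks like declaratively — a sketch; the names, labels, and thresholds are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
---
# Keeps voluntary disruptions (node drains, upgrades) from taking
# the service below a floor while the autoscaler is shuffling pods
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: web
```

A CPU-watching shell script gives you none of the coordination between scaling, scheduling, and disruption handling that these two objects encode.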

As far as the repeated claim that it just “works”, great. Not working is more a function of the application than the platform.

I dunno, this whole article frames kubernetes as a massive overhead and monolithic beast rather than the programmable infrastructure that it is.

It also tries to minimize many real world needs like multi-team isolation, extensibility, and ecosystem integrations.


> I dunno, this whole article frames kubernetes as a massive overhead

The author describes his context as a setup with two $83/year VPS instances — a scale so minuscule compared to typical deployments that any of his arguments against one of the core cloud technologies fall flat.

Of course he doesn't need Kubernetes. It's fine.


